Kill watchers on app close. #7
Conversation
❌ Changes requested. Reviewed everything up to 2d69626 in 2 minutes and 52 seconds.

More details:
- Looked at 167 lines of code in 3 files.
- Skipped 0 files when reviewing.
- Skipped posting 0 drafted comments based on config settings.

Workflow ID: wflow_YhM6VmBEyVxbZXt4
src-tauri/src/main.rs
Outdated
```rust
let mut manager_state = manager_state.lock().unwrap();
manager_state.alive = false;
drop(manager_state);
sleep(Duration::from_millis(500)); //TODO: revisit this
```
Using `sleep` to delay application exit can lead to a poor user experience. Consider using a more responsive approach to ensure all processes are terminated before exiting, such as checking the status of child processes or using event-driven signaling.
I have a few ideas for getting rid of this... Having a variable that tracks the number of running watchers. It's a bit tricky to implement, as I have to drop the mutex so that it can be acquired by the start_watcher threads, then reacquire it and wait till the count is zero before proceeding. Will try it tomorrow!
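A minimal sketch of that counter idea, assuming a plain `Mutex<usize>` paired with a `Condvar` (all names here are illustrative, not the PR's actual code): the main thread blocks until the count hits zero instead of sleeping for a fixed 500 ms.

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    // Shared running-watcher count plus a Condvar to wake the waiter.
    let pair = Arc::new((Mutex::new(0usize), Condvar::new()));

    for i in 0..3u64 {
        let pair = Arc::clone(&pair);
        *pair.0.lock().unwrap() += 1; // count this watcher as running
        thread::spawn(move || {
            // ... watcher work would go here ...
            thread::sleep(Duration::from_millis(100 * i));

            // On exit, decrement the count and wake the waiting thread.
            let (count, cvar) = (&pair.0, &pair.1);
            *count.lock().unwrap() -= 1;
            cvar.notify_all();
        });
    }

    // Shutdown path: wait until every watcher has reported in.
    let (count, cvar) = (&pair.0, &pair.1);
    let mut running = count.lock().unwrap();
    while *running > 0 {
        running = cvar.wait(running).unwrap();
    }
    println!("all watchers stopped, safe to exit");
}
```

`Condvar::wait` releases the mutex while blocked, so the watcher threads can still acquire it to decrement the count, which is exactly the drop/reacquire dance described above.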
Handles like a dream... I'm so impressed how it came together!
src-tauri/src/manager.rs
Outdated
```rust
}

impl ManagerState {
    fn new() -> ManagerState {
        ManagerState {
            watchers_running: HashMap::new(),
            watcher_count: 0,
            alive: true,
```
Unused: not quite true, but I don't think it should be needed.
They are used later. `watcher_count` keeps track of the number of running watchers, and `alive` tracks whether the parent process is ready to exit. I will refactor to discard `watcher_count`.
src-tauri/src/manager.rs
Outdated
```rust
}

impl ManagerState {
    fn new() -> ManagerState {
        ManagerState {
            watchers_running: HashMap::new(),
            watcher_count: 0,
```
Can be computed by counting the number of `true` entries in `watchers_running`. You could write it as a helper function on `ManagerState`.
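A sketch of such a helper, assuming `watchers_running` maps watcher names to a running flag (`HashMap<String, bool>`); the field layout and the `running_count` name are assumptions, not the PR's actual code:

```rust
use std::collections::HashMap;

struct ManagerState {
    watchers_running: HashMap<String, bool>,
    alive: bool,
}

impl ManagerState {
    // Derive the count on demand instead of maintaining a separate
    // watcher_count field that can drift out of sync with the map.
    fn running_count(&self) -> usize {
        self.watchers_running.values().filter(|&&running| running).count()
    }
}

fn main() {
    let mut state = ManagerState { watchers_running: HashMap::new(), alive: true };
    state.watchers_running.insert("watcher-a".into(), true);
    state.watchers_running.insert("watcher-b".into(), false);
    assert_eq!(state.running_count(), 1);
    assert!(state.alive); // read during shutdown in the real code
}
```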
src-tauri/src/manager.rs
Outdated
```rust
name: name.to_string(),
output: std::process::Output {
    status,
    stdout: Vec::new(),
```
This discards the stdout/stderr from the process, which is a regression.
src-tauri/src/manager.rs
Outdated
```rust
    output,
})
.unwrap();
loop {
```
This loop doesn't look clean to me. What are you looping over? When does the looping end?
This loops for the entirety of the program, checking whether the child process is still alive. If it is not, it sends the output to the handle (there was a regression here due to me fighting the borrow checker) and breaks from the loop. The other arm of the match is executed on every iteration while the process is alive: it checks whether the `alive` variable is still true, and if not, kills the process and exits the loop.
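For readers following along, here is a compilable sketch of the loop as described. It is simplified: it sends only the exit status (the real code forwards the full process output), and `State` and the channel wiring are stand-ins.

```rust
use std::process::{Child, Command, ExitStatus};
use std::sync::mpsc::{channel, Sender};
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

struct State {
    alive: bool,
}

enum WatcherMessage {
    Stopped { name: String, status: ExitStatus },
}

// Poll the child with try_wait(): report its exit to the manager,
// or kill it once `alive` goes false.
fn watch(mut child: Child, name: String, tx: Sender<WatcherMessage>, state: Arc<Mutex<State>>) {
    loop {
        match child.try_wait() {
            // Child exited on its own: report it and stop looping.
            Ok(Some(status)) => {
                tx.send(WatcherMessage::Stopped { name, status }).ok();
                break;
            }
            // Child still running: kill it if the app is shutting down.
            Ok(None) => {
                if !state.lock().unwrap().alive {
                    println!("Killing {name}");
                    child.kill().ok();
                }
            }
            Err(e) => {
                eprintln!("error polling {name}: {e}");
                break;
            }
        }
        thread::sleep(Duration::from_millis(100));
    }
}

fn main() {
    let (tx, rx) = channel();
    let state = Arc::new(Mutex::new(State { alive: true }));
    let child = Command::new("sleep").arg("1").spawn().unwrap();
    let handle = thread::spawn({
        let state = Arc::clone(&state);
        move || watch(child, "sleep".into(), tx, state)
    });
    match rx.recv().unwrap() {
        WatcherMessage::Stopped { name, status } => println!("{name} exited with {status}"),
    }
    handle.join().unwrap();
}
```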
src-tauri/src/manager.rs
Outdated
```rust
let mut state = state.lock().unwrap();
if !state.alive {
    println!("Killing {name}");
    child.as_mut().unwrap().kill().unwrap();
```
That's a lot of unwraps. These need to be handled.
Will do!
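One possible shape for that cleanup, sketched with assumed types (`child` as an `Option<Child>`, a simple `alive` flag, and a hypothetical `check_shutdown` wrapper): recover a poisoned lock and log a failed kill instead of panicking.

```rust
use std::process::{Child, Command};
use std::sync::{Arc, Mutex};

struct State {
    alive: bool,
}

fn check_shutdown(state: &Arc<Mutex<State>>, child: &mut Option<Child>, name: &str) {
    // A poisoned mutex only means another thread panicked while holding it;
    // for a plain flag like `alive`, recovering the inner data is safe.
    let state = match state.lock() {
        Ok(guard) => guard,
        Err(poisoned) => poisoned.into_inner(),
    };
    if !state.alive {
        println!("Killing {name}");
        if let Some(child) = child.as_mut() {
            if let Err(e) = child.kill() {
                eprintln!("failed to kill {name}: {e}");
            }
        }
    }
}

fn main() {
    let state = Arc::new(Mutex::new(State { alive: false }));
    let mut child = Command::new("sleep").arg("60").spawn().ok();
    check_shutdown(&state, &mut child, "sleep");
}
```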
src-tauri/src/manager.rs
Outdated
```diff
  thread::spawn(move || {
      // Start the child process
      let path = name;
      let args = ["--testing", "--port", "5699"];
-     let child = Command::new(path)
+     let mut child = Command::new(path)
```
Why does this need to be `mut`?
This was me fighting with the borrow checker. It wouldn't work without mut
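For context: `Child::kill`, `Child::try_wait`, and `Child::wait` all take `&mut self` in std, so the binding must be `mut` even though it is never reassigned. A tiny illustration:

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // kill() and try_wait() are fn(&mut self) on std::process::Child,
    // so `mut` is required here even without any reassignment.
    let mut child = Command::new("sleep").arg("60").spawn()?;
    child.kill()?; // fn kill(&mut self) -> io::Result<()>
    child.wait()?; // reap the killed process so it doesn't linger as a zombie
    Ok(())
}
```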
src-tauri/src/manager.rs
Outdated
```rust
// Wait for the child to exit
let output = child
    .unwrap()
    .wait_with_output()
    .expect("failed to wait on child");

// Send the process output to the manager
tx.send(WatcherMessage::Stopped {
    name: name.to_string(),
    output,
})
```
Just an idea: maybe we could keep this, since it's pretty clean, and instead return the process PID in `WatcherMessage::Started` to be stored in `ManagerState`, which is then stopped by sending a SIGTERM to the PID from the main thread? Might remove the need for polling the state, and make `start_watcher` not need the `state` parameter.
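A rough Unix-only sketch of that idea, using the `nix` crate that the later commits add as a dependency (the `stop_watchers` helper and the PID list are illustrative, not the PR's code):

```rust
use nix::sys::signal::{kill, Signal};
use nix::unistd::Pid;
use std::process::Command;

// Ask each recorded watcher PID to shut down cleanly. SIGTERM lets the
// process run its own cleanup handlers, unlike SIGKILL.
fn stop_watchers(pids: &[u32]) {
    for &pid in pids {
        if let Err(e) = kill(Pid::from_raw(pid as i32), Signal::SIGTERM) {
            eprintln!("failed to signal {pid}: {e}");
        }
    }
}

fn main() {
    // Stand-in for a watcher process whose PID was stored in ManagerState.
    let child = Command::new("sleep").arg("60").spawn().unwrap();
    stop_watchers(&[child.id()]);
}
```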
This would greatly simplify everything. Let me get started on it and see!
On the off chance that the process exits and it's not reported to `start_manager`, sending a SIGTERM could potentially kill a random process (one that reused the stale PID). "The chances are near zero", Oppenheimer reference. I think the reporting is near-realtime, but there could be a chance? I still don't think it's really an issue, just a random thought.
Done. Your way is so much cleaner.
libc is not natively supported on Windows, it seems.
It builds successfully. I haven't tested it on Windows yet! Hopefully it works.
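If Windows does need its own path, one common shape is to gate the signalling per platform. A hedged sketch assuming the `nix` and `winapi` dependencies from the commit list (`winapi` needs the `processthreadsapi`, `winnt`, and `handleapi` features enabled in Cargo.toml; the `terminate` helper and the spawned `sleep` command are illustrative):

```rust
// Unix: deliver SIGTERM via nix so watchers can clean up gracefully.
#[cfg(unix)]
fn terminate(pid: u32) {
    use nix::sys::signal::{kill, Signal};
    use nix::unistd::Pid;
    let _ = kill(Pid::from_raw(pid as i32), Signal::SIGTERM);
}

// Windows: there is no SIGTERM; TerminateProcess is the usual fallback.
#[cfg(windows)]
fn terminate(pid: u32) {
    use winapi::um::handleapi::CloseHandle;
    use winapi::um::processthreadsapi::{OpenProcess, TerminateProcess};
    use winapi::um::winnt::PROCESS_TERMINATE;
    unsafe {
        let handle = OpenProcess(PROCESS_TERMINATE, 0, pid);
        if !handle.is_null() {
            TerminateProcess(handle, 1);
            CloseHandle(handle);
        }
    }
}

fn main() {
    // "sleep" is just a stand-in child process for the example.
    if let Ok(child) = std::process::Command::new("sleep").arg("60").spawn() {
        terminate(child.id());
    }
}
```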
This is indeed a lot cleaner, nice!
* chore: cargo fmt
* chore: Update Cargo.lock and Cargo.toml with nix and winapi
* refactor: Update manager.rs to send SIGTERM to watchers on app exit
* chore: bug fix
* feat: bug fix
This took longer than I wished it would. I learnt so much about how Rust handles concurrency. I experimented with using channels, but that wouldn't work because Rust does not allow multiple receivers on the same channel. I came up with this solution. It does not use a condition variable, though; it could be improved. I'll revisit it later when I'm better at Rust concurrency.
Summary:
This PR ensures that all watcher processes are terminated when the application is closed by introducing a new `alive` flag in `ManagerState` and checking this flag in the watcher's loop.

Key points:
- Added `alive` flag to `ManagerState` to track if the app is closing.
- Modified `start_watcher` in `manager.rs` to terminate watchers if `alive` is false.
- Set `alive` to false in `on_tray_event` in `main.rs` when 'quit' is clicked.
- `build.rs`.

Generated with ❤️ by ellipsis.dev