Closed nickmonad closed 1 year ago
I tried again to remove the sleep, but some relay threads remain open (so shutdown isn't complete). I'll look for a way to avoid the sleep; otherwise, a solution could be to move the sleep into a thread so it doesn't block the UI.
I moved the sleep inside a thread. Check whether it works better now.
Hey @yukibtc
Yes, it does make the shutdown process feel faster, but I think it ultimately still leaves resources / relays in a bad state when the program terminates.
Here's an example of what I'm working with,
use nostr_sdk::prelude::*;
use nostr_sdk::Timestamp;
use tokio::sync::mpsc::{self, channel};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    env_logger::init();

    let keys = Keys::generate();
    let client = Client::new(&keys);
    client.add_relay("wss://relay.damus.io/", None).await?;
    client.add_relay("wss://relay.nostr.band/", None).await?;
    client
        .add_relay("wss://nostr-pub.wellorder.net/", None)
        .await?;
    client.connect().await;

    // filter for all new messages and reposts... will be noisy!
    let filter = Filter::new()
        .kinds(vec![Kind::TextNote])
        .since(Timestamp::now());
    client.subscribe(vec![filter]).await;
    let mut events = client.notifications();

    // setup quit handling
    let (quit, mut ctrl_c) = channel(1);
    tokio::spawn(signal(quit));

    // event loop
    loop {
        tokio::select! {
            _ = ctrl_c.recv() => {
                log::warn!("\n[shutting down!]");
                client.shutdown().await.unwrap();
                break;
            }
            e = events.recv() => {
                match e {
                    Ok(notification) => {
                        match notification {
                            RelayPoolNotification::Event(relay, e) => {
                                log::info!("[event!] {:?} {:?}", relay.to_string(), e.id);
                            },
                            _ => {}
                        }
                    },
                    Err(e) => log::error!("[error] {e:#?}"),
                }
            }
        }
    }

    Ok(())
}

async fn signal(quit: mpsc::Sender<()>) {
    tokio::signal::ctrl_c().await.unwrap();
    quit.send(()).await.unwrap();
}
And the logs from RUST_LOG=info cargo run
[2023-04-27T15:02:05Z INFO nostr_sdk::relay] Connected to wss://relay.damus.io/
[2023-04-27T15:02:06Z INFO nostr_sdk::relay] Connected to wss://nostr-pub.wellorder.net/
[2023-04-27T15:02:06Z INFO nostr_sdk::relay] Connected to wss://relay.nostr.band/
[2023-04-27T15:02:06Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x8b68fed5a8be6097da1bd12d095aa7994557aa44170a189072e2f3d90c8709f3)
[2023-04-27T15:02:06Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0xdac13803b2d5c6e0a0652f0e288e683e885842f50a92f32de8e2fa4e11aca13f)
[2023-04-27T15:02:08Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x7c47950a1c9a6fe13429be6b68d115aba951b2679953cb07ab7309614db8da03)
[2023-04-27T15:02:09Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x7c639d2bb4d2f2d06c3edf4bfce55b0dead3c19941fb956e74ffef8c5834b001)
[2023-04-27T15:02:10Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x601875d1607dfc8d0ec2a4bd30fa187411aad19a6bb76205b14bc0bdafe22986)
[2023-04-27T15:02:12Z INFO roostr] [event!] "wss://nostr-pub.wellorder.net/" EventId(0x2237e7da3ff1c56d3b26c413beacab952b60fa17a0fbbdf492fd66b1bb86443c)
[2023-04-27T15:02:12Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x020207bc83d44dfb283a4286b14fbe19a86a7179d52ccdf1c971e0fdb00230ea)
[2023-04-27T15:02:12Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x4f6257df1ca5c3d95e74f22aca3f5db5ef1cb20830c925890cb05eefbc95b009)
[2023-04-27T15:02:13Z INFO roostr] [event!] "wss://relay.damus.io/" EventId(0x77b621b1b2ea24c58b94c321c646ca9f6ebdcb9aa5b82b53f4d57607318ebb86)
^C[2023-04-27T15:02:13Z WARN roostr]
[shutting down!]
Now there's effectively a race condition between shutting down and the relay resource being disconnected properly. So sometimes, you will see things like,
[2023-04-27T15:04:38Z INFO nostr_sdk::relay] Completely disconnected from wss://relay.damus.io/
But not always. This implies to me that relays won't always be disconnected from cleanly.
Do I need to wait for a relay pool message after calling client.shutdown()?
The shutdown method sends a message to the relay threads telling them to exit and not reconnect, but sometimes that takes a little more time.
I intended this method for apps where you have, for example, a logout button to change accounts, to avoid keeping relay threads running.
For CLI apps, or for your example, the shutdown method call is probably not needed.
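The message-plus-acknowledgment pattern described above can be sketched with standard-library channels. This is a generic illustration, not nostr-sdk's internals (the worker loop and the shutdown_all function are hypothetical): each worker polls for a shutdown message, does its cleanup, and sends an acknowledgment back, so the caller waits exactly as long as cleanup takes instead of a fixed interval.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Spawn `n` hypothetical "relay" workers, then shut them down by
/// messaging each one and waiting for an acknowledgment, instead of
/// sleeping for a fixed interval. Returns how many workers acked.
fn shutdown_all(n: usize) -> usize {
    let (ack_tx, ack_rx) = mpsc::channel();
    let mut shutdown_txs = Vec::new();
    let mut handles = Vec::new();

    for id in 0..n {
        let (tx, rx) = mpsc::channel::<()>();
        let ack = ack_tx.clone();
        shutdown_txs.push(tx);
        handles.push(thread::spawn(move || {
            loop {
                // Poll for a shutdown message between units of work.
                match rx.recv_timeout(Duration::from_millis(50)) {
                    Ok(()) | Err(mpsc::RecvTimeoutError::Disconnected) => break,
                    Err(mpsc::RecvTimeoutError::Timeout) => { /* do relay work */ }
                }
            }
            // Confirm cleanup is done before the thread exits.
            let _ = ack.send(id);
        }));
    }
    drop(ack_tx); // only the workers hold senders now

    // Ask every worker to stop, then count acknowledgments; the
    // iterator ends once every worker has dropped its sender.
    for tx in &shutdown_txs {
        let _ = tx.send(());
    }
    let acked = ack_rx.iter().count();

    for h in handles {
        h.join().unwrap();
    }
    acked
}

fn main() {
    println!("{} workers acknowledged shutdown", shutdown_all(3));
}
```

With this shape, a GUI app's logout button returns as soon as the last relay thread confirms, and a CLI exits promptly.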
That's fair. Feel free to close this issue!
I noticed that while running a Client from the nostr-sdk crate and calling client.shutdown() when a signal is received, there is a delay in the shutdown process. Looking at the code, I saw the thread::sleep for 3 seconds here: https://github.com/rust-nostr/nostr/blob/master/crates/nostr-sdk/src/relay/pool.rs#L489

Why is that? If it's to wait for resource cleanup elsewhere, is there a possible path to wire that up using channels? It's not a huge issue, just creates a bit of an odd UX when closing a terminal-based client. Ideally, when the user is ready to quit the client, it exits as soon as possible.
I'd be happy to look into improving this, but I wanted to make sure I understand the motivation for it in the first place.
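One way the fixed sleep could be replaced along the lines the issue suggests is a bounded wait on an acknowledgment channel: return as soon as every relay confirms its cleanup, but never wait longer than a deadline. A minimal std-only sketch of that idea (wait_for_acks and the channel wiring are hypothetical names, not nostr-sdk's API):

```rust
use std::sync::mpsc;
use std::time::{Duration, Instant};

/// Wait for `expected` acknowledgments on `rx`, but give up once
/// `deadline` has elapsed. Returns the number of acks received.
fn wait_for_acks(rx: &mpsc::Receiver<()>, expected: usize, deadline: Duration) -> usize {
    let start = Instant::now();
    let mut got = 0;
    while got < expected {
        // Shrink the timeout by however long we've already waited.
        let remaining = match deadline.checked_sub(start.elapsed()) {
            Some(r) => r,
            None => break, // deadline passed
        };
        match rx.recv_timeout(remaining) {
            Ok(()) => got += 1,
            Err(_) => break, // timed out, or all senders dropped
        }
    }
    got
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Simulate three relay tasks acking their cleanup promptly.
    for _ in 0..3 {
        tx.send(()).unwrap();
    }
    let acked = wait_for_acks(&rx, 3, Duration::from_secs(3));
    println!("{acked}/3 acks before deadline");
}
```

This keeps the 3 seconds as a worst-case bound rather than a guaranteed delay: fast relays make shutdown immediate, and a stuck relay can't hang it forever.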