weihanglo / sfz

A simple static file serving command-line tool written in Rust.
https://crates.io/crates/sfz
Apache License 2.0

Better integration #62

Closed · sayanarijit closed this 1 year ago

sayanarijit commented 3 years ago

Hi, this is a nice tool. I'd like to integrate it with xplr, but currently I have to use two workarounds: one to discover the machine's IP to share, and one to quit the server.

So, for better integration, is there any way we can print the actual system IP and map quit to some other key?

weihanglo commented 3 years ago

Thanks for your appreciation!

weihanglo commented 3 years ago

By the way, xplr is very cool! Looking forward to trying it in my daily workflow 😀

sayanarijit commented 3 years ago

Hey, thanks.

As for the IP, I did a quick search but also failed to find a reliable solution. So I looked into the qrcp code and found this hack. (With this workaround, sfz doesn't have to do anything itself.) xplr will run sfz with the selected IP:

ADDR=$(ip addr | grep -w inet | cut -d/ -f1 | grep -Eo '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' | fzf --prompt 'Select IP > ')
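
For reference, if I read the qrcp source right, the hack is to "dial" a UDP socket toward some routable address and inspect the local address the OS picks; no packet is actually sent. A rough Rust equivalent (just a sketch; the 8.8.8.8:80 target is arbitrary) would be:

use std::net::{IpAddr, UdpSocket};

// "Connecting" a UDP socket sends nothing, but makes the OS choose the
// local interface address it would use for that route.
fn local_ip() -> std::io::Result<IpAddr> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.connect("8.8.8.8:80")?; // arbitrary routable address; no traffic
    Ok(socket.local_addr()?.ip())
}

fn main() -> std::io::Result<()> {
    println!("{}", local_ip()?);
    Ok(())
}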

For the exit behavior, I understand it would add unnecessary complexity to the simple server. I was hoping there could be some easier way than reading key inputs through another crate. I think it'd be a nice project idea to create a command/process manager that acts as a proxy between the user's inputs and another interactive program.

Something like

proxycli sfz --mapkey q:ctrl-c
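
As a rough sketch, such a proxy could spawn the wrapped command and translate a mapped key into a signal. The snippet below is hypothetical (proxycli doesn't exist); it hard-codes the q → Ctrl-C mapping, assumes Unix, and uses the libc crate to deliver SIGINT:

use std::io::Read;
use std::process::Command;

fn main() {
    // Spawn the wrapped command; it keeps serving as a child process.
    let mut child = Command::new("sfz").spawn().expect("failed to spawn sfz");
    let pid = child.id() as libc::pid_t;

    // Watch the user's input one byte at a time. Note stdin is still
    // line-buffered here, so the mapped key only arrives after Enter.
    std::thread::spawn(move || {
        let mut byte = [0u8; 1];
        let mut stdin = std::io::stdin();
        while stdin.read_exact(&mut byte).is_ok() {
            if byte[0] == b'q' {
                // q → Ctrl-C: deliver SIGINT to the child.
                unsafe { libc::kill(pid, libc::SIGINT) };
                break;
            }
        }
    });

    child.wait().expect("failed to wait on child");
}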

weihanglo commented 3 years ago

For the keybinds: unfortunately, it seems that stdin on most OSs is blocking, which means we need to spawn another thread to read stdin. However, we cannot guarantee when Read::read will return, so anyone who wants to exit must send Ctrl-D (EOF) explicitly to flush the buffer. Is this the behavior you're looking for?

To receive key events precisely, we would need to handle tty stuff using crates such as ncurses or tui-rs. I am not familiar with this area, and it seems somewhat overkill to me.
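
For the record, a raw-mode version would look roughly like the sketch below, using the crossterm crate (which tui-rs builds on). Raw mode delivers each keypress immediately, without waiting for Enter. This is only a sketch of the idea, not something I plan to ship in sfz:

use crossterm::event::{read, Event, KeyCode};
use crossterm::terminal::{disable_raw_mode, enable_raw_mode};

fn main() -> std::io::Result<()> {
    // Raw mode: bytes reach us per keypress, bypassing line buffering.
    enable_raw_mode()?;
    loop {
        if let Event::Key(key) = read()? {
            if key.code == KeyCode::Char('q') {
                break; // 'q' arrives instantly; no Enter required
            }
        }
    }
    // Always restore the terminal before exiting.
    disable_raw_mode()
}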

BTW, below is the tiny bit of research I did.

// Sketch inside sfz's main.rs: `Args`, `matches`, `serve`, and `handle_err`
// are existing sfz items; this is research code, not a final patch.
#[tokio::main(flavor = "multi_thread")]
async fn main() {
    // Oneshot channel: the stdin task signals the server to shut down.
    let (tx, rx) = tokio::sync::oneshot::channel();
    // Blocking read inside an async task (spawn_blocking would arguably be
    // more idiomatic); this is why the multi_thread flavor is needed.
    let handle = tokio::spawn(async {
        use std::io::Read;
        let mut stdin = std::io::stdin();
        let mut ch = [0];
        // stdin is line-buffered: 'q' is only seen after Enter, and the
        // read only returns early on Ctrl-D (EOF).
        while stdin.read_exact(&mut ch).is_ok() {
            let c = ch[0] as char;
            if c == 'q' || c == 'Q' {
                tx.send(()).unwrap();
                break;
            }
        }
    });
    Args::parse(matches())
        .map(|args| async {
            // Serve until the oneshot fires, then take the server's result.
            tokio::join!(
                serve(args, async { rx.await.ok(); }),
                handle
            ).0
        })
        .unwrap_or_else(handle_err)
        .await
        .unwrap_or_else(handle_err);
}
sayanarijit commented 3 years ago

Great, I'll try it tomorrow, though I don't have much experience with tokio.

weihanglo commented 3 years ago

Full diff here, though IMO it's not elegant 😂 And actually I forgot to add an outer loop to re-read stdin, but anyway.

sayanarijit commented 3 years ago

Awesome, it works! But I think we need to inform users to use Ctrl-D for graceful shutdown. Also, I think we can remove the q handling, as it's not intuitive: since stdin is line-buffered, you have to press Enter after q, which took me some time to figure out.

weihanglo commented 3 years ago

v0.5.0 has just been released!

sayanarijit commented 3 years ago

Closing this.

weihanglo commented 3 years ago

Due to #66, the change will be reverted in the next release. Sorry for the inconvenience 😞

sayanarijit commented 3 years ago

Ah, OK. I'll try to find some other solution. Not sure how qrcp did it using Go.

EDIT: fixed the link.

weihanglo commented 3 years ago

It seems that qrcp does not support background job control either, but it is more of a one-shot utility, so that is more acceptable for it than for sfz. If you come up with a new solution, please share it here!

sayanarijit commented 3 years ago

Sorry the correct link is https://github.com/claudiodangelis/qrcp/issues/198.

weihanglo commented 1 year ago

Thank you for the PR. Unfortunately, I have no time to work on it. See #108.