ruabmbua / hidapi-rs

Rust bindings for the hidapi C library
MIT License

Accessing a device from different threads #78

Closed RudolfVonKrugstein closed 2 years ago

RudolfVonKrugstein commented 2 years ago

I want to read from a device to detect button presses, but also write to the device at the same time. To accomplish this, I create one thread for reading and one thread for writing.

But both threads have to access the same device. So my idea was to just open the device twice:

use std::thread;

fn main() {
    let hidapi = hidapi::HidApi::new().unwrap();
    let vendor_id = 0x0fd9;
    let product_id = 0x60;

    // Open the same device twice: one handle for writing, one for reading.
    let write_device = hidapi.open(vendor_id, product_id).unwrap();
    let read_device = hidapi.open(vendor_id, product_id).unwrap();

    let t = thread::spawn(move || {
        let mut inbuffer = vec![0u8; 15];
        loop {
            read_device.read(&mut inbuffer).unwrap();
        }
    });

    write_device.send_feature_report(&[0u8; 32]).unwrap();
    t.join().unwrap();
}

On Windows, this works! On Linux, I get a HidError(HidApiError { message: "hid_error is not implemented yet" }) when opening the device a second time.

So I am wondering, should I do something different? Can I share the device in some other way?

ruabmbua commented 2 years ago

Sure you can. Whether opening it several times works heavily depends on how the backend handles that, so it might not be portable. You can, however, just share a single handle across several threads with an Arc<Mutex<_>>.
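
For reference, a minimal sketch of the Arc<Mutex<_>> approach might look like this (assuming HidDevice is Send in the hidapi version you are using, reusing the vendor/product IDs from the example above, and with error handling omitted):

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let hidapi = hidapi::HidApi::new().unwrap();
    let device = Arc::new(Mutex::new(hidapi.open(0x0fd9, 0x60).unwrap()));

    let reader = Arc::clone(&device);
    let t = thread::spawn(move || {
        let mut buf = vec![0u8; 15];
        loop {
            // Use a short timeout and drop the lock between iterations,
            // otherwise a blocking read would starve any writer.
            let n = reader.lock().unwrap().read_timeout(&mut buf, 100).unwrap();
            if n > 0 {
                println!("read {} bytes", n);
            }
        }
    });

    device.lock().unwrap().send_feature_report(&[0u8; 32]).unwrap();
    t.join().unwrap();
}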

You can also try to get it working without threads: you can set the device to non-blocking mode. When you read from the device in this mode, it just polls for new data and returns immediately with zero bytes read in case nothing was received. Constructing an event loop around that may solve your problem.
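
A rough sketch of that single-threaded variant (again with the same hypothetical IDs and no error handling; the sleep is only there to avoid a busy loop):

fn main() {
    let hidapi = hidapi::HidApi::new().unwrap();
    let device = hidapi.open(0x0fd9, 0x60).unwrap();

    // Switch to non-blocking mode: read() now returns Ok(0) right away
    // when no report has arrived.
    device.set_blocking_mode(false).unwrap();

    let mut buf = vec![0u8; 15];
    loop {
        let n = device.read(&mut buf).unwrap();
        if n > 0 {
            // A report arrived; handle buf[..n] (e.g. a button event).
        }

        // Perform any pending writes here, e.g.
        // device.send_feature_report(&[0u8; 32]).unwrap();

        // Small sleep so the loop does not spin at 100% CPU.
        std::thread::sleep(std::time::Duration::from_millis(10));
    }
}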

RudolfVonKrugstein commented 2 years ago

Thanks for your answer and your time! I actually want to read from and write to the device at the same time: waiting for a button event while also writing to the device. That is not possible with a Mutex (as far as I understand it). And using non-blocking polling would mean I need to write a loop which basically does something like this:

loop {
  poll();
}

Which looks very inefficient to me (but I am not an experienced Rust programmer, I might be wrong).

What I tried instead is putting the HidDevice into another type and doing a:

unsafe impl Sync for WrapperType {}

And then I share it using an Arc<WrapperType>. It seems to work, but I wonder: am I doing something dangerous here? Is this code going to segfault on me under certain conditions?
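
For context, the wrapper boils down to something like this sketch (WrapperType is just an illustrative name, and it assumes HidDevice is already Send in the hidapi version in use):

use std::sync::Arc;
use std::thread;

use hidapi::HidDevice;

// The wrapper's only purpose is to assert that the device may be shared
// between threads; whether that is actually sound is exactly my question.
struct WrapperType(HidDevice);

unsafe impl Sync for WrapperType {}

fn main() {
    let hidapi = hidapi::HidApi::new().unwrap();
    let device = Arc::new(WrapperType(hidapi.open(0x0fd9, 0x60).unwrap()));

    let reader = Arc::clone(&device);
    let t = thread::spawn(move || {
        let mut buf = vec![0u8; 15];
        loop {
            reader.0.read(&mut buf).unwrap();
        }
    });

    device.0.send_feature_report(&[0u8; 32]).unwrap();
    t.join().unwrap();
}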

amenophis commented 2 years ago

Hello @RudolfVonKrugstein, it seems you are working with streamdeck devices; I'm working on them too! Did you find a solution to read keys in a loop from a thread?

ruabmbua commented 2 years ago

One suggestion: create a communication thread that owns the HidDevice and read from it in a loop there. After every loop iteration, also check a queue / channel for pending write operations.

Pseudo rust:

use std::sync::mpsc::{Receiver, Sender};

use hidapi::HidDevice;

fn comm_thread(device: HidDevice, writer_chan: Receiver<Vec<u8>>, event_chan: Sender<Vec<u8>>) {
    let mut recvbuf = [0u8; 1024];
    loop {
        // Forward any queued write request to the device.
        if let Ok(write_request) = writer_chan.try_recv() {
            device.write(&write_request).unwrap();
        }
        // Poll for input with a timeout, so queued writes are not starved.
        if let Ok(nbytes) = device.read_timeout(&mut recvbuf, 100) {
            event_chan.send(recvbuf[0..nbytes].to_vec()).unwrap();
        }
    }
}

fn main() {
    let dev = ...; // open the HidDevice here
    let (thread_events, event_chan) = channel();
    let (writer_chan, thread_writer) = channel();
    std::thread::spawn(move || comm_thread(dev, thread_writer, thread_events));

    // Now you can use event_chan and writer_chan to read events from the device
    // and write to it. Because writer_chan is probably an mpsc Sender, you can
    // clone it to multiple threads that want to write. Reading has to be done
    // from one thread.
}

Obviously this will not compile as-is, and it's missing error handling and a possible exit scenario.

amenophis commented 2 years ago

Thanks for the suggestion! I didn't think about channels.

I will try it and let you know.

ruabmbua commented 2 years ago

Closed because of inactivity. @amenophis you can of course still reply if you want!