Closed: mytecor closed this issue 1 year ago.
That's normal. By calling hid_scan twice in parallel, you'll try to borrow the RefCell at https://github.com/mytecor/nucon/blob/329c38cf22bbe64bf8cbe0a53db42f20c79fd3ab/src-tauri/src/hid_wrapper.rs#L30 twice. As you can see in the stack trace, this is a crash in your code, not in refresh_devices.
You'll want to figure out a solution for synchronization, like using a Mutex around HidApi. I don't know if that will block the UI thread in Tauri or not, but it won't be worse than it is now.
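A minimal sketch of that direction, assuming a process-wide Mutex instead of the thread_local RefCell. The FakeHidApi stand-in and the hid() accessor are made up for illustration (the real code would store an actual hidapi::HidApi); the point is that parallel callers block on the lock instead of panicking with BorrowMutError:

```rust
use std::sync::{Mutex, OnceLock};

// Stand-in for hidapi::HidApi, so the sketch runs without the crate.
struct FakeHidApi {
    scans: u32,
}

impl FakeHidApi {
    fn refresh_devices(&mut self) {
        self.scans += 1;
    }
}

// One process-wide instance behind a Mutex, created lazily exactly once.
fn hid() -> &'static Mutex<FakeHidApi> {
    static HID: OnceLock<Mutex<FakeHidApi>> = OnceLock::new();
    HID.get_or_init(|| Mutex::new(FakeHidApi { scans: 0 }))
}

fn hid_scan() -> u32 {
    // Concurrent callers serialize here instead of hitting a double borrow.
    let mut api = hid().lock().unwrap();
    api.refresh_devices();
    api.scans
}

fn main() {
    let handles: Vec<_> = (0..4).map(|_| std::thread::spawn(hid_scan)).collect();
    for h in handles {
        h.join().unwrap();
    }
    println!("total scans: {}", hid().lock().unwrap().scans);
}
```

Whether lock() blocking the Tauri UI thread is acceptable depends on how long refresh_devices takes; an async-friendly lock would be the next step if it matters.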
I can't reproduce this behaviour on Linux. Also, this RefCell is thread-local, so multiple borrows are possible only if the closure is called twice on one thread without dropping the RefCell guard.
The thread ids are the same in all calls to hid_scan, and Tauri uses a mutex on the plugin to call it. Without hid_enumerate it works fine, even with sleep(...) instead of refresh_devices.
I can confirm, they run sequentially on Linux. And you're right, if they ran on separate threads, they'd get different instances. On the other hand, it clearly crashes when your code calls borrow_mut, so it's pretty strange. Do you also get the same thread id on macOS?
I think storing it in a thread local in the first place is a bad idea, since accessing it will potentially initialize the HidApi in multiple threads, but it should only be created once per process.
use std::{cell::RefCell, thread};
use hidapi::HidApi;

thread_local! {
    pub static HID: RefCell<HidApi> = HidApi::new_without_enumerate().unwrap().into();
}

fn main() {
    HID.with(|cell| {
        cell.borrow_mut().refresh_devices().unwrap();
    });
    thread::spawn(|| {
        HID.with(|cell| {
            cell.borrow_mut().refresh_devices().unwrap();
        });
    })
    .join()
    .unwrap();
}
This will crash on the second HID.with() in the other thread.
And regarding the RefCell unwrap error, I do not think it is caused by hidapi. The RefCell HID has nothing to do with hidapi itself. The fact that you cannot reproduce it with sleep() does not guarantee that your code is race-free; often it's hard or straight up impossible to get the same race with slightly different code.
What would help a bit is printfs at the start and end of the function with a timestamp + thread id.
Is there any chance tauri::command runs the function on a different stack, e.g. with green threads, coroutines, or the stack of an interpreter? In that case the thread local would be even more useless, since Rust's thread API does not know about such things.
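The printf suggestion above could look something like this (the function name and log format are made up for illustration; the real refresh_devices work would replace the comment in the middle):

```rust
use std::thread;
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical instrumentation: log a timestamp and the thread id at the
// start and end of hid_scan, so overlapping calls, or calls arriving on an
// unexpected thread, show up directly in the output.
fn traced_hid_scan() -> thread::ThreadId {
    let id = thread::current().id();
    let t0 = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    eprintln!("[{}.{:09}] {:?} hid_scan enter", t0.as_secs(), t0.subsec_nanos(), id);

    // ... the real refresh_devices() / enumeration work would go here ...

    let t1 = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    eprintln!("[{}.{:09}] {:?} hid_scan exit", t1.as_secs(), t1.subsec_nanos(), id);
    id
}

fn main() {
    let main_id = traced_hid_scan();
    let other_id = thread::spawn(traced_hid_scan).join().unwrap();
    // If all calls really ran on one OS thread, these ids would be equal;
    // here they differ because we explicitly spawned a second thread.
    assert_ne!(main_id, other_id);
}
```

If two "enter" lines appear with no "exit" in between, the calls overlap; if the thread ids ever differ, the thread_local hands out separate HidApi instances.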
Do you also get the same thread id on macOS?
Yep, on macOS it prints thread_id: 1 every time.
Afaik hidapi will panic on init if such a thread local is used from different threads.
Do you also get the same thread id on macOS?
Yep, the thread id is the same
After CFRunLoopRunInMode, hid_scan gets called a second time; it seems WebKit uses the run loop for callbacks. So it's not a hidapi bug, I guess.
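That re-entrancy is exactly the single-thread failure mode a RefCell guards against, and it can be demonstrated without hidapi or a run loop. A minimal sketch (the u32 stands in for the HidApi; the "callback" is simulated inline):

```rust
use std::cell::RefCell;

thread_local! {
    // Stand-in state; the real code keeps a HidApi here.
    static HID: RefCell<u32> = RefCell::new(0);
}

// If a run-loop callback re-enters hid_scan on the SAME thread while the
// outer borrow guard is still alive, the second borrow fails, which is the
// `already borrowed: BorrowMutError` panic from the backtrace.
fn reentrant_borrow_fails() -> bool {
    HID.with(|cell| {
        let _guard = cell.borrow_mut(); // the outer hid_scan holds the borrow
        // A CFRunLoop callback calling borrow_mut() here would panic;
        // try_borrow_mut() lets us observe the failure without panicking.
        cell.try_borrow_mut().is_err()
    })
}

fn main() {
    assert!(reentrant_borrow_fails());
    // Once the outer guard is dropped, borrowing works again.
    HID.with(|cell| assert!(cell.try_borrow_mut().is_ok()));
    println!("re-entrant borrow on the same thread fails, as expected");
}
```

This matches the earlier observation: identical thread ids on every call, yet a double borrow, because the second call is nested inside the first rather than racing with it.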
So I guess we can close the issue?
Hi, I'm trying to use this crate in my Tauri app. When I call invoke('hid_scan') in parallel from App.tsx, I get:
thread 'main' panicked at 'already borrowed: BorrowMutError', src/hid_wrapper.rs:30:28
Reproducing only on Mac.
RUST_BACKTRACE=full: https://gist.github.com/mytecor/89f92d9024bf4ce264371dfb4b08bdf5