ruabmbua / hidapi-rs

Rust bindings for the hidapi C library
MIT License

Assertion / Illegal instruction on macOS when calling `HidApi::new` in different threads #127

Open Tiwalun opened 12 months ago

Tiwalun commented 12 months ago

I've run into an issue on macOS where calling HidApi::new from different threads causes the process to abort with an illegal-instruction exception.

Looking at the stack trace, it seems to be caused by a CFAssertMismatchedTypeID assertion somewhere in Core Foundation.

Are there any special precautions that have to be taken when using hidapi from multiple threads? I couldn't find anything in the documentation.

Stacktrace

* thread #5, name = 'try 1', stop reason = EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
  * frame #0: 0x00007ff80cf83f3e CoreFoundation`_CFAssertMismatchedTypeID + 110
    frame #1: 0x00007ff80ce254bd CoreFoundation`CFRunLoopAddSource + 973
    frame #2: 0x00007ff80f76e305 IOKit`IOHIDDeviceScheduleWithRunLoop + 74
    frame #3: 0x00007ff80f772651 IOKit`__IOHIDManagerDeviceApplier + 527
    frame #4: 0x00007ff80f735537 IOKit`__IOHIDManagerDeviceAdded + 766
    frame #5: 0x00007ff80f73513b IOKit`__IOHIDManagerSetDeviceMatching + 347
    frame #6: 0x0000000100010724 hidapi-repro`hid_enumerate(vendor_id=0, product_id=0) at hid.c:654:2
    frame #7: 0x000000010000ce03 hidapi-repro`hidapi::hidapi::HidApiBackend::get_hid_device_info_vector::hdb756fded0153d2d at hidapi.rs:25:36
    frame #8: 0x000000010000dbde hidapi-repro`hidapi::HidApi::new::h471133b553ec121f at lib.rs:171:27
    frame #9: 0x0000000100006141 hidapi-repro`hidapi_repro::enumerate::h2a623970c0b3f1ae at main.rs:16:13

Repo with code to reproduce: https://github.com/tiwalun/hidapi-repro

ruabmbua commented 12 months ago

Hi! Because the underlying C library is not very well defined with regard to multithreading, we used to prevent creating a second instance of a HidApi object by acquiring a lock. However, people started requesting that I lift this restriction, and testing on the backends we could test showed that as long as you don't open a device from multiple threads, everything works out fine.

Unfortunately, we may not have tested on macOS (due to nobody owning a Mac). I recommend looking at the hidapi C library to check whether it is supposed to be usable from multiple threads on macOS.

By the way, it could well be that the problem only occurs because you are not on the main thread. I have some experience from the distant past with macOS not allowing certain things (I/O and GPU related) on a normally spawned thread.

If it really cannot work, I will probably add a macOS-only check that disallows creating more than one API instance.
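A minimal sketch of what such a guard could look like, assuming a process-wide atomic flag consulted when creating the API on macOS; the names here are hypothetical and this is not current hidapi-rs behavior:

use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical macOS-only guard: refuse to create a second HidApi instance.
#[cfg(target_os = "macos")]
static API_IN_USE: AtomicBool = AtomicBool::new(false);

#[cfg(target_os = "macos")]
fn acquire_single_instance_guard() -> Result<(), &'static str> {
    // `swap` returns the previous value; `true` means an instance already exists.
    if API_IN_USE.swap(true, Ordering::SeqCst) {
        Err("only one HidApi instance may exist at a time on macOS")
    } else {
        Ok(())
    }
}

#[cfg(target_os = "macos")]
fn release_single_instance_guard() {
    // Would be called when the API instance is dropped.
    API_IN_USE.store(false, Ordering::SeqCst);
}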

Tiwalun commented 11 months ago

Unfortunately, running everything on the main thread is not possible for my use case.

I've had a look at the library and found some existing upstream issues which might be related.

I've tried calling hid_exit from the underlying library to reset its global state, and that helped for the simple case in the reproducer I shared. Unfortunately, it doesn't help in the actual application :(.
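For reference, a hedged sketch of calling hid_exit over FFI, since hidapi-rs does not expose it; this assumes the C hidapi symbols are linked into the binary (as they are when the crate builds the bundled C library) and that no HidApi or device handle is still in use:

use std::os::raw::c_int;

// Declaration matching the C API: `int hid_exit(void);`
extern "C" {
    fn hid_exit() -> c_int;
}

fn reset_hidapi_global_state() {
    // Safety: hidapi's global state (including the macOS IOHIDManager) is torn
    // down here, so nothing may be using hidapi concurrently.
    unsafe {
        hid_exit();
    }
}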

The workaround I have for now is to spawn a dedicated thread for hidapi, run HidApi::new() there, and keep that thread alive for the whole duration of the process. With that, it seems possible to use hidapi from other threads without issues afterwards :man_shrugging:.
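A minimal sketch of that workaround, with a hypothetical helper name and a channel added so callers can wait for the initial HidApi to exist before touching hidapi from other threads:

use std::sync::mpsc;

fn spawn_hidapi_keepalive_thread() {
    let (ready_tx, ready_rx) = mpsc::channel();
    std::thread::spawn(move || {
        // Create the first HidApi instance on this dedicated thread and never
        // drop it, so the global state it set up stays valid.
        let _hid = hidapi::HidApi::new().expect("failed to initialize hidapi");
        ready_tx.send(()).ok();
        // Keep the thread alive for the rest of the process.
        loop {
            std::thread::park();
        }
    });
    // Block until the keep-alive thread has finished initializing hidapi.
    ready_rx
        .recv()
        .expect("hidapi keep-alive thread exited unexpectedly");
}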

My current assumption is that this line https://github.com/libusb/hidapi/blob/d0856c05cecbb1522c24fd2f1ed1e144b001f349/mac/hid.c#L443 associates the IOHIDManager with the run loop of the thread that calls hid_init. Because of this, that thread has to stay alive for as long as hidapi is used; otherwise the run loop no longer exists, and the assertion is triggered.

I'll hopefully find some time to work on a proper fix for this, either directly in hidapi or by adding a pure Rust backend to hidapi-rs...

micolous commented 10 months ago

That's correct: IOHIDManager needs to stay alive and associated with an active CFRunLoop while any IOHIDDevice instance it created is still open or in use.

When IOHIDManager is unscheduled, it also unschedules all of its IOHIDDevices; when it is closed, it closes all of its devices; and when it is released, it releases all of its devices. Unlike most Core Foundation objects, there's no reference counting here: IOHIDManager simply tears everything down once it's done, and there's no way to control that.

This is much easier to manage from Rust (#30), where you could have an Arc-wrapped IOHIDManager referenced by every Device instance.
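A rough sketch of that ownership pattern for a hypothetical native backend; the types are placeholders, not the actual hidapi-rs or #30 API:

use std::sync::Arc;

// Placeholder wrapper owning the raw IOHIDManagerRef and its run loop handle.
struct ManagerInner {
    /* raw IOHIDManagerRef, CFRunLoop, ... */
}

impl Drop for ManagerInner {
    fn drop(&mut self) {
        // Unschedule, close and release the IOHIDManager only once the last
        // device holding an Arc to it has been dropped.
    }
}

// Each open device keeps the manager alive through an Arc.
struct Device {
    _manager: Arc<ManagerInner>,
    /* raw IOHIDDeviceRef, ... */
}

struct HidApi {
    manager: Arc<ManagerInner>,
}

impl HidApi {
    fn open_device(&self) -> Device {
        Device {
            _manager: Arc::clone(&self.manager),
        }
    }
}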

andresv commented 5 months ago

Stumbled on the same issue when using hidapi 2.4.1. In my application, a new thread is created each time somebody wants to do something with the device.

* thread #12, stop reason = EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
  * frame #0: 0x00007ff80705540e CoreFoundation`_CFAssertMismatchedTypeID + 110
    frame #1: 0x00007ff806eddba5 CoreFoundation`CFRunLoopAddSource + 973
    frame #2: 0x00007ff809e7c38d IOKit`IOHIDDeviceScheduleWithRunLoop + 89
    frame #3: 0x00007ff809e80b7e IOKit`__IOHIDManagerDeviceApplier + 555
    frame #4: 0x00007ff809e44179 IOKit`__IOHIDManagerDeviceAdded + 810
    frame #5: 0x00007ff809e43d59 IOKit`__IOHIDManagerSetDeviceMatching + 378
    frame #6: 0x00007ff809e43b83 IOKit`IOHIDManagerSetDeviceMatchingMultiple + 255
    frame #7: 0x0000000100ba9e04 um6dbench`hid_enumerate(vendor_id=0, product_id=0) at hid.c:733:2
    frame #8: 0x0000000100ba4e93 um6dbench`hidapi::hidapi::HidApiBackend::get_hid_device_info_vector::h1a0ac8ae93d5a3eb at hidapi.rs:25:36
    frame #9: 0x0000000100ba204e um6dbench`hidapi::HidApi::new::h43ba44533b9d7c18 at lib.rs:171:27
    frame #10: 0x00000001003fe7d3 um6dbench`probe_rs::probe::cmsisdap::tools::list_cmsisdap_devices::h9510f0612ea21ccc at tools.rs:31:22
    frame #11: 0x0000000100512305 um6dbench`probe_rs::probe::list::AllProbesLister::list_all::hc932401596101843 at list.rs:126:24
    frame #12: 0x00000001005122b8 um6dbench`_$LT$probe_rs..probe..list..AllProbesLister$u20$as$u20$probe_rs..probe..list..ProbeLister$GT$::list_all::hb4edfc444d6a8d14(self=0x0000000000000001) at list.rs:71:9
    frame #13: 0x0000000100512250 um6dbench`probe_rs::probe::list::Lister::list_all::h7c0fbc27f9e18c8f(self=0x000070000f3709a0) at list.rs:41:9

So, as @Tiwalun suggested, I added another thread to main.rs that just holds a HidApi instance, and after that I don't see the SIGILL anymore.

std::thread::spawn(|| {
    // Hold one HidApi instance on a dedicated thread for the lifetime of the
    // process so hidapi's global state is never torn down.
    let _hid = hidapi::HidApi::new();
    loop {
        std::thread::sleep(std::time::Duration::from_secs(3600));
    }
});