yamadapc closed this 3 months ago
Hey there!
I've just tried this on Sonoma 14.3.1 and was not able to reproduce the issue. What version of zeroconf are you on? There was an issue with components not being disposed of correctly, causing segfaults, but I believe it has been fixed.
Please let me know if you are still experiencing this on the latest release, which is 0.14.1. If you are, please include your system information and any other relevant details. Thank you!
I'm going to go ahead and close this. Please feel free to re-open if you feel your issue has not been resolved.
Hey, sorry for the delay.
I'm seeing this issue on Sonoma 14.2.1.
It happens on the latest version of zeroconf, 0.14.1.
Thank you for bringing this to my attention. I will look into this more deeply ASAP; alternatively, please feel free to try to diagnose the issue yourself if you're so inclined.
Hey @yamadapc,
Weirdly, I still cannot reproduce this locally, despite the test clearly showing a segfault in GitHub Actions. I've tried this on two MacBook Pros now, and the test passes in both cases. Can we compare system details?
MacBook 1:
$ rustup show
info: syncing channel updates for 'stable-aarch64-apple-darwin'
info: checking for self-update
stable-aarch64-apple-darwin unchanged - rustc 1.77.1 (7cf61ebde 2024-03-27)
info: cleaning up downloads & tmp directories
$ uname -a
Darwin REMWZC02MBP.nyc.rr.com 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:25 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6030 arm64
MacBook 2:
$ uname -a
Darwin Walkers-MBP.nyc.rr.com 23.1.0 Darwin Kernel Version 23.1.0: Mon Oct 9 21:28:12 PDT 2023; root:xnu-10002.41.9~6/RELEASE_ARM64_T8103 arm64
$ rustup show
Default host: aarch64-apple-darwin
rustup home: /Users/walker/.rustup
installed toolchains
--------------------
stable-aarch64-apple-darwin (default)
nightly-aarch64-apple-darwin
active toolchain
----------------
stable-aarch64-apple-darwin (default)
rustc 1.77.1 (7cf61ebde 2024-03-27)
If you have some free time to investigate, I would really appreciate it, since I cannot reproduce this locally. Thanks.
Hey, for sure; let me try to investigate and contribute back. It's not a big deal since there's a workaround; it would just be nice to track.
I'm going to close this for now, please feel free to re-open if this is still an issue for you.
The following safe code will cause a segmentation fault on the Bonjour backend:
The segfault happens when accessing `BonjourServiceContext::registered_callback`:
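As background, segfaults in this kind of design typically arise when an FFI callback context is dropped while the C side still holds a pointer to it. Below is a minimal, self-contained sketch of that pattern; the `ServiceContext` struct, `trampoline` function, and message string are illustrative stand-ins, not the crate's actual code:

```rust
use std::os::raw::c_void;
use std::sync::mpsc;

// Hypothetical context mirroring the rough shape of a service context:
// the C layer only ever sees it as an opaque *mut c_void.
struct ServiceContext {
    registered_callback: Option<Box<dyn Fn(&str)>>,
}

// Stand-in for the C callback trampoline: the mDNS layer hands the opaque
// pointer back and we dereference it. If the Box behind this pointer was
// already dropped, this dereference is a use-after-free, which is the
// kind of bug that produces a segfault like the one reported here.
unsafe extern "C" fn trampoline(ctx: *mut c_void) {
    let ctx = &*(ctx as *const ServiceContext);
    if let Some(cb) = ctx.registered_callback.as_ref() {
        cb("service registered");
    }
}

fn run() -> String {
    let (tx, rx) = mpsc::channel();
    let ctx = Box::new(ServiceContext {
        registered_callback: Some(Box::new(move |msg: &str| {
            tx.send(msg.to_string()).unwrap();
        })),
    });
    // Leak the Box into a raw pointer so it stays alive while the C side
    // holds it; dropping the context before this point would leave the
    // trampoline with a dangling pointer.
    let raw = Box::into_raw(ctx) as *mut c_void;
    unsafe { trampoline(raw) };
    // Reclaim and drop the context only after the callback can no longer fire.
    unsafe { drop(Box::from_raw(raw as *mut ServiceContext)) };
    rx.recv().unwrap()
}

fn main() {
    println!("{}", run());
}
```

The correct ordering (leak before registering, reclaim only after unregistering) is the invariant that, when violated, yields a crash at exactly this access.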