ros2 / rmw_zenoh

RMW for ROS 2 using Zenoh as the middleware
Apache License 2.0

rmw_destroy_guard_condition segfault #257

Open CihatAltiparmak opened 1 month ago

CihatAltiparmak commented 1 month ago

Versions

rmw_zenoh : fcfed30ca7ba629a137bc48fab0502c71e515839

Description

Hello, I'm trying to benchmark middleware implementations such as rmw_zenoh, rmw_cyclonedds, and rmw_fastrtps. I have implemented some benchmark code comparing how the middlewares behave in a basic topic publish-subscribe scenario. With rmw_zenoh my benchmark exits with SIGSEGV, whereas with rmw_fastrtps it exits cleanly. Here is the backtrace. The problem seems to occur while trying to deallocate some data.

Backtrace:

(gdb) backtrace 
#0  0x0000000000000000 in ?? ()
#1  0x00007ffff6363aac in rmw_destroy_guard_condition (
    guard_condition=0x555555943e90)
    at /home/cihat/ws_moveit/src/rmw_zenoh/rmw_zenoh_cpp/src/rmw_zenoh.cpp:3282
#2  0x00007ffff7f253e0 in rcl_guard_condition_fini ()
   from /opt/ros/rolling/lib/librcl.so
#3  0x00007ffff7d28191 in rclcpp::GuardCondition::~GuardCondition() ()
   from /opt/ros/rolling/lib/librclcpp.so
#4  0x00007ffff7cf3bea in ?? () from /opt/ros/rolling/lib/librclcpp.so
#5  0x00007ffff7d02731 in rclcpp::Executor::~Executor() ()
   from /opt/ros/rolling/lib/librclcpp.so
#6  0x000055555578d473 in __gnu_cxx::new_allocator<rclcpp::executors::SingleThreadedExecutor>::destroy<rclcpp::executors::SingleThreadedExecutor> (
    this=0x555555947f20, __p=0x555555947f20)
    at /usr/include/c++/11/ext/new_allocator.h:168
#7  0x000055555578b3fd in std::allocator_traits<std::allocator<rclcpp::executors::SingleThreadedExecutor> >::destroy<rclcpp::executors::SingleThreadedExecutor>
    (__a=..., __p=0x555555947f20)
    at /usr/include/c++/11/bits/alloc_traits.h:535
#8  0x000055555578742f in std::_Sp_counted_ptr_inplace<rclcpp::executors::SingleThreadedExecutor, std::allocator<rclcpp::executors::SingleThreadedExecutor>, (__gnu_cxx::_Lock_policy)2>::_M_dispose (this=0x555555947f10)
    at /usr/include/c++/11/bits/shared_ptr_base.h:528
#9  0x000055555572e53d in std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release (this=0x555555947f10) at /usr/include/c++/11/bits/shared_ptr_base.h:168
#10 0x000055555572e2db in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::~__shared_count (this=0x5555558346d8, __in_chrg=<optimized out>)
    at /usr/include/c++/11/bits/shared_ptr_base.h:705
#11 0x0000555555735c62 in std::__shared_ptr<rclcpp::executors::SingleThreadedExecutor, (__gnu_cxx::_Lock_policy)2>::~__shared_ptr (this=0x5555558346d0, 
    __in_chrg=<optimized out>)
    at /usr/include/c++/11/bits/shared_ptr_base.h:1154
#12 0x0000555555735c82 in std::shared_ptr<rclcpp::executors::SingleThreadedExecutor>::~shared_ptr (this=0x5555558346d0, __in_chrg=<optimized out>)
    at /usr/include/c++/11/bits/shared_ptr.h:122
#13 0x0000555555735ce8 in moveit::middleware_benchmark::ScenarioBasicSubPubFixture::~ScenarioBasicSubPubFixture (this=0x5555558345f0, 
    __in_chrg=<optimized out>)
    at /home/cihat/ws_moveit/src/moveit2_stack/moveit_middleware_benchmark/include/moveit_middleware_benchmark/scenarios/scenario_basic_subscription.hpp:70
#14 0x0000555555783544 in moveit::middleware_benchmark::ScenarioBasicSubPubFixture_test_scenario_basic_sub_pub_Benchmark::~ScenarioBasicSubPubFixture_test_scenario_basic_sub_pub_Benchmark (this=0x5555558345f0, __in_chrg=<optimized out>)
    at /home/cihat/ws_moveit/src/moveit2_stack/moveit_middleware_benchmark/src/scenarios/scenario_basic_subscription.cpp:107
#15 0x0000555555783564 in moveit::middleware_benchmark::ScenarioBasicSubPubFixture_test_scenario_basic_sub_pub_Benchmark::~ScenarioBasicSubPubFixture_test_scenario_basic_sub_pub_Benchmark (this=0x5555558345f0, __in_chrg=<optimized out>)
    at /home/cihat/ws_moveit/src/moveit2_stack/moveit_middleware_benchmark/src/scenarios/scenario_basic_subscription.cpp:107
#16 0x00007ffff7f78daf in ?? () from /opt/ros/rolling/lib/libbenchmark.so.1
#17 0x00007ffff7445495 in __run_exit_handlers (status=0, 
    listp=0x7ffff761a838 <__exit_funcs>, 
    run_list_atexit=run_list_atexit@entry=true, run_dtors=run_dtors@entry=true)
    at ./stdlib/exit.c:113
#18 0x00007ffff7445610 in __GI_exit (status=<optimized out>)
    at ./stdlib/exit.c:143
#19 0x00007ffff7429d97 in __libc_start_call_main (
    main=main@entry=0x55555572de89 <main(int, char**)>, argc=argc@entry=13, 
    argv=argv@entry=0x7fffffffaa68)
    at ../sysdeps/nptl/libc_start_call_main.h:74
#20 0x00007ffff7429e40 in __libc_start_main_impl (
    main=0x55555572de89 <main(int, char**)>, argc=13, argv=0x7fffffffaa68, 
    init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, 
    stack_end=0x7fffffffaa58) at ../csu/libc-start.c:392
#21 0x000055555572ddc5 in _start ()

The line that crashes:

https://github.com/ros2/rmw_zenoh/blob/c12ff3ed2b73169ff0bd7a2a341f8911082fc1e9/rmw_zenoh_cpp/src/rmw_zenoh.cpp#L3282

How to reproduce

I actually work from a source installation, but you may want to try the Docker image below. Ping me if you run into trouble, because I haven't run this Docker image in a long time.

If you want to install from source, you can use this repo (https://github.com/CihatAltiparmak/moveit_middleware_benchmark/tree/feature/benchmark_simple_sub_pub_topics)

Pull this docker image

docker pull ghcr.io/cihataltiparmak/moveit_middleware_benchmark:latest

inside docker:

Terminal 1

cd ws_moveit/src/moveit_middleware_benchmark
git checkout feature/benchmark_simple_sub_pub_topics
cd ../../
colcon build --packages-select rmw_zenoh_cpp moveit_middleware_benchmark --cmake-args -DCMAKE_BUILD_TYPE=Debug
source /opt/ros/rolling/setup.bash
source install/setup.bash
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
# don't forget to run zenoh_router in Terminal 2
ros2 launch moveit_middleware_benchmark scenario_basic_subscription_benchmark.launch.py

Terminal 2

docker exec -it <your_container_id> bash
# inside docker
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
ros2 run rmw_zenoh_cpp rmw_zenohd

Additional information:

For debugging, it is better to replace the default scenario_basic_subscription_benchmark.launch.py with https://gist.githubusercontent.com/CihatAltiparmak/a22c4963223b88e49ff29fa47350bb57/raw/5686664dacc8a5c32babd14edf1d917f4b8c6068/scenario_basic_subscription_benchmark.launch.py, which decreases the benchmark repetitions from 6 to 2.

CihatAltiparmak commented 1 month ago

After some investigation, I found that guard_condition->context->options.allocator->deallocate is NULL, while rmw_zenoh calls allocator->deallocate(guard_condition->data, allocator->state) when shutting down the node.

(Screenshot from 2024-08-05 11-10-59 attached)

Yadunund commented 1 month ago

Thanks for reporting this issue. I will try to reproduce it. In general I think we need to update our implementation throughout to check that the allocator is valid using RCUTILS_CHECK_ALLOCATOR before attempting to use it.

CihatAltiparmak commented 1 month ago

Hello @Yadunund, I feel that adding RCUTILS_CHECK_ALLOCATOR to rmw_destroy_guard_condition is not enough (and maybe it should not be added), because I suspect there are still some data races, and these races end up writing to forbidden sections of memory. This is one of the backtrace logs; I have many. Some of them point to rmw_destroy_node and some are related to sub_data_handler. I have seen your PRs about thread safety; let's be patient. After they are merged, I will try rmw_zenoh with my code again and give feedback. I also found an error in my own code, but it is not triggered by the other middleware implementations (rmw_fastrtps and rmw_cyclonedds). The relevant part is here (executor_.reset() should be added). I will send the other backtrace logs after a more detailed look. Let's wait for your PRs.

[INFO] [1723137814.597346998] [benchmark_main]: Subscribing to topic : /benchmarked_topic1 with hz 10000
[INFO] [1723137814.599325017] [benchmark_main]: Successfully subscribed to topic /benchmarked_topic1 with hz 10000! When received msg number is bigger than 100, benchmark will be finished!
malloc(): unsorted double linked list corrupted

Thread 10 "rx-0" received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffe77fb640 (LWP 57183)]
__pthread_kill_implementation (no_tid=0, signo=6, threadid=140737077294656) at ./nptl/pthread_kill.c:44
44      ./nptl/pthread_kill.c: No such file or directory.
(gdb) backtrace 
#0  __pthread_kill_implementation (no_tid=0, signo=6, threadid=140737077294656)
    at ./nptl/pthread_kill.c:44
#1  __pthread_kill_internal (signo=6, threadid=140737077294656)
    at ./nptl/pthread_kill.c:78
#2  __GI___pthread_kill (threadid=140737077294656, signo=signo@entry=6)
    at ./nptl/pthread_kill.c:89
#3  0x00007ffff7442476 in __GI_raise (sig=sig@entry=6)
    at ../sysdeps/posix/raise.c:26
#4  0x00007ffff74287f3 in __GI_abort () at ./stdlib/abort.c:79
#5  0x00007ffff7489676 in __libc_message (action=action@entry=do_abort, 
    fmt=fmt@entry=0x7ffff75dbb77 "%s\n") at ../sysdeps/posix/libc_fatal.c:155
#6  0x00007ffff74a0cfc in malloc_printerr (
    str=str@entry=0x7ffff75dec48 "malloc(): unsorted double linked list corrupted") at ./malloc/malloc.c:5664
#7  0x00007ffff74a42dc in _int_malloc (av=av@entry=0x7fffc8000030, 
    bytes=bytes@entry=17) at ./malloc/malloc.c:4010
#8  0x00007ffff74a5139 in __GI___libc_malloc (bytes=17)
    at ./malloc/malloc.c:3329
#9  0x00007ffff78ae98c in operator new(unsigned long) ()
   from /lib/x86_64-linux-gnu/libstdc++.so.6
#10 0x000055555572e410 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_construct<char const*> (this=0x7fffe77eedf0, 
    __beg=0x7ffff63701be "source_timestamp", __end=0x7ffff63701ce "")
    at /usr/include/c++/11/bits/basic_string.tcc:219
#11 0x00007ffff6307a99 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::basic_string<std::allocator<char> > (
    this=0x7fffe77eedf0, __s=0x7ffff63701be "source_timestamp", __a=...)
    at /usr/include/c++/11/bits/basic_string.h:539
#12 0x00007ffff6339735 in rmw_zenoh_cpp::sub_data_handler (
    sample=0x7fffe77ef190, data=0x55555592ab30)
    at /home/cihat/ws_moveit/src/rmw_zenoh/rmw_zenoh_cpp/src/detail/rmw_data_types.cpp:474
#13 0x00007ffff488f700 in zenohcd::closures::sample_closure::z_closure_sample_call (closure=0x55555592ba40, sample=0x7fffe77ef190)
    at /home/cihat/ws_moveit/build/zenoh_c_vendor/zenoh_c_vendor-prefix/src/zenoh_c_vendor/src/closures/sample_closure.rs:52
#14 0x00007ffff48bb212 in zenohcd::subscriber::z_declare_subscriber::{closure#0} (sample=...)
    at /home/cihat/ws_moveit/build/zenoh_c_vendor/zenoh_c_vendor-prefix/src/zenoh_c_vendor/src/subscriber.rs:189
#15 0x00007ffff4bba88f in zenoh::session::Session::handle_data (
    self=0x555555843f00, local=false, key_expr=0x7fffe77f2f38, info=..., 
    payload=..., attachment=...) at src/session.rs:1712
#16 0x00007ffff4bc679c in zenoh::session::{impl#13}::send_push (
    self=0x555555843f00, msg=...) at src/session.rs:2240
#17 0x00007ffff4bcf80b in zenoh::session::{impl#16}::send_push (
    self=0x555555843f00, 
    msg=<error reading variable: Cannot access memory at address 0xe047>)
    at src/session.rs:2706
#18 0x00007ffff4a95d36 in zenoh::net::routing::dispatcher::pubsub::full_reentrant_route_data (tables_ref=0x7fffec13e920, face=0x7fffec0111a0, 
    expr=0x7fffe77f4e08, ext_qos=..., ext_tstamp=..., payload=..., 
    routing_context=0) at src/net/routing/dispatcher/pubsub.rs:494
#19 0x00007ffff4a16048 in zenoh::net::routing::dispatcher::face::{impl#4}::send_push (self=0x7fffec13e920, msg=...) at src/net/routing/dispatcher/face.rs:269
#20 0x00007ffff4c64bd0 in zenoh::net::primitives::demux::{impl#1}::handle_message (self=0x7fffec13e920, msg=...) at src/net/primitives/demux.rs:68
#21 0x00007ffff4b2b48e in zenoh::net::runtime::{impl#6}::handle_message (
    self=0x7fffec13a5a0, msg=...) at src/net/runtime/mod.rs:385
#22 0x00007ffff4de85a9 in zenoh_transport::unicast::universal::transport::TransportUnicastUniversal::trigger_callback (self=0x7fffec13f040, callback=..., 
    msg=...) at src/unicast/universal/rx.rs:49
#23 0x00007ffff4de98be in zenoh_transport::unicast::universal::transport::TransportUnicastUniversal::handle_frame (self=0x7fffec13f040, frame=...)
    at src/unicast/universal/rx.rs:100
#24 0x00007ffff4dec75c in zenoh_transport::unicast::universal::transport::TransportUnicastUniversal::read_messages (self=0x7fffec13f040, batch=..., 
    link=0x7fffec13f108) at src/unicast/universal/rx.rs:204
#25 0x00007ffff4e7c32b in zenoh_transport::unicast::universal::link::rx_task::{async_fn#0} () at src/unicast/universal/link.rs:266
#26 0x00007ffff4e78bcb in zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block#0} () at src/unicast/universal/link.rs:124
#27 0x00007ffff4df78e4 in tokio_util::task::task_tracker::{impl#8}::poll<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}> (
    self=..., cx=0x7fffe77f86e0)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-util-0.7.10/src/task/task_tracker.rs:669
#28 0x00007ffff4e47c5f in tokio::runtime::task::core::{impl#6}::poll::{closure#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (
    ptr=0x7fffec13eeb0)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/core.rs:328
#29 0x00007ffff4e443bf in tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>>>::with_mut<tokio::runtime::task::core::Stage<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>>, core::task::poll::Poll<()>, tokio::runtime::task::core::{impl#6}::poll::{closure_env#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> (self=0x7fffec13eeb0, f=...)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/loom/std/unsafe_cell.rs:16
#30 tokio::runtime::task::core::Core<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::poll<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=0x7fffec13eea0, cx=...)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/core.rs:317
#31 0x00007ffff4d796e5 in tokio::runtime::task::harness::poll_future::{closure#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> ()
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/harness.rs:485
#32 0x00007ffff4e30b64 in core::panic::unwind_safe::{impl#23}::call_once<core::task::poll::Poll<()>, tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>> (
    self=...)
    self=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/core/src/panic/unwind_safe.rs:272
#33 0x00007ffff4ed3896 in std::panicking::try::do_call<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>, core::task::poll::Poll<()>> (data=0x7fffe77f8858)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:552
#34 0x00007ffff4edc45b in __rust_try ()
   from /home/cihat/ws_moveit/install/zenoh_c_vendor/opt/zenoh_c_vendor/lib/libzenohcd.so
#35 0x00007ffff4ed1808 in std::panicking::try<core::task::poll::Poll<()>, core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>> (f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:516
#36 0x00007ffff4e1ad4b in std::panic::catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>>, core::task::poll::Poll<()>> (f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panic.rs:142
#37 0x00007ffff4d7231f in tokio::runtime::task::harness::poll_future<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (core=0x7fffec13eea0, 
    cx=...)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/harness.rs:473
#38 0x00007ffff4d7baf9 in tokio::runtime::task::harness::Harness<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::poll_inner<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=0x7fffe77f8a70)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/harness.rs:208
#39 0x00007ffff4d83667 in tokio::runtime::task::harness::Harness<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::poll<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=...)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/harness.rs:153
#40 0x00007ffff4e2222d in tokio::runtime::task::raw::poll<tokio_util::task::task_tracker::TrackedFuture<zenoh_transport::unicast::universal::link::{impl#0}::start_rx::{async_block_env#0}>, alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (ptr=...)
    at /home/cihat/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.36.0/src/runtime/task/raw.rs:271
#41 0x00007ffff5ac1107 in tokio::runtime::task::raw::RawTask::poll (self=...)
    at src/runtime/task/raw.rs:201
#42 0x00007ffff5b2e5f2 in tokio::runtime::task::LocalNotified<alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>>::run<alloc::sync::Arc<tokio::runtime::scheduler::multi_thread::handle::Handle, alloc::alloc::Global>> (self=...) at src/runtime/task/mod.rs:416
#43 0x00007ffff5b12c8d in tokio::runtime::scheduler::multi_thread::worker::{impl#1}::run_task::{closure#0} ()
    at src/runtime/scheduler/multi_thread/worker.rs:576
#44 0x00007ffff5b12ad4 in tokio::runtime::coop::with_budget<core::result::Result<alloc::boxed::Box<tokio::runtime::scheduler::multi_thread::worker::Core, alloc::alloc::Global>, ()>, tokio::runtime::scheduler::multi_thread::worker::{impl#1}::run_task::{closure_env#0}> (budget=..., f=...) at src/runtime/coop.rs:107
#45 tokio::runtime::coop::budget<core::result::Result<alloc::boxed::Box<tokio::runtime::scheduler::multi_thread::worker::Core, alloc::alloc::Global>, ()>, tokio::runtime::scheduler::multi_thread::worker::{impl#1}::run_task::{closure_env#0}> (f=...) at src/runtime/coop.rs:73
#46 tokio::runtime::scheduler::multi_thread::worker::Context::run_task (
    self=0x7fffe77f90f8, task=..., core=0x7fffec010d80)
    at src/runtime/scheduler/multi_thread/worker.rs:575
#47 0x00007ffff5b12175 in tokio::runtime::scheduler::multi_thread::worker::Context::run (self=0x7fffe77f90f8, core=0x7fffec010d80)
    at src/runtime/scheduler/multi_thread/worker.rs:526
#48 0x00007ffff5b11db9 in tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure#0} () at src/runtime/scheduler/multi_thread/worker.rs:491
#49 0x00007ffff5afe390 in tokio::runtime::context::scoped::Scoped<tokio::runtime::scheduler::Context>::set<tokio::runtime::scheduler::Context, tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}, ()> (
    self=0x7fffc8000ca8, t=0x7fffe77f90f0, f=...)
#50 0x00007ffff5b077cb in tokio::runtime::context::set_scheduler::{closure#0}<(), tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}> (c=0x7fffc8000c70) at src/runtime/context.rs:176
#51 0x00007ffff5b0abc2 in std::thread::local::LocalKey<tokio::runtime::context::Context>::try_with<tokio::runtime::context::Context, tokio::runtime::context::set_scheduler::{closure_env#0}<(), tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}>, ()> (self=0x7ffff6166cc0, f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/thread/local.rs:270
#52 0x00007ffff5b0891b in std::thread::local::LocalKey<tokio::runtime::context::Context>::with<tokio::runtime::context::Context, tokio::runtime::context::set_scheduler::{closure_env#0}<(), tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}>, ()> (self=0x7ffff6166cc0, 
    f=<error reading variable: Cannot access memory at address 0xdf5f>)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/thread/local.rs:246
#53 0x00007ffff5b07704 in tokio::runtime::context::set_scheduler<(), tokio::runtime::scheduler::multi_thread::worker::run::{closure#0}::{closure_env#0}> (
    v=0x7fffe77f90f0, f=...) at src/runtime/context.rs:176
#54 0x00007ffff5b11cc1 in tokio::runtime::scheduler::multi_thread::worker::run::{closure#0} () at src/runtime/scheduler/multi_thread/worker.rs:486
#55 0x00007ffff5abbd58 in tokio::runtime::context::runtime::enter_runtime<tokio::runtime::scheduler::multi_thread::worker::run::{closure_env#0}, ()> (
    handle=0x7fffe77f92f8, allow_block_in_place=true, f=...)
    at src/runtime/context/runtime.rs:65
#56 0x00007ffff5b11a4c in tokio::runtime::scheduler::multi_thread::worker::run
    (worker=...) at src/runtime/scheduler/multi_thread/worker.rs:478
#57 0x00007ffff5b118bb in tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure#0} () at src/runtime/scheduler/multi_thread/worker.rs:447
#58 0x00007ffff5acc97e in tokio::runtime::blocking::task::{impl#2}::poll<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}, ()> (self=..., _cx=0x7fffe77f9480) at src/runtime/blocking/task.rs:42
#59 0x00007ffff5ac9f0c in tokio::runtime::task::core::{impl#6}::poll::{closure#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (ptr=0x7fffec1367a8) at src/runtime/task/core.rs:328
#60 0x00007ffff5ac9dbf in tokio::loom::std::unsafe_cell::UnsafeCell<tokio::runtime::task::core::Stage<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>>>::with_mut<tokio::runtime::task::core::Stage<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>>, core::task::poll::Poll<()>, tokio::runtime::task::core::{impl#6}::poll::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>> (self=0x7fffec1367a8, f=...)
    at src/loom/std/unsafe_cell.rs:16
#61 tokio::runtime::task::core::Core<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>::poll<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (self=0x7fffec1367a0, cx=...) at src/runtime/task/core.rs:317
#62 0x00007ffff5ab9895 in tokio::runtime::task::harness::poll_future::{closure#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> () at src/runtime/task/harness.rs:485
#63 0x00007ffff5affc34 in core::panic::unwind_safe::{impl#23}::call_once<core::task::poll::Poll<()>, tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>> (self=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/core/src/panic/unwind_safe.rs:272
#64 0x00007ffff5b37a46 in std::panicking::try::do_call<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>>, core::task::poll::Poll<()>> (data=0x7fffe77f95f8)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:552
#65 0x00007ffff5b38d3b in __rust_try ()
   from /home/cihat/ws_moveit/install/zenoh_c_vendor/opt/zenoh_c_vendor/lib/libzenohcd.so
#66 0x00007ffff5b375a8 in std::panicking::try<core::task::poll::Poll<()>, core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>>> (f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:516
#67 0x00007ffff5b16d2b in std::panic::catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<tokio::runtime::task::harness::poll_future::{closure_env#0}<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>>, core::task::poll::Poll<()>> (f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panic.rs:142
#68 0x00007ffff5ab8dcf in tokio::runtime::task::harness::poll_future<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (core=0x7fffec1367a0, cx=...) at src/runtime/task/harness.rs:473
#69 0x00007ffff5ab70e9 in tokio::runtime::task::harness::Harness<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>::poll_inner<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (self=0x7fffe77f9810)
    at src/runtime/task/harness.rs:208
#70 0x00007ffff5ab6bd7 in tokio::runtime::task::harness::Harness<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule>::poll<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (self=...)
    at src/runtime/task/harness.rs:153
#71 0x00007ffff5ac138d in tokio::runtime::task::raw::poll<tokio::runtime::blocking::task::BlockingTask<tokio::runtime::scheduler::multi_thread::worker::{impl#0}::launch::{closure_env#0}>, tokio::runtime::blocking::schedule::BlockingSchedule> (ptr=...) at src/runtime/task/raw.rs:271
#72 0x00007ffff5ac1107 in tokio::runtime::task::raw::RawTask::poll (self=...)
    at src/runtime/task/raw.rs:201
#73 0x00007ffff5b2e6b7 in tokio::runtime::task::UnownedTask<tokio::runtime::blocking::schedule::BlockingSchedule>::run<tokio::runtime::blocking::schedule::BlockingSchedule> (self=...) at src/runtime/task/mod.rs:453
#74 0x00007ffff5ad2bf7 in tokio::runtime::blocking::pool::Task::run (self=...)
    at src/runtime/blocking/pool.rs:159
#75 0x00007ffff5ad5d5b in tokio::runtime::blocking::pool::Inner::run (
    self=0x7fffec007020, worker_thread_id=0)
    at src/runtime/blocking/pool.rs:513
#76 0x00007ffff5ad5a74 in tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure#0} () at src/runtime/blocking/pool.rs:471
#77 0x00007ffff5acd066 in std::sys_common::backtrace::__rust_begin_short_backtrace<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>
    (f=<error reading variable: Cannot access memory at address 0xdf47>)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/sys_common/backtrace.rs:154
#78 0x00007ffff5ad8182 in std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure#0}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()> ()
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/thread/mod.rs:529
#79 0x00007ffff5affad2 in core::panic::unwind_safe::{impl#23}::call_once<(), std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>> (self=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/core/src/panic/unwind_safe.rs:272
#80 0x00007ffff5b37e73 in std::panicking::try::do_call<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>>, ()> (data=0x7fffe77f9ca0)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:552
#81 0x00007ffff5b38d3b in __rust_try ()
   from /home/cihat/ws_moveit/install/zenoh_c_vendor/opt/zenoh_c_vendor/lib/libzenohcd.so
#82 0x00007ffff5b36c91 in std::panicking::try<(), core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>>> (f=...)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panicking.rs:516
#83 0x00007ffff5ad7f8f in std::panic::catch_unwind<core::panic::unwind_safe::AssertUnwindSafe<std::thread::{impl#0}::spawn_unchecked_::{closure#1}::{closure_env#0}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>>, ()> (f=<error reading variable: Cannot access memory at address 0x0>)
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/panic.rs:142
#84 std::thread::{impl#0}::spawn_unchecked_::{closure#1}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()> ()
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/std/src/thread/mod.rs:528
#85 0x00007ffff5b16f8f in core::ops::function::FnOnce::call_once<std::thread::{impl#0}::spawn_unchecked_::{closure_env#1}<tokio::runtime::blocking::pool::{impl#6}::spawn_thread::{closure_env#0}, ()>, ()> ()
    at /build/rustc-kAv1jW/rustc-1.75.0+dfsg0ubuntu1~bpo0/library/core/src/ops/function.rs:250
#86 0x00007ffff5b66cc5 in alloc::boxed::{impl#47}::call_once<(), dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global> (self=..., 
    args=<optimized out>) at library/alloc/src/boxed.rs:2007
#87 alloc::boxed::{impl#47}::call_once<(), alloc::boxed::Box<dyn core::ops::function::FnOnce<(), Output=()>, alloc::alloc::Global>, alloc::alloc::Global> (
    self=0x7fffec0117e0, args=<optimized out>)
    at library/alloc/src/boxed.rs:2007
#88 std::sys::unix::thread::{impl#2}::new::thread_start (main=0x7fffec0117e0)
    at library/std/src/sys/unix/thread.rs:108
#89 0x00007ffff7494ac3 in start_thread (arg=<optimized out>)
    at ./nptl/pthread_create.c:442
#90 0x00007ffff7526850 in clone3 ()
    at ../sysdeps/unix/sysv/linux/x86_64/clone3.S:81
MichaelOrlov commented 3 weeks ago

Discussion from maintenance triage: Decided to put in the backlog. cc: @clalancette