oneapi-src / oneTBB

oneAPI Threading Building Blocks (oneTBB)
https://oneapi-src.github.io/oneTBB/
Apache License 2.0

Debug assertion fails during static linking of onetbb 2021.6.0, but does not on 2021.3.0 #920

Closed cguentherTUChemnitz closed 1 month ago

cguentherTUChemnitz commented 1 year ago

Hi there,

I am trying to get a oneTBB Conan package update through. The CI testing system checks that different configurations and environments work. As far as I can tell, the problem is not restricted to specific compiler versions, and I was able to reproduce this assertion failure on different GCC and Clang versions. https://github.com/conan-io/conan-center-index/pull/13116

As far as I can see, the problematic behavior is triggered by Debug mode (assuming that Release just disables the assertions and potentially hides the problem) in combination with the shared=False option, i.e. static linkage.

Maybe related, since that one is also only triggered in the shared=False and build_type=Debug situation: https://github.com/oneapi-src/oneTBB/issues/297

The specific output gathered from the Conan CI is: https://github.com/conan-io/conan-center-index/pull/13116#issuecomment-1259306027 The interesting part is:

Assertion node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val) failed (located in the push_front function, line in file: 135)
Detailed description: Object with intrusive list node can be part of only one intrusive list simultaneously

Can someone enlighten me on how to cope with such static-linkage Debug assertion problems? Is this indicating a bigger issue? Does the old version only work because it lacks these assert statements, or has something genuinely problematic been introduced in the meantime?

Steps to reproduce:

git clone --branch onetbb/2021.6.0 https://github.com/cguentherTUChemnitz/conan-center-index.git 
cd conan-center-index/recipes/onetbb/all
docker run --rm -it -v$(pwd):/home/conan/project conanio/gcc11 /bin/bash -c "cd project && conan create . onetbb/2021.6.0@ -pr:b=default -pr:h=default -o onetbb:shared=False -s build_type=Debug"

Or, when Conan is present locally, you can leave out the Docker wrapper (run from the onetbb/all working directory):

conan create . onetbb/2021.6.0@ -pr:b=default -pr:h=default -o onetbb:shared=False -s build_type=Debug
kambala-decapitator commented 1 year ago

Happens when building without Conan as well. Info from lldb:

Assertion node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val) failed (located in the push_front function, line in file: 135)
Detailed description: Object with intrusive list node can be part of only one intrusive list simultaneously
Process 3552 stopped
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
    frame #0: 0x00007ff804a3500e libsystem_kernel.dylib`__pthread_kill + 10
libsystem_kernel.dylib`__pthread_kill:
->  0x7ff804a3500e <+10>: jae    0x7ff804a35018            ; <+20>
    0x7ff804a35010 <+12>: movq   %rax, %rdi
    0x7ff804a35013 <+15>: jmp    0x7ff804a2f1c5            ; cerror_nocancel
    0x7ff804a35018 <+20>: retq   
Target 0: (test_package) stopped.
(lldb) bt
* thread #1, queue = 'com.apple.main-thread', stop reason = signal SIGABRT
  * frame #0: 0x00007ff804a3500e libsystem_kernel.dylib`__pthread_kill + 10
    frame #1: 0x00007ff804a6b1ff libsystem_pthread.dylib`pthread_kill + 263
    frame #2: 0x00007ff8049b6d24 libsystem_c.dylib`abort + 123
    frame #3: 0x0000000100034156 test_package`tbb::detail::r1::assertion_failure_impl(location="push_front", line=135, expression="node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val)", comment="Object with intrusive list node can be part of only one intrusive list simultaneously") at assert_impl.h:56:9
    frame #4: 0x000000010003409f test_package`tbb::detail::r1::assertion_failure(this=0x00007ff7bfeff380)::$_0::operator()() const at assert_impl.h:73:27
    frame #5: 0x00000001000339f9 test_package`void tbb::detail::d0::run_initializer<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::$_0>(f=0x00007ff7bfeff380, state=0x0000000100084d38)::$_0 const&, std::__1::atomic<tbb::detail::d0::do_once_state>&) at _utils.h:288:5
    frame #6: 0x0000000100033563 test_package`void tbb::detail::d0::atomic_do_once<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::$_0>(initializer=0x00007ff7bfeff380, state=0x0000000100084d38)::$_0 const&, std::__1::atomic<tbb::detail::d0::do_once_state>&) at _utils.h:277:17
    frame #7: 0x00000001000334e7 test_package`tbb::detail::r1::assertion_failure(location="push_front", line=135, expression="node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val)", comment="Object with intrusive list node can be part of only one intrusive list simultaneously") at assert_impl.h:73:5
    frame #8: 0x000000010002c3cf test_package`tbb::detail::r1::intrusive_list_base<tbb::detail::r1::intrusive_list<tbb::detail::r1::arena>, tbb::detail::r1::arena>::push_front(this=0x0000000100808888, val=0x0000000100811500) at intrusive_list.h:134:9
    frame #9: 0x000000010002c2cc test_package`tbb::detail::r1::market::insert_arena_into_list(this=0x0000000100808800, a=0x0000000100811500) at market.cpp:47:36
    frame #10: 0x000000010002ddf1 test_package`tbb::detail::r1::market::create_arena(num_slots=6, num_reserved_slots=1, arena_priority_level=1, stack_size=0) at market.cpp:298:7
    frame #11: 0x000000010001d6cd test_package`tbb::detail::r1::governor::init_external_thread() at governor.cpp:188:17
    frame #12: 0x000000010003d295 test_package`tbb::detail::r1::governor::get_thread_data() at governor.h:103:9
    frame #13: 0x000000010003d235 test_package`tbb::detail::r1::allocate(allocator=0x00007ff7bfeff5e8, number_of_bytes=128) at small_object_pool.cpp:41:16
    frame #14: 0x000000010000451a test_package`tbb::detail::d1::start_reduce<tbb::detail::d1::blocked_range<int>, tbb::detail::d1::lambda_reduce_body<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>, tbb::detail::d1::auto_partitioner const>* tbb::detail::d1::small_object_allocator::new_object<tbb::detail::d1::start_reduce<tbb::detail::d1::blocked_range<int>, tbb::detail::d1::lambda_reduce_body<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>, tbb::detail::d1::auto_partitioner const>, tbb::detail::d1::blocked_range<int> const&, tbb::detail::d1::lambda_reduce_body<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>&, tbb::detail::d1::auto_partitioner const&, tbb::detail::d1::small_object_allocator&>(this=0x00007ff7bfeff5e8, args=0x00007ff7bfeff788, args=0x00007ff7bfeff720, args=0x00007ff7bfeff718, args=0x00007ff7bfeff5e8) at _small_object_pool.h:61:34
    frame #15: 0x00000001000042af test_package`tbb::detail::d1::start_reduce<tbb::detail::d1::blocked_range<int>, tbb::detail::d1::lambda_reduce_body<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>, tbb::detail::d1::auto_partitioner const>::run(range=0x00007ff7bfeff788, body=0x00007ff7bfeff720, partitioner=0x00007ff7bfeff718, context=0x00007ff7bfeff678) at parallel_reduce.h:137:38
    frame #16: 0x0000000100004127 test_package`tbb::detail::d1::start_reduce<tbb::detail::d1::blocked_range<int>, tbb::detail::d1::lambda_reduce_body<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>, tbb::detail::d1::auto_partitioner const>::run(range=0x00007ff7bfeff788, body=0x00007ff7bfeff720, partitioner=0x00007ff7bfeff718) at parallel_reduce.h:146:9
    frame #17: 0x0000000100003fdf test_package`int tbb::detail::d1::parallel_reduce<tbb::detail::d1::blocked_range<int>, int, main::$_0, main::$_1>(range=0x00007ff7bfeff788, identity=0x00007ff7bfeff784, real_body=0x00007ff7bfeff780, reduction=0x00007ff7bfeff778) at parallel_reduce.h:507:5
    frame #18: 0x0000000100003f63 test_package`main at test_package.cpp:8:15
    frame #19: 0x000000010016152e dyld`start + 462
cguentherTUChemnitz commented 1 year ago

@kambala-decapitator how were you able to trigger it? When I compile oneTBB from source in Debug and static mode, I can run ctest with all tests passing and no error. So I am very curious why this happens in seemingly every setup of the Conan tests instead. At the moment I am not able to trigger the problem with a direct oneTBB compilation without Conan. Can you provide steps to reproduce?

kambala-decapitator commented 1 year ago

I compiled TBB manually and then used it when building Conan's example with a raw CMake invocation.

By the way, the above stack trace is from the official Getting Started example.

cguentherTUChemnitz commented 1 year ago

OK, in my eyes the interesting part is then why this failure is not reproduced by just running the CMake build and ctest in a Debug and static setup. So either we have discovered a problem that should also be covered by the oneTBB test setup, or our CMake test_package project still does something unintended, with or without Conan.

kambala-decapitator commented 1 year ago

here's the Getting Started example as CMake project for testing: https://github.com/kambala-decapitator/onetbb-assert-debug-static

dirkvdb commented 1 year ago

Same issue here using a static build. It immediately asserts on startup when registering a task observer:

libc.so.6!__GI_raise(int sig) (raise.c:50)
libc.so.6!__GI_abort() (abort.c:79)
tbb::detail::r1::assertion_failure_impl(const char * location, int line, const char * expression, const char * comment) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\assert_impl.h:56)
operator()(const struct {...} * const __closure) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\assert_impl.h:73)
tbb::detail::d0::run_initializer<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::<lambda()> >(const struct {...} &, std::atomic<tbb::detail::d0::do_once_state> &)(const struct {...} & f, std::atomic<tbb::detail::d0::do_once_state> & state) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\include\oneapi\tbb\detail\_utils.h:288)
tbb::detail::d0::atomic_do_once<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::<lambda()> >(const struct {...} &, std::atomic<tbb::detail::d0::do_once_state> &)(const struct {...} & initializer, std::atomic<tbb::detail::d0::do_once_state> & state) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\include\oneapi\tbb\detail\_utils.h:277)
tbb::detail::r1::assertion_failure(const char * location, int line, const char * expression, const char * comment) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\assert_impl.h:73)
tbb::detail::r1::intrusive_list_base<tbb::detail::r1::intrusive_list<tbb::detail::r1::arena>, tbb::detail::r1::arena>::push_front(tbb::detail::r1::intrusive_list_base<tbb::detail::r1::intrusive_list<tbb::detail::r1::arena>, tbb::detail::r1::arena> * const this, tbb::detail::r1::arena & val) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\intrusive_list.h:134)
tbb::detail::r1::market::insert_arena_into_list(tbb::detail::r1::market * const this, tbb::detail::r1::arena & a) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\market.cpp:47)
tbb::detail::r1::market::create_arena(int num_slots, int num_reserved_slots, unsigned int arena_priority_level, std::size_t stack_size) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\market.cpp:303)
tbb::detail::r1::governor::init_external_thread() (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\governor.cpp:188)
tbb::detail::r1::governor::get_thread_data() (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\governor.h:103)
tbb::detail::r1::observe(tbb::detail::d1::task_scheduler_observer & tso, bool enable) (\work\weiss\deps\vcpkg\buildtrees\tbb\src\v2021.7.0-3b3fca2d98.clean\src\tbb\observer_proxy.cpp:271)
tbb::detail::d1::task_scheduler_observer::observe(tbb::detail::d1::task_scheduler_observer * const this, bool state) (\work\weiss\vcpkgs\x64-linux\include\oneapi\tbb\task_scheduler_observer.h:103)
weiss::TaskObserver::TaskObserver(weiss::TaskObserver * const this) (\work\weiss\weisscore\jobrunner.cpp:15)
main(int argc, char ** argv) (\work\weiss\weisscore\test\gtestmain.cpp:67)
IlgarLunin commented 1 year ago

Same issue

pavelkumbrasev commented 1 year ago

Hi @cguentherTUChemnitz, can you observe the same problem with the latest release or a manual build of the master branch? In general we don't recommend using the static version of the library because it might lead to such unexpected failures.

dirkvdb commented 1 year ago

This still occurs in release v2021.9.0

pavelkumbrasev commented 1 year ago

@dirkvdb Could you please check it with current master?

dirkvdb commented 1 year ago

> @dirkvdb Could you please check it with current master?

Current master also still has the issue

pavelkumbrasev commented 1 year ago

Does it report the same stack? (It should be different.)

dirkvdb commented 1 year ago

The stack is indeed different: it no longer asserts immediately at startup, but when the parallel_for_each gets executed.

#4  0x0000555555d7270a in tbb::detail::r1::assertion_failure_impl (location=0x555557e4d96d "push_front", line=134,
    expression=0x555557e4d920 "node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val)",
    comment=0x555557e4d8c8 "Object with intrusive list node can be part of only one intrusive list simultaneously") at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/assert_impl.h:56
#5  0x0000555555d72748 in operator() (__closure=0x7fffffffc420) at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/assert_impl.h:73
#6  0x0000555555d72c23 in tbb::detail::d0::run_initializer<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::<lambda()> >(const struct {...} &, std::atomic<tbb::detail::d0::do_once_state> &) (f=..., state=std::atomic<tbb::detail::d0::do_once_state> = { tbb::detail::d0::do_once_state::pending })
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/../../include/oneapi/tbb/detail/_utils.h:295
#7  0x0000555555d72bb9 in tbb::detail::d0::atomic_do_once<tbb::detail::r1::assertion_failure(char const*, int, char const*, char const*)::<lambda()> >(const struct {...} &, std::atomic<tbb::detail::d0::do_once_state> &) (initializer=..., state=std::atomic<tbb::detail::d0::do_once_state> = { tbb::detail::d0::do_once_state::pending })
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/../../include/oneapi/tbb/detail/_utils.h:284
#8  0x0000555555d727a7 in tbb::detail::r1::assertion_failure (location=0x555557e4d96d "push_front", line=134,
    expression=0x555557e4d920 "node(val).my_prev_node == &node(val) && node(val).my_next_node == &node(val)",
    comment=0x555557e4d8c8 "Object with intrusive list node can be part of only one intrusive list simultaneously") at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/assert_impl.h:73
#9  0x0000555555d8550a in tbb::detail::r1::intrusive_list_base<tbb::detail::r1::intrusive_list<tbb::detail::r1::thread_dispatcher_client>, tbb::detail::r1::thread_dispatcher_client>::push_front (
    this=0x55555b1efea8, val=...) at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/intrusive_list.h:134
#10 0x0000555555d83e19 in tbb::detail::r1::thread_dispatcher::insert_client (this=0x55555b1efe80, client=...)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/thread_dispatcher.cpp:101
#11 0x0000555555d83b4d in tbb::detail::r1::thread_dispatcher::register_client (this=0x55555b1efe80, client=0x55555b1fa000)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/thread_dispatcher.cpp:59
#12 0x0000555555d8633d in tbb::detail::r1::threading_control_impl::publish_client (this=0x55555b1efc80, tc_client=...)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/threading_control.cpp:130
#13 0x0000555555d87037 in tbb::detail::r1::threading_control::publish_client (this=0x55555b1efb80, client=...)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/threading_control.cpp:310
#14 0x0000555555d535ec in tbb::detail::r1::arena::create (control=0x55555b1efb80, num_slots=24, num_reserved_slots=1, arena_priority_level=1)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/arena.cpp:447
#15 0x0000555555d65970 in tbb::detail::r1::governor::init_external_thread () at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/governor.cpp:195
#16 0x0000555555d557a3 in tbb::detail::r1::governor::get_thread_data () at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/governor.h:104
#17 0x0000555555d7c1af in tbb::detail::r1::task_dispatcher::execute_and_wait (t=0x7fffffffc780, wait_ctx=..., w_ctx=...)
    at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/task_dispatcher.cpp:155
#18 0x0000555555d7c09a in tbb::detail::r1::execute_and_wait (t=warning: RTTI symbol not found for class 'tbb::detail::d2::for_each_root_task<__gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, weiss::ComputeModel::runComputation(std::set<weiss::SubstanceId, std::less<weiss::SubstanceId>, std::allocator<weiss::SubstanceId> > const&, std::vector<std::shared_ptr<weiss::WEISSNode const>, std::allocator<std::shared_ptr<weiss::WEISSNode const> > > const&, std::optional<int>)::{lambda(weiss::EevId)#2}, weiss::EevId, std::random_access_iterator_tag>'
..., t_ctx=..., wait_ctx=..., w_ctx=...) at /work/weiss/deps/vcpkg/buildtrees/tbb/src/master-2a62fe474b.clean/src/tbb/task_dispatcher.cpp:121
#19 0x0000555555873cae in tbb::detail::d1::execute_and_wait (t=warning: RTTI symbol not found for class 'tbb::detail::d2::for_each_root_task<__gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, weiss::ComputeModel::runComputation(std::set<weiss::SubstanceId, std::less<weiss::SubstanceId>, std::allocator<weiss::SubstanceId> > const&, std::vector<std::shared_ptr<weiss::WEISSNode const>, std::allocator<std::shared_ptr<weiss::WEISSNode const> > > const&, std::optional<int>)::{lambda(weiss::EevId)#2}, weiss::EevId, std::random_access_iterator_tag>'
..., t_ctx=..., wait_ctx=..., w_ctx=...) at /work/weiss/vcpkgs/x64-linux/include/oneapi/tbb/detail/_task.h:191
#20 0x0000555555935445 in tbb::detail::d2::run_parallel_for_each<__gnu_cxx::__normal_iterator<const weiss::EevId*, std::vector<weiss::EevId> >, weiss::ComputeModel::runComputation(const std::set<weiss::SubstanceId>&, const std::vector<std::shared_ptr<const weiss::WEISSNode> >&, std::optional<int>)::<lambda(weiss::EevId)> >(__gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, __gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, const struct {...} &, tbb::detail::d1::task_group_context &) (first=..., last=...,
    body=..., context=...) at /work/weiss/vcpkgs/x64-linux/include/oneapi/tbb/parallel_for_each.h:602
#21 0x0000555555935092 in tbb::detail::d2::parallel_for_each<__gnu_cxx::__normal_iterator<const weiss::EevId*, std::vector<weiss::EevId> >, weiss::ComputeModel::runComputation(const std::set<weiss::SubstanceId>&, const std::vector<std::shared_ptr<const weiss::WEISSNode> >&, std::optional<int>)::<lambda(weiss::EevId)> >(__gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, __gnu_cxx::__normal_iterator<weiss::EevId const*, std::vector<weiss::EevId, std::allocator<weiss::EevId> > >, const struct {...} &) (first=..., last=..., body=...)
    at /work/weiss/vcpkgs/x64-linux/include/oneapi/tbb/parallel_for_each.h:634
#22 0x0000555555933753 in weiss::ComputeModel::runComputation (this=0x55555b14a030, substances=std::set with 1 element = {...}, sources=std::vector of length 1, capacity 1 = {...},
    concurrency=std::optional<int> [no contained value]) at /work/weiss/weisscore/computemodel.cpp:153
#23 0x00005555559327d2 in weiss::ComputeModel::runComputation (this=0x55555b14a030, sub=..., source=std::shared_ptr<const weiss::WEISSNode> (use count 6, weak count 1) = {...},
    concurrency=std::optional<int> [no contained value]) at /work/weiss/weisscore/computemodel.cpp:92
#24 0x00005555559326f6 in weiss::ComputeModel::runComputation (this=0x55555b14a030, sub=..., source=std::shared_ptr<const weiss::WEISSNode> (use count 6, weak count 1) = {...})
    at /work/weiss/weisscore/computemodel.cpp:85
#25 0x00005555557c13c2 in weiss::test::ExportEmissionsTest::ExportEmissionsTest (this=0x55555ae39eb0) at /work/weiss/weisscore/test/exportemissionstest.cpp:67
#26 0x00005555557c8dd0 in weiss::test::ExportEmissionsTest_exportEmissionTableCorrectHeaderAndStringBasedValues_Test::ExportEmissionsTest_exportEmissionTableCorrectHeaderAndStringBasedValues_Test (
    this=0x55555ae39eb0) at /work/weiss/weisscore/test/exportemissionstest.cpp:83
#27 0x00005555557c8e12 in testing::internal::TestFactoryImpl<weiss::test::ExportEmissionsTest_exportEmissionTableCorrectHeaderAndStringBasedValues_Test>::CreateTest (this=0x55555adb2490)
    at /work/weiss/vcpkgs/x64-linux/include/gtest/internal/gtest-internal.h:472
pavelkumbrasev commented 1 year ago

How is this even possible: "Object with intrusive list node can be part of only one intrusive list simultaneously"? Are there multiple places where oneTBB is used? Do you think you could write a small reproducer for this problem?

dirkvdb commented 1 year ago

One parallel for loop is started. I will try to make a simple reproducer.

dirkvdb commented 1 year ago

I had difficulty reproducing the behavior in a simple test program until I started playing with the linker options. We use CMake, and our test executables rely on transitive linking to pull in the TBB::tbb target. After adding an explicit link dependency on the TBB::tbb target for the test executable, the asserts are gone.
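A sketch of the two setups in CMake terms (target and file names are hypothetical, loosely modeled on the paths in the backtrace):

```cmake
find_package(TBB CONFIG REQUIRED)

# Library that uses oneTBB internally; PUBLIC makes the dependency transitive.
add_library(weisscore jobrunner.cpp)
target_link_libraries(weisscore PUBLIC TBB::tbb)

add_executable(weisscore_tests gtestmain.cpp)
# Relying only on the transitive link through weisscore reportedly reproduced
# the assert; adding TBB::tbb explicitly to the executable made it go away:
target_link_libraries(weisscore_tests PRIVATE weisscore TBB::tbb)
```

Why the explicit link changes behavior is not explained in this thread; with static linking it plausibly affects which object files and their global state end up in the final binary, which would fit the "object in two intrusive lists" symptom, but that is speculation.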

pavelkumbrasev commented 1 year ago

That's really strange! Do you think we can use this approach as a solution for this issue?

nofuturre commented 1 month ago

@cguentherTUChemnitz is this issue still relevant?

nofuturre commented 1 month ago

If anyone encounters this issue in the future, please open a new issue with a link to this one.