Experiencing problems? Have you tried our Stack Exchange first?
[X] This is not a support question.
Description of bug
We underwent a DDoS attack today and the polkadot node crashed with the error below.
We are reporting it as a bug, although external factors triggered it.
The only log lines preceding the crash were:
Protocol controllers receiver stream has returned None. Ignore this error if the node is shutting down. [31/1198]
Protocol command streams have been shut down
The database is not corrupted and the hardware works fine.
Steps to reproduce
Unfortunately we cannot reproduce this easily, as it took a DDoS attack on our network to trigger the error.
```
2024-02-24 15:04:20 Protocol controllers receiver stream has returned None. Ignore this error if the node is shutting down. [31/1198]
2024-02-24 15:04:20 Protocol command streams have been shut down
2024-02-24 15:04:20 Protocol controllers receiver stream has returned None. Ignore this error if the node is shutting down.
2024-02-24 15:04:20 Protocol command streams have been shut down
====================
Version: 1.7.1-70e569d5112
0: sp_panic_handler::set::{{closure}}
1: <alloc::boxed::Box<F,A> as core::ops::function::Fn>::call
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/alloc/src/boxed.rs:2029:9
std::panicking::rust_panic_with_hook
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:783:13
2: std::panicking::begin_panic_handler::{{closure}}
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:657:13
3: std::sys_common::backtrace::__rust_end_short_backtrace
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys_common/backtrace.rs:171:18
4: rust_begin_unwind
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/panicking.rs:645:5
5: core::panicking::panic_fmt
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/core/src/panicking.rs:72:14
6: tokio::runtime::time::entry::TimerEntry::poll_elapsed::panic_cold_display
7: ::poll
8: tokio::time::interval::Interval::poll_tick
9: libp2p_mdns::behaviour::timer::tokio::<impl futures_core::stream::Stream for libp2p_mdns::behaviour::timer::Timer<tokio::time::interval::Interval>>::poll_next
10: libp2p_mdns::behaviour::iface::InterfaceState<U,T>::poll
11: <libp2p_mdns::behaviour::Behaviour as libp2p_swarm::behaviour::NetworkBehaviour>::poll
12: ::poll
13: <sc_network::behaviour::Behaviour as libp2p_swarm::behaviour::NetworkBehaviour>::poll
14: libp2p_swarm::Swarm::poll_next_event
15: sc_network::service::NetworkWorker<B,H>::next_action::{{closure}}::{{closure}}::{{closure}}
16: <futures_util::future::poll_fn::PollFn as core::future::future::Future>::poll
17: <futures_util::future::future::fuse::Fuse as core::future::future::Future>::poll
18: sc_service::build_network_future::{{closure}}::{{closure}}::{{closure}}
19: <futures_util::future::poll_fn::PollFn as core::future::future::Future>::poll
20: <core::panic::unwind_safe::AssertUnwindSafe as core::future::future::Future>::poll
21: <futures_util::future::select::Select<A,B> as core::future::future::Future>::poll
22: <tracing_futures::Instrumented as core::future::future::Future>::poll
23: tokio::runtime::context::blocking::BlockingRegionGuard::block_on
24: std::panicking::try
25: tokio::runtime::task::harness::Harness<T,S>::poll
26: std::sys_common::backtrace::__rust_begin_short_backtrace
27: core::ops::function::FnOnce::call_once{{vtable.shim}}
28: <alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/alloc/src/boxed.rs:2015:9
<alloc::boxed::Box<F,A> as core::ops::function::FnOnce>::call_once
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/alloc/src/boxed.rs:2015:9
std::sys::unix::thread::Thread::new::thread_start
at rustc/07dca489ac2d933c78d3c5158e3f43beefeb02ce/library/std/src/sys/unix/thread.rs:108:17
29:
30:
Thread 'tokio-runtime-worker' panicked at 'A Tokio 1.x context was found, but it is being shutdown.', /home/monkeydog/.cargo/registry/src/index.crates.io-6f17d22bba15001f/tokio-1.33.0/src/runtime/time/entry.rs:557
This is a bug. Please report it at:
```
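The backtrace runs through libp2p_mdns's timer before hitting the Tokio shutdown panic, so disabling mDNS discovery may be a reasonable stopgap while the root cause is investigated. This is a sketch under the assumption that the deployed polkadot binary exposes the `--no-mdns` flag that Substrate-based nodes generally provide; confirm against your binary's `--help` output before relying on it.

```shell
# Hedged mitigation sketch: turn off mDNS peer discovery, the subsystem
# whose timer appears in the backtrace. --no-mdns is assumed to exist on
# this build (it is a standard Substrate network flag); verify first:
polkadot --help | grep -- --no-mdns

# Then restart the node with mDNS disabled (other flags unchanged):
polkadot --no-mdns
```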