moonbeam-foundation / moonbeam

An Ethereum-compatible smart contract parachain on Polkadot
https://moonbeam.network
GNU General Public License v3.0

Sending a request too rapidly after node startup makes it crash #2917

Open · rbauduin opened this issue 2 weeks ago

rbauduin commented 2 weeks ago

I'm running my tests via a Makefile, which starts the local moonbeam node with docker if it is not yet running. However, sending a query (to deploy the contract under test) to the freshly started node regularly (though not always) results in a crash with this log:

====================

Version: 0.39.1-dcb449dafaf

   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::option::expect_failed
   7: <moonbeam_service::PendingConsensusDataProvider<RuntimeApi> as fc_rpc::eth::pending::ConsensusDataProvider<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::OpaqueExtrinsic>>>::create_digest
   8: fc_rpc::eth::pending::<impl fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC>>::pending_runtime_api::{{closure}}
   9: <fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC> as fc_rpc_core::eth::EthApiServer>::estimate_gas::{{closure}}
  10: jsonrpsee_core::server::rpc_module::RpcModule<Context>::register_async_method::{{closure}}::{{closure}}
  11: <sc_rpc_server::middleware::Middleware<S> as jsonrpsee_server::middleware::rpc::RpcServiceT>::call::{{closure}}
  12: jsonrpsee_server::server::handle_rpc_call::{{closure}}
  13: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
  14: <jsonrpsee_server::middleware::http::proxy_get_request::ProxyGetRequest<S> as tower_service::Service<http::request::Request<hyper::body::body::Body>>>::call::{{closure}}
  15: <tower::util::either::Either<A,B> as core::future::future::Future>::poll
  16: sc_rpc_server::start_server::{{closure}}::{{closure}}::{{closure}}::{{closure}}::{{closure}}
  17: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
  18: <hyper::server::server::new_svc::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
  19: tokio::runtime::task::raw::poll
  20: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  21: tokio::runtime::scheduler::multi_thread::worker::run
  22: tokio::runtime::task::raw::poll
  23: std::sys_common::backtrace::__rust_begin_short_backtrace
  24: core::ops::function::FnOnce::call_once{{vtable.shim}}
  25: std::sys::pal::unix::thread::Thread::new::thread_start
  26: <unknown>
  27: clone

Thread 'tokio-runtime-worker' panicked at 'Nimbus pre-runtime digest should be present', /moonbeam/node/service/src/lib.rs:1818
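For what it's worth, the panic sits in the pending-block path of the Eth RPC (frames 7 to 9 above), so any call that needs pending state, such as the eth_estimateGas issued while deploying a contract, can hit it. A hand-rolled reproduction along these lines should trigger the same race (container name, port mapping, and the from address, one of the well-known dev accounts, are illustrative):

# Start a dev node and immediately hit the pending-state RPC path,
# before the node has authored any block:
docker run -d -p 9944:9944 --name moonbeam-dev \
  moonbeamfoundation/moonbeam:v0.39.1 --dev --rpc-external
curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_estimateGas","params":[{"from":"0xf24FF3a9CF04c71Dbc94D0b566f7A27B94566cac","data":"0x00"}]}' \
  http://127.0.0.1:9944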

I had this happen for a query sent after the log message Development Service Ready, but before the first log line of this kind:

 💤 Idle (0 peers), best: #0 (0xa2f3…54fa), finalized #0 (0xa2f3…54fa), ⬇ 0 ⬆ 0

My workaround is to wait for the occurrence of this Idle log message before sending any query; a minimal sketch of such a wait loop follows.
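A minimal sketch, assuming the node runs in a container named dev-A (the name is borrowed from the docker command in a later comment) and polling docker logs once per second; the 60-second timeout is arbitrary:

#!/bin/sh
# Poll the container logs until the node prints its first "Idle" line.
for i in $(seq 1 60); do
  if docker logs dev-A 2>&1 | grep -q 'Idle (0 peers)'; then
    echo "node ready"; exit 0
  fi
  sleep 1
done
echo "timed out waiting for node to become idle" >&2; exit 1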

I'm happy to help reproduce it with our code if you want.

noandrea commented 2 weeks ago

Thanks for reporting, we'll look into it.

gullomagico commented 1 week ago

Hi, I get a similar error even when I wait for the "Idle" log. It happens when I try to deploy a contract with hardhat. Here are the logs and the docker run command:

docker run -d -p 9944:9944 -p 30333:30333 -p 30334:30334 --name dev-A moonbeamfoundation/moonbeam:v0.39.1 --dev --rpc-external --discover-local --log libp2p=trace,debug --sealing instant
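One detail visible in the log excerpt below: even after the Idle line, best is still #0. With --sealing instant a block is only authored once a transaction arrives, so the chain sits at genesis, whose header carries no pre-runtime digest, which lines up with the expect() failure in create_digest. A quick way to confirm the node is still at genesis before deploying (a plain Ethereum JSON-RPC call; "0x0" in the result means the head is still the genesis block):

curl -s -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","id":1,"method":"eth_blockNumber","params":[]}' \
  http://127.0.0.1:9944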

2024-09-05 09:07:54.886 DEBUG tokio-runtime-worker sync: Propagating transactions    
2024-09-05 09:07:56.701  INFO tokio-runtime-worker substrate: 💤 Idle (0 peers), best: #0 (0xa2f3…54fa), finalized #0 (0xa2f3…54fa), ⬇ 0 ⬆ 0    
2024-09-05 09:07:57.786 DEBUG tokio-runtime-worker sync: Propagating transactions    
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker wasm_overrides: Neither WASM override nor substitute available for block 0xa2f3…54fa, using onchain code    
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Local node trie cache dropped: empty
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Local value trie cache dropped: local hit rate = 0% [0/2], shared hit rate = 100% [2/2]
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Updated the shared node cache: 0 accesses, 0 new values, 0/1003 evicted (length = 331, inline size=406032/25985040, heap size=2077682/27702051)
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker trie-cache: Updated the shared value cache: 2 accesses, 0 new values, 0/821 evicted (length = 271, known_storage_keys = 271, inline size=61968/7929872, heap size=15404/5491901)
2024-09-05 09:07:58.752 DEBUG tokio-runtime-worker wasm_overrides: Neither WASM override nor substitute available for block 0xa2f3…54fa, using onchain code    
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker wasm-heap: allocator dropped: AllocationStats { bytes_allocated: 32, bytes_allocated_peak: 40, bytes_allocated_sum: 56, address_space_used: 56 }    
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Local node trie cache dropped: empty
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Local value trie cache dropped: local hit rate = 0% [0/3], shared hit rate = 100% [3/3]
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Updated the shared node cache: 0 accesses, 0 new values, 0/1003 evicted (length = 331, inline size=406032/25985040, heap size=2077682/27702051)
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker trie-cache: Updated the shared value cache: 3 accesses, 0 new values, 0/821 evicted (length = 271, known_storage_keys = 271, inline size=61968/7929872, heap size=15404/5491901)
2024-09-05 09:07:58.753 DEBUG tokio-runtime-worker rpc_metrics: [http] eth_chainId call took 509 μs    

====================

Version: 0.39.1-dcb449dafaf

   0: sp_panic_handler::set::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::option::expect_failed
   7: <moonbeam_service::PendingConsensusDataProvider<RuntimeApi> as fc_rpc::eth::pending::ConsensusDataProvider<sp_runtime::generic::block::Block<sp_runtime::generic::header::Header<u32,sp_runtime::traits::BlakeTwo256>,sp_runtime::OpaqueExtrinsic>>>::create_digest
   8: fc_rpc::eth::pending::<impl fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC>>::pending_runtime_api::{{closure}}
   9: <fc_rpc::eth::Eth<B,C,P,CT,BE,A,CIDP,EC> as fc_rpc_core::eth::EthApiServer>::estimate_gas::{{closure}}
  10: jsonrpsee_core::server::rpc_module::RpcModule<Context>::register_async_method::{{closure}}::{{closure}}
  11: <sc_rpc_server::middleware::Middleware<S> as jsonrpsee_server::middleware::rpc::RpcServiceT>::call::{{closure}}
  12: jsonrpsee_server::server::handle_rpc_call::{{closure}}
  13: <futures_util::future::future::Map<Fut,F> as core::future::future::Future>::poll
  14: <jsonrpsee_server::middleware::http::proxy_get_request::ProxyGetRequest<S> as tower_service::Service<http::request::Request<hyper::body::body::Body>>>::call::{{closure}}
  15: <tower::util::either::Either<A,B> as core::future::future::Future>::poll
  16: sc_rpc_server::start_server::{{closure}}::{{closure}}::{{closure}}::{{closure}}::{{closure}}
  17: hyper::proto::h1::dispatch::Dispatcher<D,Bs,I,T>::poll_catch
  18: <hyper::server::server::new_svc::NewSvcTask<I,N,S,E,W> as core::future::future::Future>::poll
  19: tokio::runtime::task::raw::poll
  20: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  21: tokio::runtime::scheduler::multi_thread::worker::run
  22: tokio::runtime::task::raw::poll
  23: std::sys_common::backtrace::__rust_begin_short_backtrace
  24: core::ops::function::FnOnce::call_once{{vtable.shim}}
  25: std::sys::pal::unix::thread::Thread::new::thread_start
  26: <unknown>
  27: <unknown>

Thread 'tokio-runtime-worker' panicked at 'Nimbus pre-runtime digest should be present', /moonbeam/node/service/src/lib.rs:1818

This is a bug. Please report it at:

        https://github.com/moonbeam-foundation/moonbeam/issues/new

2024-09-05 09:07:58.796 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100
2024-09-05 09:07:58.797 DEBUG tokio-runtime-worker rpc_metrics: [http] eth_blockNumber call took 50 μs    
2024-09-05 09:07:58.809 DEBUG tokio-runtime-worker jsonrpsee-server: Accepting new connection 1/100