Workaround for the random, repeatedly occurring mass eventstream disconnects:
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=lodestar-reth-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=nimbus-besu-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=nimbus-geth-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=lodestar-nethermind-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=lodestar-besu-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=nimbus-nethermind-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=nimbus-reth-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=lodestar-geth-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=lodestar-ethereumjs-1 service=cl-pool
time="2024-08-26T11:12:00Z" level=warning msg="beacon block stream error: stream error: stream ID 25; INTERNAL_ERROR; received from peer" client=nimbus-ethereumjs-1 service=cl-pool
This affects lodestar & nimbus clients only (previously nimbus only). All of these connections appear to be killed at exactly the same time (on full minutes, like 11:12:00 above), even across different explorer instances.
This appears to be a Go bug related to HTTP/2: https://github.com/golang/go/issues/51323
This PR silently drops this specific error and immediately reconnects the affected event stream, skipping the usual reconnect delay.
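
A minimal sketch of the idea, not the actual implementation: the error matching heuristic and the function names below are illustrative only.

```go
package main

import (
	"errors"
	"strings"
	"time"
)

// isHTTP2InternalStreamError reports whether err looks like the spurious
// http2 "INTERNAL_ERROR" stream reset described above (golang/go#51323).
// Matching on the error string is a heuristic; names here are illustrative.
func isHTTP2InternalStreamError(err error) bool {
	if err == nil {
		return false
	}
	msg := err.Error()
	return strings.Contains(msg, "INTERNAL_ERROR") &&
		strings.Contains(msg, "received from peer")
}

// runEventStream keeps an event stream alive. When the known-benign http2
// error occurs it reconnects right away; any other error gets the usual
// reconnect delay before retrying.
func runEventStream(connect func() error, reconnectDelay time.Duration) {
	for {
		err := connect() // blocks until the stream terminates
		if err == nil {
			return
		}
		if isHTTP2InternalStreamError(err) {
			// Silently drop this specific error and reconnect immediately.
			continue
		}
		// Other errors: fall back to the normal delayed reconnect.
		time.Sleep(reconnectDelay)
	}
}

func main() {
	// Stub connect function for illustration; a real caller would open the
	// beacon block event stream here and block until it disconnects.
	calls := 0
	connect := func() error {
		calls++
		if calls < 3 {
			return errors.New("stream error: stream ID 25; INTERNAL_ERROR; received from peer")
		}
		return nil
	}
	runEventStream(connect, 5*time.Second)
}
```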