We've identified a number of weaknesses in the hydra design and implementation that cause ungraceful failures (worker crashes) and downtime when utilization spikes. The problem occurred in the window 7/7/2021 to 7/21/2021.
Problem analysis (theory)
The backend Postgres database can become overloaded under a high volume of DHT requests to the hydras. This causes query times to the database to increase, which in turn causes DHT requests to back up in the provider manager loop, eventually crashing the hydra nodes.
Acceptance criteria
Verify that a sustained increase in request load at the hydra level does not propagate to the Postgres backing datastore. This should be ensured by measures for graceful degradation of quality (above) in the DHT provider manager.
@petar : thanks for putting this together. A few comments/questions coming to mind:
I'm not saying we need to backfill now, but in future I think it would be ideal to include the data that led us to our theory.
Do we know why we're crashing now vs. not previously?
What's the impact of Hydra nodes crashing? Does the whole network see impact? Or is our ability to monitor/inspect the network impaired?
Is there anything else, architecturally or infrastructure-wise, that we could do to help here? I'm not saying we should, but for example, would AWS RDS Postgres Aurora help here?
You don't need to answer these questions here. They are the things that came to mind while reading this.