dappnode / DAppNodePackage-nethermind

Nethermind Ethereum client

Nethermind Pruning #123

Open CMTRACE opened 1 year ago

CMTRACE commented 1 year ago

Hey, looking to prune Nethermind or, ideally, set up online pruning to trigger when disk space is low (at 250 GB, for example).

How can this be achieved? Under the "Extra Opts" config, no argument I pass to Nethermind seems to work; every time, the logs report it as an invalid argument.

https://docs.nethermind.io/nethermind/ethereum-client/configuration/pruning

If Extra Opts isn't working, can we expose the config file for editing through the DAppNode UI?

pablomendezroyo commented 1 year ago

Hey! Thanks for the suggestion. It looks like the way to trigger Nethermind pruning is through env vars. We can easily add the needed envs to the Nethermind package with a nullish (empty) default value. You could then set the value you want in the package config view and trigger the prune.

This should be implemented for all Nethermind networks (Goerli, Gnosis, and mainnet).

@tropicar thoughts?

CMTRACE commented 1 year ago

Hey, thanks for the reply. Are you saying this is configurable now? Sorry if I misunderstand.

pablomendezroyo commented 1 year ago

Not right now, but it would be really easy to implement:

  1. Add the needed env to the setup wizard: https://github.com/dappnode/DAppNodePackage-nethermind/blob/master/setup-wizard.yml
  2. Add the env to the compose file https://github.com/dappnode/DAppNodePackage-nethermind/blob/fe0f6e2ed0bc2b8c128d6dc1773af5a5da99a40e/docker-compose.yml#L14
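For reference, a minimal sketch of what those two changes could look like. The env name NETHERMIND_PRUNING_MODE, the service name, and the exact setup-wizard schema are assumptions for illustration only; check them against the linked files:

```yaml
# setup-wizard.yml (sketch): expose a hypothetical pruning env in the package UI
fields:
  - id: nethermindPruningMode
    target:
      type: environment
      name: NETHERMIND_PRUNING_MODE   # hypothetical env name
    title: Nethermind pruning mode
    description: "Value passed to --Pruning.Mode (e.g. Hybrid). Leave empty to keep the default."
    required: false
---
# docker-compose.yml (sketch): add the same env with a nullish default
services:
  nethermind:                         # service name assumed
    environment:
      - NETHERMIND_PRUNING_MODE=      # empty by default; set it from the package config view
```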

NelsonMcKey commented 1 year ago

Bumping this, because there seems to be quite a bit of demand from the community (and my validator is getting full!)...

@tropicar

ilcato commented 1 year ago

@tropicar, any ETA? I'm reaching my SSD limit.

Archethect commented 1 year ago

Same here, would be cool to have it.

pablomendezroyo commented 1 year ago

@dsimog01 let's push this!

CMTRACE commented 1 year ago

Hey, is there any update on this? I think it would be critical to accompany the update with some help text describing an optimal setup for DAppNode that balances disk usage and performance of the Nethermind client.

It's very unclear from the Nethermind docs what settings would be optimal for a home validator, and incorrect settings can cause performance issues and a poor user experience.

ilcato commented 1 year ago

@tropicar, @pablomendezroyo any news on this?

kamilchodola commented 1 year ago

Why is EXTRA_OPTS not working? It should be as straightforward as setting these flags: --Pruning.Mode=Hybrid --Pruning.FullPruningThresholdMb=250000 --Pruning.FullPruningTrigger=VolumeFreeSpace. I'm using EXTRA_OPTS to pass settings for Seq and Grafana and everything works smoothly, so maybe this is just a matter of a valid config?
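In case it helps anyone testing this in the meantime, a sketch of how those flags would be passed through the EXTRA_OPTS env already discussed in this thread (the service name is an assumption; the same string can also be pasted into the "Extra Opts" field in the UI):

```yaml
# Sketch: full-pruning flags passed via EXTRA_OPTS in docker-compose.yml
services:
  nethermind:   # service name assumed
    environment:
      - EXTRA_OPTS=--Pruning.Mode=Hybrid --Pruning.FullPruningThresholdMb=250000 --Pruning.FullPruningTrigger=VolumeFreeSpace
```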

dsimog01 commented 1 year ago

@kamilchodola What do you think of adding a boolean env ENABLE_PRUNNING that adds these flags?

CMTRACE commented 1 year ago

> @kamilchodola What do you think of adding a boolean env ENABLE_PRUNNING that adds these flags?

I'd argue for freedom to configure different settings.

For example, the recent Nethermind update performs better with more RAM assigned as cache, but with diminishing returns.

So the cache size needs to be configurable at a minimum. The number of cores used for pruning should also be configurable, to avoid saturating the CPU and negatively impacting attestations, etc.
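For concreteness, these are the kinds of knobs meant here, again sketched as EXTRA_OPTS values. The flag names should be verified against the Nethermind pruning docs, and the numbers are illustrative rather than recommendations for DAppNode hardware:

```yaml
# Sketch: pruning tuning flags that would need to be user-configurable (values are examples only)
services:
  nethermind:   # service name assumed
    environment:
      - EXTRA_OPTS=--Pruning.CacheMb=2048 --Pruning.FullPruningMaxDegreeOfParallelism=2
```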

If there isn't going to be a preset default that is optimal for DAppNode hardware, it should be configurable so people can test what's right for them (based on the Nethermind documentation).

To avoid burdening the package authoring team with constantly ensuring all environment variables are mapped to the options fields in the UI, how hard would it be to have a way for people to edit the compose file of a live package so they are able to add non-default fields themselves?

kamilchodola commented 1 year ago

@dsimog01 Agree with @CMTRACE - better to just ensure that passing flags works. Pruning should be configured according to each user's needs, with some additional fine-tuning, etc.

dsimog01 commented 1 year ago

> @dsimog01 Agree with @CMTRACE - better to just ensure that passing flags works. Pruning should be configured according to each user's needs, with some additional fine-tuning, etc.

I agree. Can you make the necessary changes to allow this?

CMTRACE commented 1 year ago

Hi there, bumping this issue. Pruning has again been revamped in the latest client releases since this issue was opened almost a year ago.

Every time someone needs to free disk space they will ask in the support channels how to prune.

It's simply not supported in DAppNode, because end users can't add fields themselves in the UI and/or the package wrapper does not expose these fields.

End users are forced to drop their execution client chain DB entirely and wait for a complete resync. This takes many hours (about 6-8 on my gen 10 NUC with default settings) and is a terrible user experience.

NelsonMcKey commented 1 year ago

Timely. I tried to pass configs through EXTRA_OPTS.

Clearly something didn't stick, because I just hit 100% disk space and now I'm halfway through a multi-day downtime during a client hot swap. Not the end of the world, but I'd rather avoid it.

jakobhes commented 10 months ago

Any updates? I'm running out of disk space...

heueristik commented 7 months ago

Can you please fix this? This is an issue every user will face.

rick0ff commented 4 months ago

Are there any updates? On Discord I have read a couple of times that pruning is enabled by default for Nethermind. However, I did not find out how to check whether it really is enabled or what parameters are used by default (when the pruning takes place, thresholds, ...).

kamilchodola commented 4 months ago

CLARIFICATION: The term "pruning" in Nethermind can refer to two things:

  1. In-memory pruning (enabled and running by default) - keeps block data in memory and, when flushing to disk after some time, writes only the most important state-related data. Its goal is to reduce DB growth over time.
  2. Full pruning (enabled BUT NOT RUNNING by default) - executes a full online prune, which brings the state DB back to a size similar to a freshly synced DB.

Full pruning being enabled but not running means it is in MANUAL mode - you need to execute a JSON-RPC command (https://docs.nethermind.io/interacting/json-rpc-ns/admin#admin_prune) to start it. If you want it to run automatically, you need to add the proper extra flags to the startup command - please refer to this article: https://docs.nethermind.io/fundamentals/pruning
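For completeness, a sketch of the manual trigger. It assumes the Admin module is included in --JsonRpc.EnabledModules and that the node's JSON-RPC endpoint is reachable at localhost:8545; adjust the host/port for your package:

```bash
# Sketch: start a full prune manually via the admin_prune JSON-RPC method
curl -s -X POST http://localhost:8545 \
  -H 'Content-Type: application/json' \
  -d '{"jsonrpc":"2.0","method":"admin_prune","params":[],"id":1}'
```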

About adding some extra configs - we could potentially add some, but I'm not sure how helpful they would be; I will take a look at that.

The best idea I have so far is to add two dropdown selections in the "Config" tab with two fields:

  1. FullPruningTrigger (default Manual; possible to select VolumeFreeSpace or StateDbSize)
  2. FullPruningThreshold (default 256000; possible to change based on needs; only taken into consideration if the trigger is changed from Manual to another value).
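A rough sketch of how those two fields might look in setup-wizard.yml; the field schema, env names, and whether the setup wizard supports enum dropdowns are assumptions to be verified:

```yaml
# Sketch (hypothetical field and env names) of the two proposed config fields
fields:
  - id: fullPruningTrigger
    target: { type: environment, name: FULL_PRUNING_TRIGGER }      # hypothetical env
    title: Full pruning trigger
    description: "Maps to --Pruning.FullPruningTrigger"
    enum: [Manual, VolumeFreeSpace, StateDbSize]
  - id: fullPruningThresholdMb
    target: { type: environment, name: FULL_PRUNING_THRESHOLD_MB } # hypothetical env
    title: Full pruning threshold (MB)
    description: "Maps to --Pruning.FullPruningThresholdMb; only used when the trigger is not Manual"
```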

rick0ff commented 4 months ago

Thanks @kamilchodola for the clarification. My take is that full pruning is what this issue (and my comment) is about. I take it a solution for easy full pruning via the web UI is not yet planned, or the solution design is not yet final?