Closed jsvisa closed 1 year ago
This issue is stale because it has been open 14 days with no activity. Remove stale label or comment or this will be closed in 14 days.
Hi, pruning is available as a subcommand of the new bor CLI since bor v0.3.3. Please refer to this post for more info (specifically the changelog part). Hope this helps.
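For reference, a minimal sketch of how the new CLI's pruning is typically invoked. The subcommand name and flags below are assumptions modeled on geth's `snapshot prune-state`; verify against `bor --help` for your version, and stop the node before pruning:

```shell
# Stop the running bor node first, then prune the stale state.
# Subcommand name and flags are assumptions; check "bor --help".
bor snapshot prune-state --datadir /var/lib/bor/data

# Restart the node once pruning completes.
```

Note this is stale *state* pruning, not ancient block pruning, as discussed below.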
Also, just FYI, we don't support snap sync yet as it doesn't work out of the box for Polygon mainnet. You might want to run a full sync node instead.
@manav2401 this is different though. geth/bor's pruning is stale state pruning, whereas this request is for ancient block pruning.
@jsvisa +1, would love to see this (ideally with EIP-4444) implemented in bor
@petejkim Sorry, this is not fully EIP-4444; it would only prune the old historical data and would not handle the p2p aspects.
Yep, which is why I said "ideally"...because I'd like to see it happen.
Yeah, I'm also looking for that. Let's make it happen.
/ping
Hi @jsvisa, the team is currently working on a few high-priority releases, so we couldn't take this up. We will definitely try to look into this soon (I don't have a timeline, though; maybe the latter half of this month). Thanks for your patience.
I just have the feeling that this is going to take years to get working.
@kmalloc I understand your concerns but I want you to know that we will take this up soon depending on availability. The team is working on some prior commitments.
This issue was closed because it has been stalled for 28 days with no activity.
Rationale
I'm running a new snap-sync node with a fresh snapshot downloaded from https://snapshots.matic.today. After syncing finished, I found that 1.1 TB of the 1.6 TB of local chaindata is ancient data, which uses far too much disk:
The old ancient data is useless in most cases, so if we supported ancient data pruning, we could use much less disk space.
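For context, the ancient (frozen) block data lives in a separate freezer directory under chaindata, so its share of the disk is easy to check. A sketch with standard tools; the datadir path below is illustrative, adjust for your setup:

```shell
# Paths are illustrative; substitute your actual --datadir.
du -sh /var/lib/bor/data/bor/chaindata           # total chain data
du -sh /var/lib/bor/data/bor/chaindata/ancient   # ancient (frozen) blocks only
```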
Implementation
It seems Binance Smart Chain already supports this feature (merged in #543); maybe we can backport this feature into go-ethereum.
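For comparison, a hedged sketch of how the BSC fork exposes block pruning in its geth binary. The subcommand name and flags are assumptions based on BSC's fork and may differ by version; verify with `geth snapshot --help` on the BSC binary:

```shell
# BSC fork only; stop the node before running.
# Subcommand and flags are assumptions; check "geth snapshot --help".
geth snapshot prune-block --datadir /data/node --block-amount-reserved 1024
```

The idea is to truncate the tail of the freezer while reserving a configurable number of recent blocks, which is the behavior this issue asks bor to adopt.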