ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

IPNS garbage collection #7733

Open Mrcrypt opened 3 years ago

Mrcrypt commented 3 years ago

As far as I can see, all IPNS entries get stored in the MapDatastore of the DHT, but old entries are not removed once they reach their end of life. This could lead to a lot of memory usage and potentially crash the node.

In the DHT datastore the max record age is 36 hours, but records are only removed when a node tries to retrieve them. If no node ever requests an old entry, it is never removed, which can lead to a large buildup of entries.

I propose that all IPNS entries be garbage collected at a regular interval and removed once their end of life is reached, which can be well before the MaxRecordTime of the DHT.

welcome[bot] commented 3 years ago

Thank you for submitting your first issue to this repository! A maintainer will be here shortly to triage and review. In the meantime, please double-check that you have provided all the necessary information to make this process easy! Any information that can help save additional round trips is useful! We currently aim to give initial feedback within two business days. If this does not happen, feel free to leave a comment. Please keep an eye on how this issue will be labeled, as labels give an overview of priorities, assignments and additional actions requested by the maintainers.

Finally, remember to use https://discuss.ipfs.io if you just need general support.

bertrandfalguiere commented 3 years ago

This should be configurable.

aschmahmann commented 3 years ago

As far as I can see, all IPNS entries get stored in the MapDatastore of the DHT

Where do you see this? The DHT's internal record store is backed by your IPFS node's datastore, so it will persist the records to disk.

@Stebalien can we fix this trivially for badger by just having the DHT try to use the TTLDatastore interface (note: this probably requires updating the MountDatastore to support the TTLDatastore interface)?

It'd be nice if the DHT garbage collected expired records, and it'd be nice if we had a generic TTLDatastore wrapper we could use for datastores like LevelDB that don't natively support expiring records. I suspect @gammazero's ongoing work on making it easier to create new indices on top of a key-value store should make this easier.

Overall, I'm pretty sure this isn't currently a big deal in the scheme of things, given that IPNS DHT records are pretty small, certainly compared to storing any actual files. I'd be happy with the above solution, though.

Stebalien commented 3 years ago

@Stebalien can we fix this trivially for badger by just having the DHT try to use the TTLDatastore interface (note: this probably requires updating the MountDatastore to support the TTLDatastore interface)?

I'm not entirely sure how to make this work, but this seems like the right approach. We could also just run a scan a few times a day. As far as I know, we'll need a scan anyway to add rebalancing support.

Overall, I'm pretty sure this isn't currently a big deal in the scheme of things, given that IPNS DHT records are pretty small, certainly compared to storing any actual files. I'd be happy with the above solution, though.

I agree, although we'll need to fix it eventually.