ipfs / kubo

An IPFS implementation in Go
https://docs.ipfs.tech/how-to/command-line-quick-start/

Keep others’ IPNS records alive #1958

Open ion1 opened 8 years ago

ion1 commented 8 years ago
```
ipfs name keep-alive add <friend’s node id>
```

Periodically get and store the IPNS record and keep serving the latest seen version to the network until the record’s EOL.

ghost commented 8 years ago

You'll be able to pin IPNS records like anything else once we have IPRS

ion1 commented 8 years ago

Awesome

koalalorenzo commented 6 years ago

Waiting for this feature 👍

Falsen commented 6 years ago

But doesn't it make more sense for them to be pinned automatically by nodes? Or would that be too resource-heavy?

koalalorenzo commented 6 years ago

Consider that, if pinned, those records would have to be updated constantly via signatures, etc.

Stebalien commented 6 years ago

The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key. We expire them because the DHT isn't persistent and will eventually forget these records anyways. When it does, an attacker would be able to replay an old IPNS record from any point in time.

lockedshadow commented 5 years ago

> When it does, an attacker would be able to replay an old IPNS record from any point in time.

Is that really considered more dangerous than the possibility that everything published under a certain IPNS key practically disappears if a single (just one!) publisher node holding the private key disappears too? Doesn't this publisher node look like a central point of failure? Are outdated but valid records really worse than no records at all?

I think the ability to replay is not a critical security issue, at least on the condition that the user is explicitly notified that the obtained result could be outdated. After all, «it will always return valid records (even if a bit stale)», as mentioned in the 0.4.18 changelog.

So what do you think about a `--show-publish-time` flag on the `ipfs name resolve` command? Do the IPNS records themselves contain this data?

Stebalien commented 5 years ago

@lockedshadow I've been thinking about (and discussing) this and, well, you're right. Record authors should be able to specify a timeout, but there's no reason to remove expired records from the network. Whether or not to accept an expired record would be up to the client.

T0admomo commented 2 years ago

@Stebalien What is the best way to go about introducing this change to the protocol?

aschmahmann commented 2 years ago

@T0admomo since this is mostly a client and UX change rather than a spec one, I would propose what the UX should be, along with the various changes that would need to happen in order to enable it.

Some of the work here is in ironing out the UX, and some is in the implementation. Discussing your proposed plan in advance makes it easier to ensure that your work is likely to be reviewed and accepted.

Some related issues: #7572 #4435 #3117

2color commented 2 years ago

> The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key.

According to the IPNS spec, the signature is computed over the concatenated `value`, `validity`, and `validityType` fields.

That means that as long as the `validity` is in the future, there's no reason why nodes wouldn't republish the IPNS record.

Moreover, since the `validity` is controlled by the key holder when they sign the record, they have the flexibility to pick any validity, at the potential cost of users getting an expired/stale record (in the case where a new record published within the validity period hasn't propagated to all nodes holding the previous one). This is arguably better than getting no resolution at all, as pointed out by @lockedshadow.

Am I understanding this correctly?

bertrandfalguiere commented 2 years ago

> That means that as long as validity is in the future, there's no reason why nodes wouldn't republish the IPNS record.

I think this could be an attack vector: a malicious node could publish a lot of signed records with near-infinite validity. They would accumulate on the DHT and clog it sooner or later, and never be flushed out.

So other nodes need to reject very old records, even if the original publisher wanted them to have a very long validity.

(An attacker could also spawn many nodes and publish records from them, with the same effect.)

2color commented 2 years ago

> I think this could be an attack vector as a malicious node could publish a lot of signed records with infinite validity. They will accumulate on the DHT and clog it sooner or later, and never be flushed out.

I recently read that DHT nodes will drop stored values after ~24 hours, no matter what lifetime and TTL you set. So it's not really possible to clog the DHT or use this as an attack vector.

As far as I understand, clients don't reject old records as they have no way of knowing a record's age; they just drop them after 24 hours, when a record with a newer sequence number arrives, or once they expire (whichever comes first).

> (An attacker could also spawn many nodes and publish records from them, with the same effect)

I believe that this is what Fierro allows you to do, though without any malicious intent.

bertrandfalguiere commented 2 years ago

> As far as I understand, clients don't reject old records as they have no way of knowing a record's age; they just drop them after 24 hours, when a record with a newer sequence number arrives, or once they expire (whichever comes first).

Yes, you're right. Dropping records is not based on age; I oversimplified. The point is that they are gone from the DHT after some time if they are not republished, so they can't accumulate.

> I believe that this is what Fierro allows you to do, though without any malicious intent.

Yes, but since records are dropped after about 24 hours, they still can't accumulate.

cornwarecjp commented 3 weeks ago

When keeping someone else's IPNS record alive, what do you do when you learn about a new record for the same name? I see these possibilities:

An IPNS record is typically of little use without the data to which it points. I guess, in many applications, someone keeping the IPNS name alive might also want to (recursively) keep the pointed-to data alive ("recursive pinning"). If you've recursively pinned a name, and you receive an update for that name, that'd make you unpin the old pointed-to data, and pin the new pointed-to data. One potential issue with this is that the new data might be arbitrarily large, and therefore much larger than the storage space you'd be willing to spend on it. "Pinning the record" does not have this issue.

There are applications where receiving old data isn't harmful, and where receiving old data is always better than receiving no data. For such applications, "pinning the record" might be the preferred choice, in combination with an application process that gets to decide what to do with a record update. It might, for instance, make an application-level choice to pin only certain parts of the pointed-to DAG, to stay below a storage quota. Only once the pointed-to data is (partially) downloaded and pinned would the application replace the old pinned record with the new one.
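That application-level choice could be expressed as a small policy hook: accept a record only if its sequence number is higher, and let the application veto the swap (e.g. if re-pinning the new DAG would blow the storage budget) before the old record is replaced. A hypothetical sketch, not an existing Kubo API:

```go
package main

import "fmt"

// Record is a simplified stand-in for an IPNS record.
type Record struct {
	Sequence uint64
	Value    string // path the record points to, e.g. /ipfs/<cid>
}

// KeepAlive keeps a name's latest record, consulting an application
// callback before adopting (and re-pinning for) a newer one.
type KeepAlive struct {
	Current Record
	// ShouldAdopt lets the application veto an update, e.g. when the
	// newly pointed-to DAG would exceed its storage budget.
	ShouldAdopt func(old, next Record) bool
}

// Update returns true if the incoming record replaced the current one.
func (k *KeepAlive) Update(incoming Record) bool {
	if incoming.Sequence <= k.Current.Sequence {
		return false // stale or duplicate record: keep serving the current one
	}
	if k.ShouldAdopt != nil && !k.ShouldAdopt(k.Current, incoming) {
		return false // application chose to keep the old record pinned
	}
	k.Current = incoming
	return true
}

func main() {
	k := &KeepAlive{
		Current: Record{Sequence: 1, Value: "/ipfs/bafyOld"},
		// toy policy: always adopt newer records
		ShouldAdopt: func(old, next Record) bool { return true },
	}
	fmt.Println(k.Update(Record{Sequence: 2, Value: "/ipfs/bafyNew"})) // true
	fmt.Println(k.Update(Record{Sequence: 2, Value: "/ipfs/bafyDup"})) // false
	fmt.Println(k.Current.Value)                                      // /ipfs/bafyNew
}
```

The "pinning the record" variant from the comment above is simply this policy with a `ShouldAdopt` that defers adoption until the new pointed-to data has been (partially) fetched and pinned.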

cornwarecjp commented 2 weeks ago

As a poor man's solution, wouldn't it be possible to have an application run alongside Kubo that periodically polls Kubo for the name? If I understand correctly, Kubo caches objects for 24 hours after the last time they were touched, so if the application asks for the name every 12 hours, say, it will always stay in Kubo's cache.

As a bonus, the application could store a copy of the latest record received for the name. If Kubo somehow still loses the name, the application can re-upload the last-known record to Kubo[*]. This would double the storage requirement for names, but name records shouldn't be that big.

[*] apparently `/api/v0/routing/put` or `ipfs routing put` can do that, see #10484