HelloZeroNet / ZeroNet

ZeroNet - Decentralized websites using Bitcoin crypto and BitTorrent network
https://zeronet.io

Feature request: Selfdestroying / temporary updates #1541

Open trenta3 opened 6 years ago

trenta3 commented 6 years ago

What

Allow users and the site owner to specify how long a .json file should be retained. After that amount of time, each ZN client should delete the file automatically.
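A minimal sketch of what such a client-side cleanup sweep could look like, assuming each file has an optional retention deadline available from its content.json (all names here are illustrative, not existing ZeroNet API):

import os
import time

def cleanup_expired_files(site_dir, retention_map):
    """Delete local data files whose retention deadline has passed.

    retention_map: {relative_path: retain_until_unix_timestamp_or_None}
    (a hypothetical structure; how the deadline would be declared is
    exactly what this issue is about).
    """
    now = time.time()
    for rel_path, retain_until in retention_map.items():
        if retain_until is None:
            continue  # no deadline set: keep the file indefinitely
        if now > retain_until:
            full_path = os.path.join(site_dir, rel_path)
            if os.path.exists(full_path):
                os.remove(full_path)  # only the local copy; every peer decides for itself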

Use cases

In ZeroMail, send each mail in a separate file and globally set them to expire after e.g. 1 month. This allows the other party to receive the email (and, with the archival plugin, to keep it permanently) while not cluttering everyone's hard disk space.

This could be the scaling solution for all sites where users publish data just to rely on its continuous presence online, but where the data is only useful to some of the other users or for a limited amount of time (for example a user-run news site, a board like zeroboard, or temporary notes on nullpaste).

Possible problems

There should be a clock that is agreed upon, or one can simply let each client use its own (there are already some points in the ZN code where it is assumed that client clocks are not too far from each other, i.e. within one day).
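A sketch of an expiry check that tolerates that one-day skew by adding a grace window before anything is treated as expired (the retain_until field is hypothetical):

import time

ONE_DAY = 24 * 60 * 60  # grace window for clock skew between peers

def is_expired(retain_until, now=None, grace=ONE_DAY):
    """Treat a file as expired only once it is past its deadline plus a
    grace window, so peers with slightly slow clocks do not drop content
    that other peers still consider valid."""
    if retain_until is None:
        return False  # no deadline: never expires
    if now is None:
        now = time.time()
    return now > retain_until + grace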

vRobM commented 6 years ago

I had a similar thought about having a rolling buffer which first follows a size limit and then a time limit.

Anything outside of those limits is purged.

That way personal ZN sites (desktops/mobile/IoT) with limited disk space can set a fixed amount and not worry, while larger nodes and public proxies can hold a lot more data and not fail when they run out of space (decreasing maintenance).
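A minimal sketch of such a purge pass, assuming the node tracks its locally stored data files and two user-configured budgets (names are illustrative, not existing ZeroNet code):

import os
import time

def purge_rolling_buffer(paths, max_total_bytes, max_age_seconds):
    """Purge local site data so it fits both a size budget and an age budget.

    paths: paths of locally stored data files.
    This only frees local disk space; it does not ask other peers to drop anything.
    """
    now = time.time()

    # Time limit: drop anything older than max_age_seconds.
    kept = []
    for path in paths:
        if now - os.path.getmtime(path) > max_age_seconds:
            os.remove(path)
        else:
            kept.append(path)

    # Size limit: drop the oldest remaining files until the total fits the budget.
    kept.sort(key=os.path.getmtime)  # oldest first
    total = sum(os.path.getsize(p) for p in kept)
    for path in kept:
        if total <= max_total_bytes:
            break
        total -= os.path.getsize(path)
        os.remove(path)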

Fresh content is always the most useful. Stale data doesn't need to be spread around to everyone.

The fetch should prioritize fresh content (which most apps already do) and older content will have to come from nodes that have it.

Thunder33345 commented 6 years ago

Remember that timestamps received from others should NEVER be trusted.

possible "attack": a node configured to keep all files and even including expired files and keep sharing them effectiveness: mild annoying, unless your data relied on the fact it will self destruct cost: data storage space mitigation: dont rely on nodes destroying said data

Possible suggestion: we could include a prune_time in the files: section of content.json:

...
"files": {
  "path/to.file": {
    "sha512": "...",
    "size": "...",
    "prune_time": "when the file should stop propagating through the network"
  }
}
...

prune_time will be the current signing time + a prune lock (user configured). Since this is spoofable, it shouldn't be relied upon by the site owner. Not set = the file stays indefinitely; this eliminates the need for an extra data field to hold data that won't be pruned (e.g. profileinfo.json, profilepicture.png) or for a data field to store files with different prune times.
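A sketch of how a client could fill in such an entry at signing time, under the assumptions above (prune_lock and build_file_entry are hypothetical names, not existing ZeroNet API):

import time

def build_file_entry(sha512, size, prune_lock=None):
    """Build one entry of the proposed files: map in content.json.

    prune_lock: retention period in seconds chosen by the end user;
    None means the file should never be pruned.
    """
    entry = {"sha512": sha512, "size": size}
    if prune_lock is not None:
        # prune_time = current signing time + user-configured prune lock.
        # This is spoofable by the signer, so site owners must not rely on it.
        entry["prune_time"] = int(time.time()) + int(prune_lock)
    return entry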

This would be easier, as we can just assume that if time > prune_time the file should be ignored/left alone: don't attempt to fetch or share said file, and you CAN delete it now if you need space (remember peers have zero obligation to delete it); the signer will get to it eventually and delete any traces of old files.
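A sketch of that check on the fetching/sharing side, assuming file_entry is the per-file dict from content.json shown above:

import time

def should_fetch_or_share(file_entry, now=None):
    """Decide whether to fetch or serve a file based on its optional
    prune_time. Past the deadline the file is simply ignored; deleting
    the local copy is allowed but optional, since peers have no
    obligation to destroy data."""
    if now is None:
        now = time.time()
    prune_time = file_entry.get("prune_time")
    if prune_time is None:
        return True  # no prune_time: the file stays indefinitely
    return now <= prune_time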

Drawback: the site owner cannot impose a prune_time, because the prune_time is generated from the end user's signing time.

Possible alternative use case: auto pruning / ephemeral posts; for example you could set up ZeroMe to automatically delete anything that's > 1 week old.

The problem that arises is that most site frameworks store everything in one bulky .json, which makes this feature unable to prune anything, and trying to edit the JSON while keeping the signature valid would be trickier. Perhaps a signature that "activates" after X time, with the file getting replaced by another after X time?
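One way around the bulky-single-.json problem, in the spirit of the per-mail files suggested earlier in the thread, would be to store each post in its own file so it can carry its own prune_time and be dropped without touching anything else; a hypothetical sketch:

import json
import os
import time

def write_post(data_dir, post_id, body):
    """Store each post in its own small file (e.g. data/posts/<id>.json)
    so old posts can be pruned one by one instead of editing one bulky
    data.json and breaking its signature. The returned path would get
    its own entry (and its own prune_time) in content.json."""
    post = {"post_id": post_id, "body": body, "added": int(time.time())}
    path = os.path.join(data_dir, "posts", "%s.json" % post_id)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(post, f)
    return path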

From what I see this will break backward compatibility: an old peer will probably start to reject peers for running out of storage space, since it does not treat prune_time-expired files as expired, and it will keep trying to fetch expired files.

lezsakdomi commented 5 years ago

What happens when the receiver's computer is not up and running during the given timeframe?

Anyway, the receiver can delete the file, sign its content.json and deploy the reduced storage...

@trenta3 Can you imagine any other use case?

Thunder33345 commented 5 years ago

Assuming that when the "thing"'s time is up, every node will prune it off themselves automatically unless they explicitly choose to keep it (remember it's a bad idea to use this for security; we should consider it as an approach to saving data space).

Users can have forum/ZeroMe posts auto-prune and mark useful ones to keep, etc., to reduce space and make things somewhat ephemeral.
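A sketch of that selection logic, assuming posts are stored per-file with an "added" timestamp and the user keeps a set of ids marked as "keep" (all names hypothetical):

import time

ONE_WEEK = 7 * 24 * 60 * 60

def select_posts_to_prune(posts, keep_ids, max_age=ONE_WEEK, now=None):
    """Return the posts that should be auto-pruned: everything older than
    max_age that the user has not explicitly marked to keep.

    posts: dicts with at least "post_id" and "added" (unix timestamp).
    keep_ids: set of post ids the user wants to retain.
    """
    if now is None:
        now = time.time()
    return [
        post for post in posts
        if post["post_id"] not in keep_ids and now - post["added"] > max_age
    ]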