Cameron-IPFSPodcasting / podcastnode-Start9-wrapper

IPFS Podcasting client/wrapper for Start9/embassyOS
GNU General Public License v3.0

Add config option to limit disk usage #1

Open 501st-alpha1 opened 1 year ago

501st-alpha1 commented 1 year ago

I'd like to be able to adjust the disk usage of the IPFS Podcasting service via the StartOS UI. I poked around in the IPFS web UI and managed to determine that by default it is limited to 10GB, which is good enough for now, but I would like to be able to adjust that without having to find a config file to edit manually. (I haven't created an account on https://ipfspodcasting.net/ yet, but it should be possible to adjust this without having an account.)

Cameron-IPFSPodcasting commented 1 year ago

Before Start9 (in Umbrel), there was a config setting to adjust the "StorageMax" variable in IPFS until I learned it was "just a suggestion". The value doesn't put a hard limit on disk usage. No matter the setting, IPFS will fill your disk as long as you keep pinning files.
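For reference, that setting is Datastore.StorageMax in the IPFS (Kubo) config. Below is a rough sketch of reading and changing it over the standard Kubo RPC API (port 5001); it's for illustration only, and whether this wrapper goes through the RPC API rather than editing the config file directly is an assumption on my part.

```python
import requests

# Kubo exposes its config over the local RPC API (default port 5001).
# Datastore.StorageMax is the setting discussed above; note it only
# guides garbage collection and does not hard-cap disk usage.
API = "http://127.0.0.1:5001/api/v0"

def get_storage_max():
    # /api/v0/config with a single "arg" returns {"Key": ..., "Value": ...}
    r = requests.post(f"{API}/config", params={"arg": "Datastore.StorageMax"})
    r.raise_for_status()
    return r.json()["Value"]

def set_storage_max(limit: str) -> str:
    # A second "arg" sets the value, e.g. "12GB"; the change typically
    # takes effect after the daemon restarts.
    r = requests.post(f"{API}/config",
                      params=[("arg", "Datastore.StorageMax"), ("arg", limit)])
    r.raise_for_status()
    return r.json()["Value"]

if __name__ == "__main__":
    print("current StorageMax:", get_storage_max())
```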

I've implemented a feature to prevent a disk-full condition, but I'm still working on a user-configurable setting, combined with automatic garbage collection (cleanup), to better manage disk usage.

501st-alpha1 commented 1 year ago

Before Start9 (in Umbrel), there was a config setting to adjust the "StorageMax" variable in IPFS until I learned it was "just a suggestion".

Ah, that is unfortunate.

as long as you keep pinning files.

To clarify, this service will pin any podcast episodes it is asked to download (not just episodes from shows that I have favorited), correct?

My main concern here is to limit the amount of data used by random episodes returned from work requests. If I favorite a podcast, then I'll probably have a decent idea of how much space that will use. Of course, if it's easier to just have one setting for the total amount of space allowed (perhaps if IPFS upstream fixes that issue), that is fine too.

As it is right now, do you have a general idea of how much disk space this service will use for random work-request pins? Will it just depend on how many files are offered by the ipfspodcasting.net API?

Cameron-IPFSPodcasting commented 1 year ago

I'm planning to provide a "usage slider" to adjust the percentage of disk space available to IPFS. Once you reach your limit, the server would stop sending work until other shows have expired and/or your usage drops below the limit. The server would also send the occasional "Clean up" command to purge old unpinned files.

Technically, the client will pin anything sent by the server. The server scans all the feeds & determines what should be pinned/unpinned. The client asks for work and gets a task (if any).
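As a rough illustration of that cycle with the planned usage limit layered on top: the sketch below is mine, the ipfspodcasting.net endpoint name, payload fields, datastore path, and the usage-limit value are placeholders rather than the real API, and only the pin/unpin calls are the standard Kubo RPC endpoints.

```python
import shutil
import requests

IPFS_API = "http://127.0.0.1:5001/api/v0"
WORK_URL = "https://ipfspodcasting.net/api/request"  # hypothetical endpoint name
USAGE_LIMIT_PCT = 80        # the "usage slider" value, as a percent of the disk
DATASTORE_PATH = "/root/.ipfs"  # placeholder path to the IPFS datastore

def disk_usage_pct(path=DATASTORE_PATH):
    # Percent of the filesystem holding the IPFS datastore that is in use.
    usage = shutil.disk_usage(path)
    return 100 * usage.used / usage.total

def request_work():
    # Ask the server for a task; the field names here are illustrative only.
    r = requests.post(WORK_URL, data={"version": "0.6"}, timeout=30)
    r.raise_for_status()
    return r.json()

def run_once():
    if disk_usage_pct() >= USAGE_LIMIT_PCT:
        # Over the limit: skip the work request so the server stops
        # assigning new episodes until usage drops.
        return
    task = request_work()
    if task.get("pin"):
        # Pin whatever CID the server asked for via the Kubo RPC API.
        requests.post(f"{IPFS_API}/pin/add",
                      params={"arg": task["pin"]}, timeout=600)
    if task.get("unpin"):
        requests.post(f"{IPFS_API}/pin/rm",
                      params={"arg": task["unpin"]}, timeout=60)

if __name__ == "__main__":
    run_once()
```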

For anonymous nodes, usage is minimal. Your node is only sent episodes that are using the free/48-hour hosting. Currently, an episode is only sent to 2 nodes and expires after 48 hours without a download/play.

So with ~70 nodes active right now, you have a 2-in-70, or 3% chance your node will get the task when the episode is published. I looked up the nodes with no favorites (random/anonymous nodes only), and the average number of shows per node was between 5 and 6.

If you had an account on the website ;) you could see your pinned files and when they expire. These are the 48-hour/free files I have on my node: [screenshot of the pinned files listed on the web dashboard]

I still have to implement automatic garbage collection, but if you run a manual "Clean up" once in a while (from the Start9 Actions menu), your IPFS datastore should stay under 1GB.
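Assuming the "Clean up" action boils down to an IPFS garbage-collection pass (an assumption, not a description of the wrapper's actual action script), the same operation can be triggered directly against the Kubo RPC API:

```python
import requests

# Run garbage collection on the local Kubo node. This removes blocks
# that are no longer pinned, which is what shrinks the datastore once
# old episodes have been unpinned by the server.
r = requests.post("http://127.0.0.1:5001/api/v0/repo/gc", timeout=600)
r.raise_for_status()
for line in r.text.splitlines():
    print(line)  # one JSON object per removed block
```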

501st-alpha1 commented 1 year ago

usage slider

Sounds great!

For anonymous nodes, usage is minimal. Your node is only sent episodes that are using the free/48-hour hosting. Currently, an episode is only sent to 2 nodes and expires after 48 hours without a download/play.

So with ~70 nodes active right now, you have a 2-in-70, or 3% chance your node will get the task when the episode is published. I looked up the nodes with no favorites (random/anonymous nodes only), and the average number of shows per node was between 5 and 6.

Makes sense, thanks for the info!