owncloud / ocis

ownCloud Infinite Scale Stack
https://doc.owncloud.com/ocis/next/
Apache License 2.0

Cleanup for storage-users uploads #9797

Open JonnyBDev opened 1 month ago

JonnyBDev commented 1 month ago

Is your feature request related to a problem? Please describe.

Our users upload large files, and sometimes their uploads fail due to a bad internet connection or the notebook lid being closed. The partial files are kept in the folder /storage/users/uploads. We had a full disk last weekend because one user uploaded some big videos and the upload got interrupted. We use external storage (S3) for spaces, and because of that storage engine we did not equip the VM with large disks. Since multiple uploads got interrupted, our disk went to 100% and we couldn't do anything, not even run the clean command inside the container, because the disk was full. Looking at our monitoring graphs, we've seen a steady linear increase over the last two months. This could have been prevented if we had some mechanism to automatically clean up non-processing, expired uploads.

Describe the solution you'd like

One maintainer gave an example of a fix for this: a goroutine, like the ones already used for other background tasks, that periodically runs the clean command for non-processing, expired files. Being able to configure the job would be the cherry on top. A rough sketch of what that could look like is below.
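A minimal sketch of such a background job, assuming a hypothetical purgeExpiredUploadSessions helper that wraps the same logic as `ocis storage-users uploads sessions --clean`; the function names and the configuration knob are invented for illustration, not the actual ocis API:

```go
package main

import (
	"context"
	"log"
	"time"
)

// purgeExpiredUploadSessions is a hypothetical stand-in for whatever the
// storage-users service would call to remove non-processing, expired
// upload sessions from /storage/users/uploads.
func purgeExpiredUploadSessions(ctx context.Context) error {
	// ... walk the uploads directory and drop sessions that are expired
	// and not currently processing ...
	return nil
}

// runUploadCleanup runs the purge on a configurable interval until the
// context is cancelled, mirroring how periodic background jobs are
// commonly structured in Go services.
func runUploadCleanup(ctx context.Context, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := purgeExpiredUploadSessions(ctx); err != nil {
				log.Printf("upload cleanup failed: %v", err)
			}
		}
	}
}

func main() {
	ctx := context.Background()
	// The interval could be driven by a config/env setting; the name
	// STORAGE_USERS_UPLOAD_CLEANUP_INTERVAL is an assumption, not a real option.
	go runUploadCleanup(ctx, 1*time.Hour)
	select {} // block forever in this sketch
}
```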

Describe alternatives you've considered

One alternative would be a cron job on the host system that execs into the container and runs the command, along the lines of the sketch below.
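A sketch of that workaround, assuming a Docker deployment; the container name "ocis" and the nightly schedule are assumptions about the setup:

```
# /etc/cron.d/ocis-upload-cleanup (container name "ocis" is an assumption)
0 3 * * * root docker exec ocis ocis storage-users uploads sessions --clean
```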

Additional context

See for more information on that topic

mmattel commented 1 week ago

Also see: #9962 (Patch Release 5.0.7)

MichaelSasser commented 1 week ago

I had the same issue, though I had alerts in place that warned me about it early, and I was able to clean the local upload storage manually by running `ocis storage-users uploads sessions --clean`. You can first inspect the files in question by omitting `--clean`.
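Spelled out as an inspect-then-clean sequence (both commands as given above, run wherever the ocis binary is available):

```sh
# list the current upload sessions to see what would be removed
ocis storage-users uploads sessions

# then actually remove the stale sessions
ocis storage-users uploads sessions --clean
```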

Having some kind of retention mechanism that kicks in when files pass a configured expiry time, or when the upload storage grows beyond a configured size limit, would really be appreciated.

One tip I got from a retired sysadmin some time ago: if you create a dummy file (e.g., with dd), say 10 GB in size, you can delete it when you run out of storage and deal with your problem, instead of being overwhelmed by everything you can't do with a full disk. A swapfile on the same partition works for this too (just remember to re-create it afterwards).
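A sketch of that ballast-file trick; the path is just an example and should sit on the partition you want to protect:

```sh
# create a 10 GB ballast file on the partition that tends to fill up
dd if=/dev/zero of=/var/lib/ocis/ballast bs=1M count=10240

# in an emergency, delete it to regain enough room to work
rm /var/lib/ocis/ballast
```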