zipiju closed this issue 1 month ago.
Didn't realize, but this node has pieces.delete-to-trash: false
configured, which might explain why the stats aren't updated but trash is being created.
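For anyone cross-checking their own node, this is a quick way to see whether the option is set; the path is illustrative, and no output means the default is in effect:
# grep delete-to-trash /mnt/storj-data/config.yaml
pieces.delete-to-trash: false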
Most likely related to https://github.com/storj/storj/issues/7012.
I have had a node on v1.109.2 for about 2 days, and I can confirm what OP is seeing: GC collects the garbage pieces into the trash, but does not update the trashed space at all.
I have also updated a node whose STORAGE allocation is less than its current used space, so it receives no ingress, and I have observed that the used space is not decreased during GC either.
I do not believe it is related to pieces.delete-to-trash: false, as my nodes have this setting at its default.
It may be related to 2f1eb2d or 3276989
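For what it's worth, a rough way to confirm that a retain (GC) run actually happened on such a node is to grep the logs; docker is assumed here and the exact message wording varies between versions:
# docker logs storagenode 2>&1 | grep -i retain | tail -n 5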
This issue has been mentioned on Storj Community Forum (official). There might be relevant details there:
https://forum.storj.io/t/v1-107-3-as-minimum-on-https-version-storj-io/27338/16
Configured the node to start with the lazy filewalker and also disabled delete-to-trash. The lazy startup filewalker finished an hour ago and the used space went down; the trash, however, is still sitting at 0 bytes even though there is at least 1 TB of trash:
# df -B 1000000000
Filesystem 1GB-blocks Used Available Use% Mounted on
/dev/mapper/storj 19842 17980 1863 91% /mnt/storj-data
2024-07-30T10:36:14Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-30T10:36:14Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-30T10:36:14Z INFO lazyfilewalker.used-space-filewalker subprocess started {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-30T10:36:14Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Process": "storagenode"}
2024-07-30T13:07:13Z INFO lazyfilewalker.used-space-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S"}
2024-07-30T13:07:13Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12EayRS2V1kEsWESU9QMRseFhdxYxKicsiFmxrsLZHeLUtdps3S", "Lazy File Walker": true, "Total Pieces Size": 6099783546796, "Total Pieces Content Size": 6081001932204}
2024-07-30T13:07:13Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-30T13:07:13Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-30T13:07:13Z INFO lazyfilewalker.used-space-filewalker subprocess started {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-30T13:07:13Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Process": "storagenode"}
2024-07-30T13:12:03Z INFO lazyfilewalker.used-space-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6"}
2024-07-30T13:12:03Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "121RTSDpyNZVcEU84Ticf2L1ntiuUimbWgfATz21tuvgk3vzoA6", "Lazy File Walker": true, "Total Pieces Size": 255566681856, "Total Pieces Content Size": 255031520000}
2024-07-30T13:12:03Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-30T13:12:03Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-30T13:12:03Z INFO lazyfilewalker.used-space-filewalker subprocess started {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-30T13:12:03Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Process": "storagenode"}
2024-07-30T13:32:51Z INFO lazyfilewalker.used-space-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs"}
2024-07-30T13:32:51Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "12L9ZFwhzVpuEKMUNUqkaTLGzwY9G24tbiigLiXpmZWKwmcNDDs", "Lazy File Walker": true, "Total Pieces Size": 1765576535040, "Total Pieces Content Size": 1763352852480}
2024-07-30T13:32:51Z INFO pieces used-space-filewalker started {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-30T13:32:51Z INFO lazyfilewalker.used-space-filewalker starting subprocess {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-30T13:32:51Z INFO lazyfilewalker.used-space-filewalker subprocess started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-30T13:32:51Z INFO lazyfilewalker.used-space-filewalker.subprocess Database started {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Process": "storagenode"}
2024-07-30T22:56:25Z INFO lazyfilewalker.used-space-filewalker subprocess finished successfully {"Process": "storagenode", "satelliteID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE"}
2024-07-30T22:56:25Z INFO pieces used-space-filewalker completed {"Process": "storagenode", "Satellite ID": "1wFTAgs9DP5RSnCqKV1eLf6N9wtk4EAtmN5DpSxcs8EjT69tGE", "Lazy File Walker": true, "Total Pieces Size": 7803492944384, "Total Pieces Content Size": 7786844273152}
# date
Wed Jul 31 00:04:02 UTC 2024
# ls /mnt/storj-data/data/storage/trash/pmw6tvzmf2jv6giyybmmvl4o2ahqlaldsaeha4yx74n5aaaaaaaa/2024-07-29/mm
225polsc2vu5urrmal72hdck2gw5omj4co2sqbl57w3tojgheq.sj1 ezqllhjeri4ltd3rpg5ylubfceuwqosfcdivh2l4nfm7i3uozq.sj1 orrnnjtiiirh5mqbsx5pesakz6hi6haej3sfzys56y2fambobq.sj1
24alkfgzggeasucpqt5s2glue2vtr6bqhzepn3klusw7btdu5q.sj1 f4nsno33vo2t2qi2vrczpwkrhanpmctsu6tcnkemnszor4t3sa.sj1 otsbtcwhoe65fxy2i7bkohajw5yrco4nu55r4bljftpvb6zm7q.sj1
25zn64vgh3sv5zwwiin5q26h76hebsfydmwgt2fu75yb6ret6a.sj1 fa7ewusbe35rrhxpx37sg6luotto2miam4fm2vpczxcumzwabq.sj1 ovsdhsegtnn3mxv6sku5lx2zknefcbe3szepfjj37mh3ksvo2q.sj1
27xxlbbxlckbuhlhw4kaobb4tgtlirzcbqu3axry6t5oyazwya.sj1 fab6lf5ijjeiklfwhutxvbyqhz5aofc3qfm2x7z62yvm2g52xa.sj1 ow7puy45kuooy5kmh5etrfwh7azjsnqeci3sdge7mu7c36umkq.sj1
2dakofjd23rctjie3xzioyrhjsnljp2fbgflkxf2wynqhvozeq.sj1 fc3yw7wc5aycwbd774anl5ssid5ur52nzviyaz4ahs26ax6hwa.sj1 oy6yqlvczwpbhjbydsqi3luayqt772qmpmkf7sksflbozqj42q.sj1
2fbi3ywv6gr2eijj3asokp6mpg5sqp2eclyemks6dfjtph62fq.sj1 fdd5iujpf63uqx2eclkmcne45wofheqs57i2zl4nqflux3ruza.sj1 p353p2bsmrgzugzledhe3ycxq5v6a2jndcatakjdius2eogweq.sj1
2hnedlk4s5tx6vqynktq62jiu7ywxa4twbhdnq7rrmpt5qekrq.sj1 ffu6h74qrgmphsbtredi5o62ctymcqskvodyluz7m7yhngzvca.sj1 p4sayuiitpxrymyaewqvp7o7skoibupbybz6i2oqjhnlj5k2kq.sj1 ... etc.
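Measuring the trash directory directly (same path as in the listing above) gives the on-disk total that the dashboard should be reporting, for example:
# du -s -B 1000000000 /mnt/storj-data/data/storage/trash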
After walking the satellites it also walks the trash, which is not logged.
I do not see that second process running anymore, but will check again tomorrow.
It just updated the stats. So the startup filewalker is okay, but the GC process, it looks like, isn't updating the stats at all.
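For reference, the figure the dashboard shows can also be read from the node's local API; a hedged example assuming the default dashboard port and the field name used by recent versions:
# curl -s http://localhost:14002/api/sno/ | python3 -m json.tool | grep -i trash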
This issue has been mentioned on Storj Community Forum (official). There might be relevant details there:
https://forum.storj.io/t/1-109-2-trash-size-is-no-longer-updated/27379/2
To confirm, the same is happening now on multiple nodes that were updated to 1.109.2, so this isn't a one-off or a problem with just that particular node.
This issue still appears in v1.110.0-rc.
Change "storagenode/pieces: update trash size on retain" mentions this issue.
Not sure another issue is warranted, but the collector does not appear to update the used space when deleting pieces either.
Thanks, I will check it and create a separate ticket if needed.
@Computron010 are you still seeing this issue? I couldn't reproduce it on my end so maybe I'm missing something and will need more details.
@mniewrzal I think what I was seeing was a coincidence of not having any expiring pieces to collect during my period of testing v1.109/v1.110. Right now, on v1.110.2, the used space is indeed being reduced when the collector runs.
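A hedged way to check whether a node has expired pieces waiting for the collector at all; the database and table names are given as I recall them, so verify with .tables first and adjust the path to wherever your databases live:
# sqlite3 /mnt/storj-data/data/storage/piece_expiration.db "SELECT COUNT(*) FROM piece_expirations;"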
Thanks!
This issue has been mentioned on Storj Community Forum (official). There might be relevant details there:
https://forum.storj.io/t/when-will-uncollected-garbage-be-deleted/27327/236
Have updated a 1.107.3 node to 1.109.2 because 1.107.3 got (as I understood from the debug endpoint) stuck on multiple retain calls and was using excessive CPU. After starting on the new version, the node either received a BF (bloom filter) or started processing the previously received ones. Monitoring the process, I saw trash subfolders slowly being created and files appearing in them. Now that the lazy gc-filewalker is done, the node is still showing 0 bytes of trash. Might there be some issue with updating the database in this version? The databases are on SSD.
Saltlake and US1: per-satellite figures were attached here but are not captured in this text.
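If the suspicion is that the persisted accounting itself is wrong, the space-usage database can be inspected directly; a hedged sketch with file and table names as I recall them, an illustrative path, and the node stopped (or the file copied) first to avoid locking issues:
# sqlite3 /mnt/storj-data/data/storage/piece_spaced_used.db ".tables"
# sqlite3 /mnt/storj-data/data/storage/piece_spaced_used.db "SELECT * FROM piece_space_used;"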