Closed olizimmermann closed 1 year ago
Is this Issue still valid in NC21.0.2? If not, please close this issue. Thanks! :)
Still valid.
Does it work after you run `occ files:scan sz` and `occ files:cleanup`?
Nope, it doesn't work. Nextcloud still says 18.2 GB used, but the disk only uses 9.3 GB...
Hello,
It's a problem I discussed on the forums ... I use "Quota Warning" app in Nextcloud, but my problem is the opposite, for example in a NC 21.0.3 instance with a single user (he uses it to sync between 3 machines) the NC interface shows 94gb used and the "Quota warning" app reports 94% usage because I did set a 100GB quota for that user, so far so good.
Problem, if I check on the server itself (debian 10 up to date) the real usage is 152GB. I was explained that the quota do not account for the deleted files, the versions, etc ... so this makes user's quota completely useless, in this case I put a 100GB quota so the client is notified when reaching 90GB but on the server the client already reached 150GB ... This already lead to his NC beeing stuck because the server quota at 200GB was reached (and Debian blocked additional files) while the client thought everything was fine (he was seeing 15OGB usage).
I see the problem on all my NC instances (14), most of the time the storage as seen on the server largely exceeding the quota in NC ... I do not recall seeing the opposite like you do and generally I see the problem for all users not only one, so my problem might be different from you but I am continuously searching for new info on that problem so I thought I would follow this thread.
On an NC 24.0.2 instance (with only local storage) the NC 'Users' page shows disk usage that is way off for half of the users and (almost) correct for the other half.
Example: User 1: actual disk usage 151GB, displayed usage only 648.8MB; user 2: actual disk usage 376GB, displayed usage only 2.3GB
Neither `occ files:scan --all` nor `occ files:cleanup` fixes the issue.
This is on Linux with Apache 2.4.41 and PHP 7.4.3.
+1 for this issue.
Nextcloud deployed from docker container (Image - nextcloud:24.0.3)
`du -hs` for the ENTIRE Nextcloud data folder shows 28.3 GB, while the web interface shows 37.2 GB used for one user (who has most of this data).
No external storage. Versions and trashbin apps are enabled though.
Same here: user 1 real usage 19.8 GB -> Nextcloud shows 2.9 GB; user 2 real usage 41.2 GB -> Nextcloud shows 62 KB.
Ubuntu 20.04.4 LTS, Apache 2.4.41, php 7.4.3, NC 24.0.3
I have run into something similar to the issue reported here, it may be the same thing. My user has about 250GB of data but the UI reports only 78MB used. I had run out of my quota at 250GB and it was reporting I could not upload any more, but it was saying 78MB/78MB used. When I upped the quota to 500GB it now reports 78MB/250GB used. I deleted a large file from my instance that would have very easily sent that 78MB number into the negative by a few GB, and it didn't move. The total briefly showed 500GB after that, but upon refreshing the page it returned to the previous value. Something is wrong, none of the DB repair or file scan occ commands do anything. I updated to 24.0.4 today and there is no change. Access to files is not affected, just reporting of space used.
I see the same issue with 24.0.4.
One user has around 12 GB of data but Nextcloud is reporting 164 KB :D
Please fix that; this makes the quota feature really unusable.
We are seeing something similar to this, I believe (24.0.3).
It seems the updating of folder sizes after deleting is severely delayed or not happening at all, like it is "sticky": after deleting a file from a folder, the folder size is not updated and the old size keeps counting against the quota. Looking at the same folder via the filesystem or WebDAV shows it as 0 KB. This mostly affects our main user, which is used for sharing out folders to the other users.
In our case it's a shared Hetzner instance. They have no solution and can't reproduce it on their test system. They basically gave up and suggested posting on GitHub and hoping for the best.
I also see this issue, and it persists with NC 24.0.5:
du -hs ncdata/username
80G ncdata/username
We are also with Hetzner, and the reported "GB used" value in the web interface is almost half of the actual space used according to the Hetzner console. As there is no access via terminal or a web console, I'm having a hard time troubleshooting this. I suspect there are leftovers from failed uploads somewhere in the file system, as reported in other threads on similar topics. We tried deleting all individual as well as admin trash bins and looked through TBs of files to see if any excessive versioning was going on somewhere... without any success.
Is there really no way to show the allocation of data in the web interface? Or does anybody have any other idea what else we could try to get rid of this junk (almost 500 GB) that is obviously comfortably lurking around somewhere and blocking valuable space... without having access to the file structure or a command line?
TIA
So in my case it definitely is not leftovers from deleted files or versions; this is real data. I am hosting my own server, so I have full access to the CLI and database, but I'm not sure what info could help.
I am positive that this is just a problem in calculating the storage on the Nextcloud side, because both the user folder on the server and my locally synced data folder with the NC desktop client report > 70 GB of data being present.
I had the same issue. I ended up creating a backup of `oc_filecache`, running `TRUNCATE TABLE oc_filecache`, and then `occ files:scan --all`. Now it looks better.
edit: Don't try this at home, kids. People reported problems with this.
Yeah, that definitely helps; I just wonder why running `occ files:scan --all` did not help WITHOUT truncating the cache table in the database.
As suggested by @MichaelSp and confirmed by @Pinkbyte, the only way for me to fix this was to truncate the `oc_filecache` table and then run `files:scan --all`.
Thanks for sharing the workaround
Warning: only truncate `oc_filecache` if you are not using encryption; otherwise all files are treated as plain unencrypted (and therefore unusable) afterwards!
When I truncate the `oc_filecache` table and run `occ files:scan --all`, all my shares and favorites are gone. :(
Rather than truncating tables or other hacks I would really like to see a dev weigh in on this issue and fix it in the code.
using NC25 with the same issue!
Me, too. Even with NC25, the same issue occurs.
I had this too; the problem was that some of the size accounting was not up to date.
Background: to optimize size propagation, Nextcloud uses an algorithm that adds or subtracts size deltas across all parent folders of a modified path. If for some reason this didn't happen, for example if the PHP process got killed shortly after an upload, oc_filecache will contain wrong values.
Now, the `occ files:scan` command doesn't automatically fix sizes. I'd say the `occ files:scan` command should be extended to also fix folder sizes, since it is going to go through all folders anyway.
Thoughts? @icewind1991
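The delta-propagation scheme described above can be sketched as follows. This is a toy Python model, not the actual Nextcloud PHP code; the function and variable names are invented for illustration. It shows how a process dying between the file write and the propagation step leaves every ancestor folder with a stale size:

```python
# Toy model of delta-based size propagation: on each change only the
# delta is applied to the ancestors, instead of re-summing subtrees.
from typing import Dict


def ancestors(path: str):
    """Yield every ancestor folder of a path, e.g. a/b/c -> a/b, a, ''."""
    while path:
        path = path.rsplit("/", 1)[0] if "/" in path else ""
        yield path


def apply_delta(sizes: Dict[str, int], path: str, delta: int,
                crash_before_propagation: bool = False) -> None:
    """Record a size change, then propagate the delta to all parents."""
    sizes[path] = sizes.get(path, 0) + delta
    if crash_before_propagation:
        return  # simulates the PHP process being killed mid-way
    for parent in ancestors(path):
        sizes[parent] = sizes.get(parent, 0) + delta


sizes = {"": 0, "files": 0, "files/photos": 0}
apply_delta(sizes, "files/photos/a.jpg", 100)
apply_delta(sizes, "files/photos/b.jpg", 50, crash_before_propagation=True)

# The root now under-reports by 50 bytes: exactly the stale-cache symptom.
print(sizes[""])                  # 100, although 150 bytes exist
```

Because the error is a permanent offset rather than a transient glitch, only a recomputation (not another delta) can repair it, which is why a scan that reuses cached sizes never fixes the numbers.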
Hi, please update to at least 23.0.12 and report back if it fixes the issue. Thank you!
Nextcloud 25.0.2 with main storage encryption still shows wrong quota.
I experience the same issue on a 24.0.7 instance. Thanks for the explanation @PVince81. I agree that `occ files:scan` should recalculate and update the sizes, or at least we should provide an occ command to recalculate the file sizes.
Same here on 24.0.7.
same here on 25.0.1
Same issue with nextcloud 25.01
Is there any way to force Nextcloud to manually recalculate the used storage of users? On our instance, more and more users hit their quota due to this issue unfortunately (version 24.0.7).
Not solved after updating to the latest version, Nextcloud 25.0.2. I think it is a table-related issue in the DB.
yes, we need to extend occ files:scan to recompute folder sizes as per https://github.com/nextcloud/server/issues/25283#issuecomment-1302340133
Hmm, I just tested locally with `occ files:scan --all` after manually changing the sizes of folders and files in the database: after scanning, the file sizes and folder sizes were fixed correctly.
So I wonder if there's a specific file tree scenario where the size fixing doesn't propagate properly.
I've noticed that the scan command is actually telling the scanner to reuse sizes, i.e. not to recompute them: https://github.com/nextcloud/server/blob/master/lib/private/Files/Utils/Scanner.php#L256
Removing that flag might make it work, but I can't confirm, since it worked for me even with the flag set.
diff --git a/lib/private/Files/Utils/Scanner.php b/lib/private/Files/Utils/Scanner.php
index dc220bc710d..3474714508d 100644
--- a/lib/private/Files/Utils/Scanner.php
+++ b/lib/private/Files/Utils/Scanner.php
@@ -253,7 +253,7 @@ class Scanner extends PublicEmitter {
 			try {
 				$propagator = $storage->getPropagator();
 				$propagator->beginBatch();
-				$scanner->scan($relativePath, $recursive, \OC\Files\Cache\Scanner::REUSE_ETAG | \OC\Files\Cache\Scanner::REUSE_SIZE);
+				$scanner->scan($relativePath, $recursive, \OC\Files\Cache\Scanner::REUSE_ETAG);
 				$cache = $storage->getCache();
 				if ($cache instanceof Cache) {
 					// only re-calculate for the root folder we scanned, anything below that is taken care of by the scanner
`occ files:scan --all` is not working. Maybe it is an oc_filecache and oc_filecache_extended table issue, but what happens if I truncate these two tables? If I upload any new files into the folder it shows the wrong file size; the folder size and the file size show different values.
never truncate oc_filecache, you will lose all shares and metadata
You might be able to get some insight using this query:
select s.id storage_id, s.numeric_id storage_numeric_id, fc.parent parentfileid, fcp.path, fcp.size, sum(fc.size) "size_of_children" from oc_filecache fc left join oc_filecache fcp on fcp.fileid=fc.parent left join oc_storages s on s.numeric_id=fc.storage where fcp.path not like 'appdata_%' group by fc.parent having fcp.size != sum(fc.size) and fcp.size >= 0 order by fcp.storage, parentfileid;
This will find all folder entries (parents) in oc_filecache where the sum of their contents does not match the "size" column stored there.
You'll find out which users might need a rescan, or at least the names of the folders whose sizes seem unfixable. If those folders are still wrong after running the rescan, you could maybe try to get a snippet of the tree for the given folder, to find out what state it's in.
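To illustrate what that query checks, here is a toy reproduction against an in-memory SQLite database. The mini schema is an invented four-column stand-in for the real oc_filecache, not its actual layout:

```python
import sqlite3

# Toy consistency check: find folders whose stored "size" disagrees with
# the sum of their children's sizes (the stale-propagation symptom).
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE filecache (fileid INTEGER, parent INTEGER, path TEXT, size INTEGER)"
)
rows = [
    (1, -1, "files", 100),       # the folder row claims 100 bytes...
    (2, 1, "files/a.txt", 100),
    (3, 1, "files/b.txt", 50),   # ...but its children sum to 150
]
db.executemany("INSERT INTO filecache VALUES (?, ?, ?, ?)", rows)

mismatches = db.execute("""
    SELECT fcp.path, fcp.size, SUM(fc.size) AS size_of_children
    FROM filecache fc
    JOIN filecache fcp ON fcp.fileid = fc.parent
    GROUP BY fc.parent, fcp.path, fcp.size
    HAVING fcp.size != SUM(fc.size) AND fcp.size >= 0
""").fetchall()

print(mismatches)  # → [('files', 100, 150)]
```

Each returned row names a folder whose cached size needs to be recomputed, which is what the real query does across storages while skipping `appdata_%` paths.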
ERROR 1055 (42000): Expression #1 of SELECT list is not in GROUP BY clause and contains nonaggregated column 'nextcloud.s.id' which is not functionally dependent on columns in GROUP BY clause; this is incompatible with sql_mode=only_full_group_by
This is the output after running the statement.
Right, here's an adjusted query that might work on more database types:
select s.id storage_id, s.numeric_id storage_numeric_id, fc.parent parentfileid, fcp.path, fcp.size, sum(fc.size) "size_of_children" from oc_filecache fc left join oc_filecache fcp on fcp.fileid=fc.parent left join oc_storages s on s.numeric_id=fc.storage where fcp.path not like 'appdata_%' group by storage_id, storage_numeric_id, parentfileid, fcp.path, fcp.size having fcp.size != sum(fc.size) and fcp.size >= 0 order by fcp.storage, parentfileid;
@SudipBatabyal try `occ files:scan $userId` with those three users, then run the query again.
If the sizes still don't match, you can fix them manually:
update oc_filecache set size=256496927261 where fileid=189175
update oc_filecache set size=10285265176 where fileid=2158535
update oc_filecache set size=16446323621 where fileid=2288859
This user has 555 MB of data but it shows 115 GB used.
The problem users are not shown in this output; the problem users are different.
@PVince81 I tried to follow your suggestions, but for me it didn't fix the issue yet either. Here is what I checked and did:
Only one user (smn) has the issue: I have several hundred GB of data, but the UI shows 5.2 MB used. The query reports, among others:
| home::smn | 9 | 1415 | | 91040690896 | 91040694638 |
| home::smn | 9 | 5240011 | files/Pictures/20221007_pics | 650503839 | 655178943 |
I removed the REUSE_SIZE flag from lib/private/Files/Utils/Scanner.php and ran occ files:scan smn.
One more observation, not sure if this helps in drilling down to the issue: I ran `occ files:scan --all` to rescan for all users, just in case. After that, the MySQL table still lists a few entries that don't seem to belong to a user (as in, they are not in home::username):
+------------------------------------------+--------------------+--------------+---------------------------------------------+----------+------------------+
| storage_id                               | storage_numeric_id | parentfileid | path                                        | size     | size_of_children |
+------------------------------------------+--------------------+--------------+---------------------------------------------+----------+------------------+
| local::/var/www/owncloud/data/smn/ | 1 | 2 | | 44837593 | 59988341 |
| local::/var/www/owncloud/data/smn/ | 1 | 591 | files_versions/configs | 492328 | 481284 |
| local::/var/www/owncloud/data/testuser/ | 5 | 90 | | 0 | 18480 |
| shared::1659f1e9d3f11695dfde7e0068090a0e | 43 | 1753363 | output_20190411 | 13018878 | 19654772 |
+------------------------------------------+--------------------+--------------+---------------------------------------------+----------+------------------+
Interestingly, they all seem to be outdated old entries that never got cleaned up, because user testuser does not exist on my instance anymore, and the data does not reside at /var/www/owncloud/data anymore - probably never has since I upgraded to the first version of Nextcloud right after the fork...
Could this be a reason? And is there a way of pruning these items from the database?
I am not able to tell whether this is a DB issue or a PHP issue with the file cache. Looking into the data folder on the server side shows the right file and folder sizes, but in the Nextcloud UI it is totally different. The current version is the latest, 25.0.2, and the issue has continued since 25.0.0.
@simonspa interesting. The code is not supposed to use those entries any more, but in case it does for whatever reason, you can check if manually changing the size value also changes the displayed quota for that user: update oc_filecache set size=59988341 where fileid=2
@SudipBatabyal the issue is that the size is not correct in oc_filecache. The web UI uses the values from the oc_filecache table, not from disk. Rescanning would normally tell it to refresh the value in the DB but it did not for you for some unknown reason.
@simonspa to delete those obsolete storage entries from the database, for you specifically:
I remove the REUSE_SIZE flag from lib/private/Files/Utils/Scanner.php
that's a good clue, I wonder if we can permanently remove this, if that works for everyone
I've made a PR with the change: https://github.com/nextcloud/server/pull/35748 - will need to discuss with @icewind1991 whether this is viable once he's back.
In the meantime, it would be good if more people here could test that change:
1. apply the patch https://patch-diff.githubusercontent.com/raw/nextcloud/server/pull/35748.patch
2. run `occ files:scan --all`
3. rerun the query from https://github.com/nextcloud/server/issues/25283#issuecomment-1346825930 to confirm that the discrepancies are gone
@SudipBatabyal I just noticed your comment:
problem users not shown in this comment, problem users are different
OK, so if it's not in the database, then your issue is a different one. You can check oc_filecache for the problematic user and see what the value of "size" is on the row where path='' and on the row where path='files' of that user's storage, because that's what's used for the web UI.
Are you on a 32-bit system? There are known problems with bigger numbers there due to PHP limitations.
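For context on that 32-bit remark, here is the arithmetic (Python standing in for PHP; illustrative only): PHP_INT_MAX on a 32-bit PHP build is 2^31 - 1, i.e. just under 2 GiB, so any byte count above that cannot be represented as a native integer, which is one known source of nonsensical size values.

```python
# PHP_INT_MAX on 32-bit PHP is 2**31 - 1, just under 2 GiB.
INT32_MAX = 2**31 - 1
print(INT32_MAX)            # 2147483647
print(INT32_MAX / 2**30)    # just under 2.0 (GiB)

# Any user with more than ~2 GiB of data overflows that limit,
# e.g. the 80 GB tree reported earlier in this thread:
size_80_gb = 80 * 10**9
print(size_80_gb > INT32_MAX)  # True
```

This is why the symptom on affected systems is often a tiny or negative reported size rather than a slightly-off one.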
Sadly, the patch doesn't seem to work in my case.
Steps to reproduce
Expected behaviour
Should be 1:1 like all my other users.
Actual behaviour
Real used storage: 9.3 GB; Nextcloud shows me: 18.2 GB
Server configuration
Operating system: FreeBSD
Web server: Nginx
Database: MySQL
PHP version: 7.04
Nextcloud version: 20.0.5
Updated from an older Nextcloud/ownCloud or fresh install:
Where did you install Nextcloud from: TrueNAS plugin, latest, updated via Updater
Signing status
No errors have been found.
App list
Notes, Decks, Tasks
Nextcloud configuration:
Config report
CONFIG = array ( 'apps_paths' => array ( 0 => array ( 'path' => '/usr/local/www/nextcloud/apps', 'url' => '/apps', 'writable' => true, ), 1 => array ( 'path' => '/usr/local/www/nextcloud/apps-pkg', 'url' => '/apps-pkg', 'writable' => true, ), ), 'logfile' => '/var/log/nextcloud/nextcloud.log', 'passwordsalt' => 'XXXXX', 'secret' => 'XXXXX', 'trusted_domains' => array ( 0 => 'localhost', ), 'datadirectory' => '/usr/local/www/nextcloud/data', 'dbtype' => 'mysql', 'version' => '20.0.5.2', 'overwrite.cli.url' => 'http://localhost', 'overwriteprotocol' => 'https', 'dbname' => 'nextcloud', 'dbhost' => 'localhost', 'dbport' => '', 'dbtableprefix' => 'oc_', 'mysql.utf8mb4' => true, 'dbuser' => 'XXX', 'dbpassword' => 'XXX', 'installed' => true, 'instanceid' => 'XXX', 'twofactor_enforced' => 'false', 'twofactor_enforced_groups' => array ( 0 => 'admin', ), 'twofactor_enforced_excluded_groups' => array ( ), 'mail_smtpmode' => 'smtp', 'mail_sendmailmode' => 'smtp', 'mail_domain' => 'gmail.com', 'mail_smtphost' => 'smtp.gmail.com', 'mail_smtpport' => '587', 'mail_smtpsecure' => 'tls', 'mail_smtpauthtype' => 'LOGIN'..
Are you using external storage, if yes which one: No
Are you using encryption: No
Are you using an external user-backend, if yes which one: No
Client configuration
Browser: all / also in app
Operating system: