digital-spinner opened 7 months ago
I won't attach logs from the client, because the archive itself is 240 MB lol.. and I'd like to avoid leaking my private info. I believe this issue is easily reproducible.
@digital-spinner maybe you could also explain why this is an issue?
For 20000 fragments that is about 0.125 MB per fragment, which is insane for a 2.5 GB file. For SSDs it may not be a huge problem (at least for those with higher IOPS capability), but for HDDs it means roughly 20000 head jumps for that example file. Such a layout is not performant and wears the hardware prematurely if defrag is not applied (Windows usually defrags disks every week? month?), so the defrag process itself will take a very long time to realign such files. Moreover, SMR disks should suffer even more from this issue, because each small fragment write may force the drive to realign / rewrite overlapping shingled (SMR) sectors; in some circumstances this can stall the drive completely to a crawl (less than 0.5 MB/sec of write speed). I have seen this with SMR drives many times.
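The per-fragment arithmetic above is easy to verify; a quick check using the figures from this report (2.5 GB file, 20000 fragments):

```python
file_size_mb = 2.5 * 1024        # 2.5 GB expressed in MB
fragments = 20000
avg_fragment_mb = file_size_mb / fragments
print(f"{avg_fragment_mb:.3f} MB per fragment")   # 0.128 MB, i.e. ~131 KB
```

So every append the client makes is on the order of 128 KB, each of which the filesystem may place in a separate extent.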
I guess the file should be preallocated and locked for the duration of the write; currently the files are just appended to, each time by a very small amount of data, right?
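For illustration, the preallocation idea looks roughly like this. This is a minimal, hypothetical Python sketch, not the client's actual code; the `download_preallocated` name, `total_size` parameter, and chunk loop are all assumptions made up for the example:

```python
import os

def download_preallocated(path: str, total_size: int, chunks):
    """Write chunks into a file whose full size was reserved up front.

    Setting the final size before the first write lets the filesystem
    pick one (or few) contiguous extents, instead of growing the file
    by ~128 KB per append and scattering it across the disk.
    """
    with open(path, "wb") as f:
        f.truncate(total_size)     # reserve the final size immediately
        offset = 0
        for chunk in chunks:
            f.seek(offset)
            f.write(chunk)
            offset += len(chunk)

# hypothetical usage: three 128 KB chunks of a 384 KB file
data = [b"a" * 131072, b"b" * 131072, b"c" * 131072]
download_preallocated("synced.bin", sum(len(c) for c in data), data)
print(os.path.getsize("synced.bin"))   # 393216
```

Caveat: `truncate()` may only create a sparse file on NTFS; a real client would likely use platform APIs such as `posix_fallocate` on Linux or `SetEndOfFile` / `SetFileValidData` on Windows to force actual allocation.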
Ok... I'm ditching this solution. Not only does it eat additional power on my workstation while doing mostly nothing, it even eats power on my server at idle; on top of that it is slower than anything else and extremely CPU-demanding lol. Seafile is 6x+ faster and sits nicely at idle without screwing up my power bill, not to mention Syncthing, and I don't even need or want all the additional apps that come bundled with it. It looks like performance and reliability are not the priority here. Having millions of file fragments after sync is just one of its problems from what I see.
Hi, I have the same problem. The drive is 99% fragmented. When I run the synchronisation the drive is at 100% load writing the chunks to disk, and performance drops badly: I have 300 Mbit download speed and the drive slows down to under 1 MB/sec because it writes these little chunks the whole time. As a first workaround I am now running the WINcontic Defragmenter. After 2 days I still have 40000 fragmented files on the drive. I have 3.5 TB of files on the server, most of them larger than 10 MB. I have never seen so much fragmentation on a drive. The drive is 5 TB and reserved exclusively for the Nextcloud data.
@starmanager01 the only way to do the defragmentation effectively in such circumstances is to COPY (do not move; moving is slower due to frequent MFT rewrites) the data to some other drive, then move it back to where you want it (after cleaning the destination, of course). I would not expect to see a fix for this issue, especially given questions like "why is the fragmentation an issue", but this is not usual fragmentation. I have moved to Syncthing, which is blazing fast, and I also used Seafile in the past, which is 6x faster than Nextcloud. Neither causes such ridiculous issues. And people ask why Nextcloud is so slow......
I would see a solution: if they used a staging directory on the Windows drive, like c:\users\$username$\appdata\xxxxx, they could unpack the files there, assemble them, copy the finished files to the endpoint, and then delete the unpacked files from c:\users\$username$\appdata\xxxxx. That would help with the fragments. It is now taking me more than 5 days to defrag the files so the drive performs again.
@starmanager01 no, that is no solution at all; it would only cause more wear on the SSD / HDD. The solution is to preallocate the file on the target before writing, which I guess is nonexistent now.
Ok. So I will wait for a fix. Until then I will defrag my files after synchronising.
So still nobody has even looked at this issue? Now I see why performance and reliability aren't even a low priority in this project..
I give up on all these issues in the client. It killed more than 1000 of my files, writing them as size 0 on both the server and the client. A different issue, but the same client.
Huh? Try Seafile. It has other issues with upgradeability etc., but Seafile worked for me for about 7 years, always blazing fast without any issues, until I had to migrate from SQLite to the MySQL backend. Super fast and efficient. Nextcloud is heavy, needs a lot of power to run, and does stupid things in the background. Currently I'm just running Syncthing (which also has its own issues with large sync directories, at least for me), but overall it suits me best.
Thanks for your idea. I am now using my Synology and backing up from the NAS to the disk. Not a single file is fragmented or missing, so I have a backup and all files are local and in the cloud. Before that I lost more than 2000 music files between the Windows client and the cloud; they all ended up 0 KB in size. The client is not really ready for such a large number of files. With the client on the smartphone it works like a charm.
Bug description
Desktop client writes files to disk in a way that causes extreme fragmentation (e.g. 20000+ fragments for a 2.5 GB file).
Steps to reproduce
Expected behavior
Fragmentation should not be this severe; at most a few fragments per large file would be expected.
Which files are affected by this bug
all files
Operating system
Windows
Which version of the operating system you are running.
Windows 11 PRO
Package
Other
Nextcloud Server version
28.0.3
Nextcloud Desktop Client version
3.12.1
Is this bug present after an update or on a fresh install?
Fresh desktop client install
Are you using the Nextcloud Server Encryption module?
Encryption is Disabled
Are you using an external user-backend?
Nextcloud Server logs
No response
Additional info
No response