mate-desktop / caja

Caja, the file manager for the MATE desktop
https://mate-desktop.org/

1.26.0: copying or deleting large amounts of files is very slow #1691

Open darealshinji opened 1 year ago

darealshinji commented 1 year ago

Expected behaviour

I delete a directory with over a thousand files inside (a source tree, node_modules stuff, a chroot installation) and it shouldn't take much more than a second.

Actual behaviour

It takes several seconds. It's actually much faster to open a terminal and run rm -rf or cp -rf. I assume the window isn't being updated fast enough, or something like that?

Steps to reproduce the behaviour

Download something with more than a thousand files inside and try it out.
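One way to generate such a test tree from a terminal, as a minimal sketch (the path and file count are arbitrary):

```shell
# Create a throwaway directory with a couple of thousand small files.
mkdir -p /tmp/caja-test
for i in $(seq 1 2000); do
    echo "test" > "/tmp/caja-test/file$i.txt"
done

# Baseline: removing the same tree from the shell finishes almost instantly,
# which is the behaviour the file manager is being compared against.
time rm -rf /tmp/caja-test
```

Deleting or copying /tmp/caja-test from Caja can then be compared against the shell baseline.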

MATE general version

1.26.0

Package version

1.26.0-1

Linux Distribution

Ubuntu MATE 22.04

Link to bug report of your distribution (required)

https://bugs.launchpad.net/ubuntu/+source/caja/+bug/2001624

ChrisOfBristol commented 7 months ago

Using 1.26.3 on Fedora 39: copying 200 GB of data files between disks has already taken 24 hours, and it's getting slower. The same copy takes less than 2 hours using Fedora's standard "Files". Caja is running from a terminal, which shows these errors repeatedly:

GLib-GIO-CRITICAL: 00:34:49.964: GFileInfo created without standard::sort-order
GLib-GIO-CRITICAL: file ../gio/gfileinfo.c: line 2062 (g_file_info_get_symlink_target): should not be reached

Also some errors like this:

CAJA_IS_BOOKMARK (bookmark) failed
G_IS_FILE (file) failed
G_IS_OBJECT (object) failed
lukefromdc commented 7 months ago

This could be related to GVFS or metadata-handling issues, and the same slowness is also reported with Nautilus and Nemo. Slowness at this level could also come from slow disk drives or connections; a USB 2 hard drive enclosure, for instance, would never get over about 29 MB/s.

How much faster would this job run with cp from the command line on your system?

thaarok commented 7 months ago

This is not related to a slow disk; I observe the same issue when copying/deleting big directories in Caja or Nautilus, compared with doing the same operation in the terminal. I have already gotten used to opening mc in a terminal instead of using Caja/Nautilus whenever I need to move large amounts of data.

For Nautilus it is well described in the following issue: https://gitlab.gnome.org/GNOME/nautilus/-/issues/1904. From that discussion it seems deletion could be optimized by using the unlinkat syscall, as rm -r does.

However, it is true that this is probably more of a GVFS issue than an issue in Nautilus/Caja/Nemo...
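The rm -r behaviour mentioned above can be observed directly with strace, as a rough sketch (the /tmp path is just an example, and the trace is skipped if strace isn't installed): GNU rm walks the tree via openat/fdopendir and removes each entry with unlinkat() relative to the open directory fd, instead of resolving a full path for every file.

```shell
# Build a small throwaway tree.
mkdir -p /tmp/unlinkat-demo
for i in $(seq 1 100); do touch "/tmp/unlinkat-demo/f$i"; done

# Count the openat/unlinkat calls rm makes while removing the tree.
if command -v strace >/dev/null 2>&1; then
    strace -f -c -e trace=openat,unlinkat rm -r /tmp/unlinkat-demo
fi
rm -rf /tmp/unlinkat-demo   # no-op if strace's rm already removed it
```

The syscall summary shows roughly one unlinkat per entry, which is the pattern the Nautilus discussion suggests adopting.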

ChrisOfBristol commented 7 months ago

I gave up after about 36 hours of copying, as the number of hours remaining stayed the same: the transfer rate halved roughly every 8 hours and was down to 1.5 MB/s when I stopped. Zeno's paradox is real! Caja said there was 200 GB to copy, and "Disk Usage Analyser" thought there was 200 GB too, but according to Fedora's "Files" there was only 150 GB, which eventually proved to be the case. The difference may have been in '.' files. I've just used Bash instead: cp -rf /mnt/old/chris/* ~ took 40 minutes for the 150 GB. So the problem is not related to disk speed, processor speed or memory size.
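As an aside on the '.' files guess: a bare * glob does not match dot files, so cp -rf src/* dst can silently skip them, which could account for a size discrepancy. Copying the directory contents as src/. includes hidden entries; a small illustration with made-up paths:

```shell
# A tiny source tree with one visible and one hidden file.
mkdir -p /tmp/glob-src /tmp/glob-dst
echo visible > /tmp/glob-src/file.txt
echo hidden  > /tmp/glob-src/.dotfile

# "cp -rf /tmp/glob-src/* ..." would skip .dotfile; copying "src/." does not.
cp -a /tmp/glob-src/. /tmp/glob-dst/
ls -A /tmp/glob-dst    # shows both file.txt and .dotfile
```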

darealshinji commented 7 months ago

I'm now using Nemo 5.2.4, which seems to handle large directories much better, especially when I just want to delete files. Disk speed can't really be an issue for me either, as I'm using an SSD.

PS: if you want more details about progress and remaining copy time in a terminal, you can try vcp or a similar Python tool whose name I forgot.