danemacmillan opened 4 years ago:
2020/09/14 12:02:58 DEBUG : Google drive root '': Destroy:
2020/09/14 12:02:58 DEBUG : Google drive root '': >Destroy:
2020/09/14 12:02:58 DEBUG : Calling host.Unmount
2020/09/14 12:02:58 DEBUG : host.Unmount failed
2020/09/14 12:02:58 DEBUG : vfs cache: cleaner exiting
2020/09/14 12:02:58 DEBUG : rclone: Version "v1.53.1" finishing with parameters ["/usr/local/bin/rclone" "cmount" "gd:" "/Users/danemacmillan/mnt/gd" "--allow-other" "--dir-cache-time=1000h" "--poll-interval=15s" "--umask=002" "--vfs-cache-mode=full" "--vfs-read-ahead=128M" "--bwlimit=0" "--stats=1s" "--fast-list" "--read-only" "--log-file=/Users/danemacmillan/tmp/rclone-mount-gd.log" "--log-level=DEBUG"]
That sequence suggests that something unmounted the directory externally, or possibly killed the rclone process.
Can you check system, kernel, launchd logs to see if there is any evidence of that?
Only mention of rclone in system.log:
Sep 13 23:57:57 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Service did not exit 5 seconds after SIGTERM. Sending SIGKILL.
Sep 13 23:58:00 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Service did not exit 3 seconds after SIGKILL.
Sep 13 23:58:00 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Abandoning service instance. This may lead to deadlocks.
Some diagnostic reports (found via Console.app):
Date/Time: 2020-09-13 23:48:58 -0400
End time: 2020-09-13 23:51:23 -0400
OS Version: Mac OS X 10.15.6 (Build 19G2021)
Architecture: x86_64h
Report Version: 29
Incident Identifier: FBD872DD-EB9B-4CC3-BA3C-8D0A2632EAB6
Data Source: Microstackshots
Shared Cache: 0x3b1f000 9D0F03EE-1E01-3875-BD76-54E1DF47DBE3
Command: rclone
Path: /usr/local/bin/rclone
Version: ??? (???)
PID: 11931
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 145 seconds (310 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 145s
Duration: 145.20s
Duration Sampled: 90.38s
Steps: 8
Hardware model: MacBookPro11,3
Active cpus: 8
Fan speed: 2159 rpm
Heaviest stack for the target process:
7 thread_start + 15 (libsystem_pthread.dylib + 7051) [0x7fff6aed8b8b]
7 _pthread_start + 148 (libsystem_pthread.dylib + 24841) [0x7fff6aedd109]
6 ??? (libosxfuse.2.dylib + 49002) [0x1146cf6a]
6 fuse_session_process_buf + 22 (libosxfuse.2.dylib + 68722) [0x11471c72]
6 ??? (libosxfuse.2.dylib + 62451) [0x114703f3]
6 ??? (libosxfuse.2.dylib + 54644) [0x1146e574]
6 ??? (libosxfuse.2.dylib + 41313) [0x1146b161]
6 fuse_fs_read_buf + 171 (libosxfuse.2.dylib + 28489) [0x11467f49]
6 ??? (libosxfuse.2.dylib + 92911) [0x11477aef]
6 fuse_fs_read_buf + 327 (libosxfuse.2.dylib + 28645) [0x11467fe5]
6 go_hostRead + 89 (rclone + 19114297) [0x523a939]
6 crosscall2 + 55 (rclone + 1220407) [0x4129f37]
6 _cgoexp_e4f666a8d790_go_hostRead + 71 (rclone + 17710311) [0x50e3ce7]
6 runtime.cgocallback + 62 (rclone + 469470) [0x40729de]
4 runtime.cgocallback_gofunc + 155 (rclone + 469659) [0x4072a9b]
3 runtime.cgocallbackg + 197 (rclone + 25445) [0x4006365]
3 runtime.cgocallbackg1 + 421 (rclone + 26053) [0x40065c5]
3 runtime.call64 + 62 (rclone + 463806) [0x40713be]
3 github.com/billziss-gh/cgofuse/fuse._cgoexpwrap_e4f666a8d790_go_hostRead + 83 (rclone + 17702707) [0x50e1f33]
3 github.com/billziss-gh/cgofuse/fuse.hostRead + 249 (rclone + 17679417) [0x50dc439]
3 github.com/rclone/rclone/cmd/cmount.(*FS).Read + 620 (rclone + 17984972) [0x5126dcc]
3 github.com/rclone/rclone/vfs.(*RWFileHandle).ReadAt + 197 (rclone + 17904325) [0x51132c5]
2 github.com/rclone/rclone/vfs.(*RWFileHandle)._readAt + 1197 (rclone + 17903597) [0x5112fed]
2 github.com/rclone/rclone/vfs/vfscache.(*Item).ReadAt + 208 (rclone + 17812880) [0x50fcd90]
2 github.com/rclone/rclone/vfs/vfscache.(*Item).readAt + 266 (rclone + 17813898) [0x50fd18a]
2 github.com/rclone/rclone/vfs/vfscache.(*Item)._ensure + 234 (rclone + 17810026) [0x50fc26a]
1 runtime.convT2Enoptr + 37 (rclone + 56837) [0x400de05]
Powerstats for: rclone [11931]
UUID: 530EFABB-6C4C-32C4-9271-FD73C49CF3B3
Path: /usr/local/bin/rclone
Architecture: x86_64
Footprint: 19.66 MB -> 54.22 MB (+34.55 MB)
Pageins: 2 pages
Start time: 2020-09-13 23:49:26 -0400
End time: 2020-09-13 23:50:57 -0400
Num samples: 8 (100%)
CPU Time: 0.901s
Primary state: 6 samples Non-Frontmost App, Non-Suppressed, User mode, Effective Thread QoS Utility, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 8 samples Active
Power Source: 8 samples on Battery, 0 samples on AC
7 thread_start + 15 (libsystem_pthread.dylib + 7051) [0x7fff6aed8b8b]
7 _pthread_start + 148 (libsystem_pthread.dylib + 24841) [0x7fff6aedd109]
6 ??? (libosxfuse.2.dylib + 49002) [0x1146cf6a]
6 fuse_session_process_buf + 22 (libosxfuse.2.dylib + 68722) [0x11471c72]
6 ??? (libosxfuse.2.dylib + 62451) [0x114703f3]
6 ??? (libosxfuse.2.dylib + 54644) [0x1146e574]
6 ??? (libosxfuse.2.dylib + 41313) [0x1146b161]
6 fuse_fs_read_buf + 171 (libosxfuse.2.dylib + 28489) [0x11467f49]
6 ??? (libosxfuse.2.dylib + 92911) [0x11477aef]
6 fuse_fs_read_buf + 327 (libosxfuse.2.dylib + 28645) [0x11467fe5]
6 go_hostRead + 89 (rclone + 19114297) [0x523a939]
6 crosscall2 + 55 (rclone + 1220407) [0x4129f37]
6 _cgoexp_e4f666a8d790_go_hostRead + 71 (rclone + 17710311) [0x50e3ce7]
6 runtime.cgocallback + 62 (rclone + 469470) [0x40729de]
4 runtime.cgocallback_gofunc + 155 (rclone + 469659) [0x4072a9b]
3 runtime.cgocallbackg + 197 (rclone + 25445) [0x4006365]
3 runtime.cgocallbackg1 + 421 (rclone + 26053) [0x40065c5]
3 runtime.call64 + 62 (rclone + 463806) [0x40713be]
3 github.com/billziss-gh/cgofuse/fuse._cgoexpwrap_e4f666a8d790_go_hostRead + 83 (rclone + 17702707) [0x50e1f33]
3 github.com/billziss-gh/cgofuse/fuse.hostRead + 249 (rclone + 17679417) [0x50dc439]
3 github.com/rclone/rclone/cmd/cmount.(*FS).Read + 620 (rclone + 17984972) [0x5126dcc]
3 github.com/rclone/rclone/vfs.(*RWFileHandle).ReadAt + 197 (rclone + 17904325) [0x51132c5]
2 github.com/rclone/rclone/vfs.(*RWFileHandle)._readAt + 1197 (rclone + 17903597) [0x5112fed]
2 github.com/rclone/rclone/vfs/vfscache.(*Item).ReadAt + 208 (rclone + 17812880) [0x50fcd90]
2 github.com/rclone/rclone/vfs/vfscache.(*Item).readAt + 266 (rclone + 17813898) [0x50fd18a]
2 github.com/rclone/rclone/vfs/vfscache.(*Item)._ensure + 234 (rclone + 17810026) [0x50fc26a]
1 runtime.convT2Enoptr + 56 (rclone + 56856) [0x400de18]
1 runtime.mallocgc + 297 (rclone + 66057) [0x4010209]
1 runtime.convT2Enoptr + 37 (rclone + 56837) [0x400de05]
1 github.com/rclone/rclone/vfs.(*RWFileHandle)._readAt + 869 (rclone + 17903269) [0x5112ea5]
1 github.com/rclone/rclone/vfs.(*RWFileHandle)._size + 51 (rclone + 17901715) [0x5112893]
1 github.com/rclone/rclone/vfs/vfscache.(*Item).GetSize + 122 (rclone + 17792602) [0x50f7e5a]
1 github.com/rclone/rclone/vfs/vfscache.(*Item)._getSize + 263 (rclone + 17792071) [0x50f7c47]
1 runtime.cgocallbackg + 165 (rclone + 25413) [0x4006345]
1 runtime.exitsyscall + 101 (rclone + 448645) [0x406d885]
1 runtime.exitsyscallfast + 197 (rclone + 278309) [0x4043f25]
1 runtime.wirep + 76 (rclone + 289324) [0x4046a2c]
1 runtime.cgocallback_gofunc + 247 (rclone + 469751) [0x4072af7]
1 runtime.dropm + 171 (rclone + 261003) [0x403fb8b]
1 runtime.unminitSignals + 141 (rclone + 331597) [0x4050f4d]
1 runtime.sigaltstack + 84 (rclone + 367444) [0x4059b54]
1 runtime.asmcgocall + 173 (rclone + 469389) [0x407298d]
1 __sigaltstack + 10 (libsystem_kernel.dylib + 14322) [0x7fff6ae1c7f2]
1 <Kernel mode>
1 runtime.cgocallback_gofunc + 55 (rclone + 469559) [0x4072a37]
1 runtime.needm + 246 (rclone + 260086) [0x403f7f6]
1 runtime.minit + 42 (rclone + 224042) [0x4036b2a]
1 runtime.minitSignalMask + 182 (rclone + 331414) [0x4050e96]
1 runtime.sigprocmask + 49 (rclone + 367345) [0x4059af1]
1 runtime.asmcgocall + 173 (rclone + 469389) [0x407298d]
1 runtime.sigprocmask_trampoline + 19 (rclone + 476563) [0x4074593]
1 __pthread_sigmask + 10 (libsystem_kernel.dylib + 25794) [0x7fff6ae1f4c2]
1 ??? (libosxfuse.2.dylib + 48819) [0x1146ceb3]
1 fuse_session_receive_buf + 26 (libosxfuse.2.dylib + 68230) [0x11471a86]
1 ??? (libosxfuse.2.dylib + 62651) [0x114704bb]
1 fuse_chan_recv + 22 (libosxfuse.2.dylib + 68202) [0x11471a6a]
1 read + 10 (libsystem_kernel.dylib + 6174) [0x7fff6ae1a81e]
1 <Kernel mode>
1 github.com/rclone/rclone/fs/asyncreader.(*buffer).read + 88 (rclone + 4511832) [0x444d858]
1 github.com/rclone/rclone/lib/readers.ReadFill + 119 (rclone + 4503959) [0x444b997]
1 github.com/rclone/rclone/fs/chunkedreader.(*ChunkedReader).Read + 914 (rclone + 17728914) [0x50e8592]
1 io.ReadAtLeast + 135 (rclone + 527399) [0x4080c27]
1 net/http.(*bodyEOFSignal).Read + 226 (rclone + 3605218) [0x43702e2]
1 net/http.(*body).Read + 249 (rclone + 3542105) [0x4360c59]
1 net/http.(*body).readLocked + 95 (rclone + 3542335) [0x4360d3f]
1 io.(*LimitedReader).Read + 99 (rclone + 529667) [0x4081503]
1 bufio.(*Reader).Read + 322 (rclone + 1503042) [0x416ef42]
1 net/http.(*persistConn).Read + 119 (rclone + 3587415) [0x436bd57]
1 crypto/tls.(*Conn).Read + 351 (rclone + 2658527) [0x42890df]
1 crypto/tls.(*Conn).readRecordOrCCS + 1932 (rclone + 2634892) [0x428348c]
1 crypto/tls.(*Conn).readFromUntil + 243 (rclone + 2646003) [0x4285ff3]
1 bytes.(*Buffer).ReadFrom + 177 (rclone + 1480177) [0x41695f1]
1 crypto/tls.(*atLeastReader).Read + 98 (rclone + 2645410) [0x4285da2]
1 github.com/rclone/rclone/fs/fshttp.(*timeoutConn).Read + 138 (rclone + 5643242) [0x4561bea]
1 github.com/rclone/rclone/fs/fshttp.(*timeoutConn).readOrWrite + 144 (rclone + 5643024) [0x4561b10]
1 github.com/rclone/rclone/fs/fshttp.(*timeoutConn).nudgeDeadline + 140 (rclone + 5642828) [0x4561a4c]
1 net.(*conn).SetDeadline + 137 (rclone + 1366921) [0x414db89]
1 internal/poll.setDeadlineImpl + 335 (rclone + 942671) [0x40e624f]
1 internal/poll.runtime_pollSetDeadline + 351 (rclone + 443775) [0x406c57f]
1 runtime.modtimer + 380 (rclone + 371740) [0x405ac1c]
Binary Images:
0x4000000 - 0x637ffff rclone (0) <530EFABB-6C4C-32C4-9271-FD73C49CF3B3> /usr/local/bin/rclone
0x11461000 - 0x11487fff libosxfuse.2.dylib (12.7) <E9126502-0343-3ED6-A9E5-33CA8B95CAEC> /usr/local/lib/libosxfuse.2.dylib
0x7fff6ae19000 - 0x7fff6ae45fff libsystem_kernel.dylib (6153.141.1) <2B6311E6-6240-3EF7-8C87-475B66F7452C> /usr/lib/system/libsystem_kernel.dylib
0x7fff6aed7000 - 0x7fff6aee1fff libsystem_pthread.dylib (416.100.3) <62CB1A98-0B8F-31E7-A02B-A1139927F61D> /usr/lib/system/libsystem_pthread.dylib
And
Date/Time: 2020-09-13 23:58:30 -0400
End time: 2020-09-13 23:59:34 -0400
OS Version: Mac OS X 10.15.6 (Build 19G2021)
Architecture: x86_64h
Report Version: 29
Incident Identifier: DD73F020-C47E-489D-A0F0-159AE6B34B09
Data Source: Microstackshots
Shared Cache: 0x3b1f000 9D0F03EE-1E01-3875-BD76-54E1DF47DBE3
Command: rclone
Path: /usr/local/bin/rclone
Version: ??? (???)
PID: 16140
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 64 seconds (704 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 64s
Duration: 63.91s
Duration Sampled: 11.65s
Steps: 6
Hardware model: MacBookPro11,3
Active cpus: 8
Fan speed: 2158 rpm
Heaviest stack for the target process:
2 thread_start + 15 (libsystem_pthread.dylib + 7051) [0x7fff6aed8b8b]
2 _pthread_start + 148 (libsystem_pthread.dylib + 24841) [0x7fff6aedd109]
1 ??? (libosxfuse.2.dylib + 48819) [0xe59eeb3]
1 fuse_session_receive_buf + 26 (libosxfuse.2.dylib + 68230) [0xe5a3a86]
1 ??? (libosxfuse.2.dylib + 62651) [0xe5a24bb]
1 fuse_chan_recv + 22 (libosxfuse.2.dylib + 68202) [0xe5a3a6a]
1 read + 10 (libsystem_kernel.dylib + 6174) [0x7fff6ae1a81e]
Powerstats for: rclone [16140]
UUID: 530EFABB-6C4C-32C4-9271-FD73C49CF3B3
Path: /usr/local/bin/rclone
Architecture: x86_64
Footprint: 51.13 MB -> 59.55 MB (+8628 KB)
Start time: 2020-09-13 23:59:21 -0400
End time: 2020-09-13 23:59:33 -0400
Num samples: 6 (100%)
Primary state: 4 samples Non-Frontmost App, Non-Suppressed, Kernel mode, Effective Thread QoS Utility, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 6 samples Active
Power Source: 6 samples on Battery, 0 samples on AC
2 thread_start + 15 (libsystem_pthread.dylib + 7051) [0x7fff6aed8b8b]
2 _pthread_start + 148 (libsystem_pthread.dylib + 24841) [0x7fff6aedd109]
1 ??? (libosxfuse.2.dylib + 48819) [0xe59eeb3]
1 fuse_session_receive_buf + 26 (libosxfuse.2.dylib + 68230) [0xe5a3a86]
1 ??? (libosxfuse.2.dylib + 62651) [0xe5a24bb]
1 fuse_chan_recv + 22 (libosxfuse.2.dylib + 68202) [0xe5a3a6a]
1 read + 10 (libsystem_kernel.dylib + 6174) [0x7fff6ae1a81e]
1 crosscall_amd64 + 12 (rclone + 19119516) [0x523bd9c]
1 runtime.mstart + 102 (rclone + 256966) [0x403ebc6]
1 runtime.mstart1 + 200 (rclone + 257192) [0x403eca8]
1 runtime.sysmon + 1210 (rclone + 292570) [0x40476da]
1 runtime.notetsleep + 88 (rclone + 61144) [0x400eed8]
1 runtime.notetsleep_internal + 284 (rclone + 60668) [0x400ecfc]
1 <User mode>
2 runtime.park_m + 157 (rclone + 273437) [0x4042c1d]
2 runtime.schedule + 727 (rclone + 271991) [0x4042677]
1 runtime.findrunnable + 3318 (rclone + 269430) [0x4041c76]
1 runtime.netpoll + 174 (rclone + 220782) [0x4035e6e]
1 runtime.kevent + 57 (rclone + 367929) [0x4059d39]
1 runtime.asmcgocall + 173 (rclone + 469389) [0x407298d]
1 kevent + 10 (libsystem_kernel.dylib + 22374) [0x7fff6ae1e766]
1 runtime.findrunnable + 1925 (rclone + 268037) [0x4041705]
1 runtime.netpoll + 174 (rclone + 220782) [0x4035e6e]
1 runtime.kevent + 57 (rclone + 367929) [0x4059d39]
1 runtime.asmcgocall + 173 (rclone + 469389) [0x407298d]
1 kevent + 10 (libsystem_kernel.dylib + 22374) [0x7fff6ae1e766]
1 read + 10 (libsystem_kernel.dylib + 6174) [0x7fff6ae1a81e]
1 github.com/rclone/rclone/fs/asyncreader.(*buffer).read + 88 (rclone + 4511832) [0x444d858]
1 github.com/rclone/rclone/lib/readers.ReadFill + 119 (rclone + 4503959) [0x444b997]
1 github.com/rclone/rclone/fs/chunkedreader.(*ChunkedReader).Read + 914 (rclone + 17728914) [0x50e8592]
1 io.ReadAtLeast + 135 (rclone + 527399) [0x4080c27]
1 net/http.(*body).Read + 9 (rclone + 3541865) [0x4360b69]
1 <User mode, Requested Thread QoS Utility>
Binary Images:
0x4000000 - 0x637ffff rclone (0) <530EFABB-6C4C-32C4-9271-FD73C49CF3B3> /usr/local/bin/rclone
0xe593000 - 0xe5b9fff libosxfuse.2.dylib (12.7) <E9126502-0343-3ED6-A9E5-33CA8B95CAEC> /usr/local/lib/libosxfuse.2.dylib
0x7fff6ae19000 - 0x7fff6ae45fff libsystem_kernel.dylib (6153.141.1) <2B6311E6-6240-3EF7-8C87-475B66F7452C> /usr/lib/system/libsystem_kernel.dylib
0x7fff6aed7000 - 0x7fff6aee1fff libsystem_pthread.dylib (416.100.3) <62CB1A98-0B8F-31E7-A02B-A1139927F61D> /usr/lib/system/libsystem_pthread.dylib
Not sure if any of this is useful. Let me know if there's more I can dig up.
Edit:
By the way, should I remove OSXFuse now that rclone is using cmount? It's still installed, and I suspect it needs to remain installed, but I figured I'd ask.
This looks like the OS throttled rclone:
Wakeups: 45001 wakeups over the last 64 seconds (704 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
And this looks like launchd killed rclone:
Sep 13 23:57:57 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Service did not exit 5 seconds after SIGTERM. Sending SIGKILL.
Sep 13 23:58:00 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Service did not exit 3 seconds after SIGKILL.
Sep 13 23:58:00 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[14756]): Abandoning service instance. This may lead to deadlocks.
It would be worthwhile digging into those a bit.
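As a side note, the 5-second SIGTERM window in those launchd lines is configurable. If the LaunchAgent plist for this service sets ExitTimeOut (a standard launchd.plist key), raising it would give rclone more time to flush the VFS cache and unmount cleanly before launchd escalates to SIGKILL. A hypothetical fragment (the label danemacmillan.etc.rclone is taken from the log above; the value shown is illustrative, not a known-good setting):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>danemacmillan.etc.rclone</string>
    <!-- Seconds launchd waits after SIGTERM before sending SIGKILL.
         The log above suggests a 5-second window was in effect here. -->
    <key>ExitTimeOut</key>
    <integer>30</integer>
</dict>
</plist>
```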
I know very little about macOS so I'm not sure exactly where to look next.
By the way, should I remove OSXFuse now that rclone is using cmount? It's still installed, and I suspect it needs to remain installed, but I figured I'd ask.
Rclone is still using osxfuse so you need it installed.
I'd like to add that a new kind of diagnostic log was produced recently, which corresponds with the issue of the directory contents no longer showing, despite the mount command stating that the drive is mounted.
Date/Time: 2020-09-29 04:56:57 -0400
End time: 2020-09-29 04:59:51 -0400
OS Version: Mac OS X 10.15.6 (Build 19G2021)
Architecture: x86_64h
Report Version: 29
Incident Identifier: 1B143B14-09B6-4649-8FFD-341B9099EBC5
Data Source: Microstackshots
Shared Cache: 0x47e9000 9D0F03EE-1E01-3875-BD76-54E1DF47DBE3
Command: rclone
Path: /usr/local/bin/rclone
Version: ??? (???)
PID: 8011
Event: cpu usage
Action taken: none
CPU: 90 seconds cpu time over 173 seconds (52% cpu average), exceeding limit of 50% cpu over 180 seconds
CPU limit: 90s
Limit duration: 180s
CPU used: 90s
CPU duration: 173s
Duration: 173.48s
Duration Sampled: 167.17s
Steps: 59
Hardware model: MacBookPro11,3
Active cpus: 8
Fan speed: 2671 rpm
Heaviest stack for the target process:
21 github.com/rclone/rclone/vfs/vfscache.(*Cache).clean + 300 (rclone + 17781164) [0x50f51ac]
21 github.com/rclone/rclone/vfs/vfscache.(*Cache).purgeOverQuota + 80 (rclone + 17780080) [0x50f4d70]
14 github.com/rclone/rclone/vfs/vfscache.(*Cache).updateUsed + 261 (rclone + 17779845) [0x50f4c85]
3 runtime.mapiternext + 1028 (rclone + 79588) [0x40136e4]
Powerstats for: rclone [8011] [unique pid 13830787]
UUID: 530EFABB-6C4C-32C4-9271-FD73C49CF3B3
Path: /usr/local/bin/rclone
Architecture: x86_64
Footprint: 115.23 MB
Start time: 2020-09-29 04:57:03 -0400
End time: 2020-09-29 04:59:50 -0400
Num samples: 59 (100%)
CPU Time: 135.800s
Primary state: 59 samples Non-Frontmost App, Non-Suppressed, User mode, Effective Thread QoS Utility, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 59 samples Idle, 0 samples Active
Power Source: 0 samples on Battery, 59 samples on AC
21 github.com/rclone/rclone/vfs/vfscache.(*Cache).clean + 300 (rclone + 17781164) [0x50f51ac]
21 github.com/rclone/rclone/vfs/vfscache.(*Cache).purgeOverQuota + 80 (rclone + 17780080) [0x50f4d70]
14 github.com/rclone/rclone/vfs/vfscache.(*Cache).updateUsed + 261 (rclone + 17779845) [0x50f4c85]
3 runtime.mapiternext + 1028 (rclone + 79588) [0x40136e4]
2 runtime.mapiternext + 130 (rclone + 78690) [0x4013362]
2 runtime.mapiternext + 94 (rclone + 78654) [0x401333e]
1 runtime.mapiternext + 1294 (rclone + 79854) [0x40137ee]
1 runtime.mapiternext + 1241 (rclone + 79801) [0x40137b9]
1 runtime.mapiternext + 1221 (rclone + 79781) [0x40137a5]
1 runtime.mapiternext + 1010 (rclone + 79570) [0x40136d2]
1 runtime.mapiternext + 125 (rclone + 78685) [0x401335d]
1 runtime.mapiternext + 115 (rclone + 78675) [0x4013353]
1 runtime.mapiternext + 107 (rclone + 78667) [0x401334b]
7 github.com/rclone/rclone/vfs/vfscache.(*Cache).updateUsed + 210 (rclone + 17779794) [0x50f4c52]
7 runtime.mapiterinit + 453 (rclone + 78309) [0x40131e5]
2 runtime.mapiternext + 130 (rclone + 78690) [0x4013362]
2 runtime.mapiternext + 110 (rclone + 78670) [0x401334e]
1 runtime.mapiternext + 1244 (rclone + 79804) [0x40137bc]
1 runtime.mapiternext + 1036 (rclone + 79596) [0x40136ec]
1 runtime.mapiternext + 94 (rclone + 78654) [0x401333e]
20 github.com/rclone/rclone/vfs/vfscache.(*Cache).clean + 380 (rclone + 17781244) [0x50f51fc]
13 github.com/rclone/rclone/vfs/vfscache.(*Cache).updateUsed + 261 (rclone + 17779845) [0x50f4c85]
3 runtime.mapiternext + 1028 (rclone + 79588) [0x40136e4]
2 runtime.mapiternext + 1041 (rclone + 79601) [0x40136f1]
2 runtime.mapiternext + 110 (rclone + 78670) [0x401334e]
1 runtime.mapiternext + 1221 (rclone + 79781) [0x40137a5]
1 runtime.mapiternext + 1077 (rclone + 79637) [0x4013715]
1 runtime.mapiternext + 1036 (rclone + 79596) [0x40136ec]
1 runtime.mapiternext + 125 (rclone + 78685) [0x401335d]
1 runtime.mapiternext + 121 (rclone + 78681) [0x4013359]
1 runtime.mapiternext + 115 (rclone + 78675) [0x4013353]
7 github.com/rclone/rclone/vfs/vfscache.(*Cache).updateUsed + 210 (rclone + 17779794) [0x50f4c52]
7 runtime.mapiterinit + 453 (rclone + 78309) [0x40131e5]
3 runtime.mapiternext + 115 (rclone + 78675) [0x4013353]
2 runtime.mapiternext + 1041 (rclone + 79601) [0x40136f1]
1 runtime.mapiternext + 1077 (rclone + 79637) [0x4013715]
1 runtime.mapiternext + 94 (rclone + 78654) [0x401333e]
18 github.com/rclone/rclone/vfs/vfscache.(*Cache).clean + 270 (rclone + 17781134) [0x50f518e]
12 github.com/rclone/rclone/vfs/vfscache.(*Cache).purgeOld + 275 (rclone + 17778867) [0x50f48b3]
3 runtime.mapiternext + 1028 (rclone + 79588) [0x40136e4]
3 runtime.mapiternext + 115 (rclone + 78675) [0x4013353]
2 runtime.mapiternext + 94 (rclone + 78654) [0x401333e]
1 runtime.mapiternext + 1041 (rclone + 79601) [0x40136f1]
1 runtime.mapiternext + 1036 (rclone + 79596) [0x40136ec]
1 runtime.mapiternext + 118 (rclone + 78678) [0x4013356]
1 runtime.mapiternext + 110 (rclone + 78670) [0x401334e]
4 github.com/rclone/rclone/vfs/vfscache.(*Cache).purgeOld + 210 (rclone + 17778802) [0x50f4872]
4 runtime.mapiterinit + 453 (rclone + 78309) [0x40131e5]
1 runtime.mapiternext + 1244 (rclone + 79804) [0x40137bc]
1 runtime.mapiternext + 1209 (rclone + 79769) [0x4013799]
1 runtime.mapiternext + 1028 (rclone + 79588) [0x40136e4]
1 runtime.mapiternext + 130 (rclone + 78690) [0x4013362]
2 github.com/rclone/rclone/vfs/vfscache.(*Cache).purgeOld + 261 (rclone + 17778853) [0x50f48a5]
1 github.com/rclone/rclone/vfs/vfscache.(*Cache).removeNotInUse + 652 (rclone + 17775692) [0x50f3c4c]
1 runtime.convTstring + 105 (rclone + 56585) [0x400dd09]
1 github.com/rclone/rclone/vfs/vfscache.(*Cache).removeNotInUse + 89 (rclone + 17775129) [0x50f3a19]
1 github.com/rclone/rclone/vfs/vfscache.(*Item).RemoveNotInUse + 172 (rclone + 17805772) [0x50fb1cc]
1 sync.(*Mutex).Unlock + 43 (rclone + 519563) [0x407ed8b]
Binary Images:
0x4000000 - 0x637ffff rclone (0) <530EFABB-6C4C-32C4-9271-FD73C49CF3B3> /usr/local/bin/rclone
And this other one:
Date/Time: 2020-09-28 04:44:56 -0400
End time: 2020-09-28 04:45:49 -0400
OS Version: Mac OS X 10.15.6 (Build 19G2021)
Architecture: x86_64h
Report Version: 29
Incident Identifier: EEA93B81-1A87-4F9C-90E8-6B212B086D0B
Data Source: Microstackshots
Shared Cache: 0x47e9000 9D0F03EE-1E01-3875-BD76-54E1DF47DBE3
Command: rclone
Path: /usr/local/bin/rclone
Version: ??? (???)
PID: 83574
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 53 seconds (852 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 53s
Duration: 52.79s
Duration Sampled: 0.00s
Steps: 1
Hardware model: MacBookPro11,3
Active cpus: 8
Fan speed: 4314 rpm
Heaviest stack for the target process:
1 github.com/rclone/rclone/backend/drive.(*Fs).list + 2186 (rclone + 9412522) [0x48f9faa]
1 github.com/rclone/rclone/backend/drive.(*Fs).listRRunner.func1 + 728 (rclone + 9519480) [0x4914178]
1 github.com/rclone/rclone/backend/drive.(*Fs).ListR.func2 + 293 (rclone + 9520837) [0x49146c5]
1 github.com/rclone/rclone/fs/walk.(*ListRHelper).Add + 184 (rclone + 6966392) [0x46a4c78]
1 github.com/rclone/rclone/fs/walk.walkRDirTree.func1 + 1458 (rclone + 6972818) [0x46a6592]
1 github.com/rclone/rclone/fs/filter.(*Filter).IncludeObject + 364 (rclone + 6940588) [0x469e7ac]
1 github.com/rclone/rclone/fs/filter.(*Filter).Include + 275 (rclone + 6939827) [0x469e4b3]
1 github.com/rclone/rclone/fs/filter.(*Filter).includeRemote + 141 (rclone + 6938573) [0x469dfcd]
1 github.com/rclone/rclone/fs/filter.(*rule).Match + 136 (rclone + 6930440) [0x469c008]
1 regexp.(*Regexp).doExecute + 1101 (rclone + 1859181) [0x41c5e6d]
1 regexp.(*Regexp).backtrack + 943 (rclone + 1849007) [0x41c36af]
1 regexp.(*Regexp).tryBacktrack + 269 (rclone + 1844781) [0x41c262d]
Powerstats for: rclone [83574] [unique pid 19275874]
UUID: 530EFABB-6C4C-32C4-9271-FD73C49CF3B3
Path: /usr/local/bin/rclone
Architecture: x86_64
Footprint: 17.82 MB
Start time: 2020-09-28 04:44:58 -0400
End time: 2020-09-28 04:44:58 -0400
Num samples: 1 (100%)
Primary state: 1 samples Non-Frontmost App, Non-Suppressed, User mode, Effective Thread QoS Default, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 1 samples Idle, 0 samples Active
Power Source: 0 samples on Battery, 1 samples on AC
1 github.com/rclone/rclone/backend/drive.(*Fs).list + 2186 (rclone + 9412522) [0x48f9faa]
1 github.com/rclone/rclone/backend/drive.(*Fs).listRRunner.func1 + 728 (rclone + 9519480) [0x4914178]
1 github.com/rclone/rclone/backend/drive.(*Fs).ListR.func2 + 293 (rclone + 9520837) [0x49146c5]
1 github.com/rclone/rclone/fs/walk.(*ListRHelper).Add + 184 (rclone + 6966392) [0x46a4c78]
1 github.com/rclone/rclone/fs/walk.walkRDirTree.func1 + 1458 (rclone + 6972818) [0x46a6592]
1 github.com/rclone/rclone/fs/filter.(*Filter).IncludeObject + 364 (rclone + 6940588) [0x469e7ac]
1 github.com/rclone/rclone/fs/filter.(*Filter).Include + 275 (rclone + 6939827) [0x469e4b3]
1 github.com/rclone/rclone/fs/filter.(*Filter).includeRemote + 141 (rclone + 6938573) [0x469dfcd]
1 github.com/rclone/rclone/fs/filter.(*rule).Match + 136 (rclone + 6930440) [0x469c008]
1 regexp.(*Regexp).doExecute + 1101 (rclone + 1859181) [0x41c5e6d]
1 regexp.(*Regexp).backtrack + 943 (rclone + 1849007) [0x41c36af]
1 regexp.(*Regexp).tryBacktrack + 269 (rclone + 1844781) [0x41c262d]
Binary Images:
0x4000000 - 0x637ffff rclone (0) <530EFABB-6C4C-32C4-9271-FD73C49CF3B3> /usr/local/bin/rclone
As far as I can see this is the kernel killing rclone for doing too many wakeups. However, rclone only wakes up in response to the kernel, so I don't think I can fix that in rclone.
Wakeups: 45001 wakeups over the last 53 seconds (852 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
If the limit is 150 wakeups per second then you could try adding --tpslimit 10 (say) to slow rclone down to see if that works. Then you can try increasing that number until the problem comes back.
I'll give that a shot.
Is there a way to determine how many requests are being made without setting this? For example, if I set it to 10 but my usage is closer to 1000, it may take a while before I find a good value. If I know it's 1000, then I can just try 500 for starters and work from that point.
I added --tpslimit=10, but it still crashed, though not in the same way. There were no diagnostic logs related to wakeups, but I did see rclone output this:
2020/10/06 17:24:16 INFO : Starting HTTP transaction limiter: max 10 transactions/s with burst 1
2020/10/06 17:24:16 Fatal error: Can not open: /Users/danemacmillan/mnt/gd: open /Users/danemacmillan/mnt/gd: device not configured
I then found the following in the system.log, repeating every ten seconds for hours:
Oct 6 17:24:16 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[40788]): Service exited with abnormal code: 1
Oct 6 17:24:16 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Oct 6 17:24:26 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone[40849]): Service exited with abnormal code: 1
Oct 6 17:24:26 macmillanator com.apple.xpc.launchd[1] (danemacmillan.etc.rclone): Service only ran for 0 seconds. Pushing respawn out by 10 seconds.
Obviously, neither of these seems to indicate what went wrong.
I'm not familiar with Go development or macOS development, so this is really the limit of my help here, unfortunately. The only observation I can make is that this is a new issue, and not one I saw until I started using the new (temporary) cmount command in this latest version; I assume that if the code behind it eventually finds its way into the official mount command, other macOS users will start reporting this issue as well.
I should note that I have Plex running a conservative amount of scans on this, but also note that this has been a non-issue when using the regular mount command. Maybe it's cmount, maybe it's full VFS, maybe it's the combination of them on macOS: I really don't know. I'll mount without VFS and see what happens with just cmount.
Thanks for the update. If you could confirm whether this is just rclone mount vs rclone cmount, or vfs cache on vs off, that would be very useful - thank you.
I'm a bit hampered looking at this as I don't have a mac but other rclone devs do. @darthShadow - I think you are a macOS user - have you seen this issue?
I am a macOS user but that is just for general dev work because it's quite a few years old and not capable enough to run any heavy-duty software like Plex. My productivity machine is still a Linux system and that's where I use rclone mounts with Plex etc. That said, I will still try with just a normal rclone mount (without Plex) and see if I can reproduce that.
I've been using rclone 1.53.2-DEV on macOS 10.15.7 with mount and --vfs-cache-mode writes and have this issue. After some time, "Device not configured" messages appear and the directory can't be removed or opened.
Just tried again and it's happened again. I was copying a file into a rclone mount, and I was downloading a file directly into the rclone mount (using mount + vfs-cache-mode writes). Although this time, the directory didn't end up with "device not configured" - I could delete the mounted directory.
Diagnostic report:
Date/Time: 2020-11-09 10:13:01 +1100
End time: 2020-11-09 10:13:40 +1100
OS Version: Mac OS X 10.15.7 (Build 19H2)
Architecture: x86_64h
Report Version: 29
Incident Identifier: 666265FC-B579-4DC4-9377-9F9AC7D872F1
Data Source: Microstackshots
Shared Cache: 0x5a0a000 DBE7BDDA-5D39-3FFB-AE79-AB5ED1CF95E2
Command: rclone
Path: /usr/local/Cellar/rclone/1.53.2/bin/rclone
Version: ??? (???)
PID: 69953
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 40 seconds (1138 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 40s
Duration: 39.56s
Duration Sampled: 19.47s
Steps: 14
Hardware model: MacBookPro14,3
Active cpus: 8
Fan speed: 5920 rpm
Heaviest stack for the target process:
9 github.com/rclone/rclone/vfs/vfscache.(*Item).Close.func2 + 71 (rclone + 17821799) [0x50ff067]
9 github.com/rclone/rclone/vfs/vfscache.(*Item).store + 146 (rclone + 17800370) [0x50f9cb2]
9 github.com/rclone/rclone/vfs/vfscache.(*Item)._store + 350 (rclone + 17799102) [0x50f97be]
9 github.com/rclone/rclone/fs/operations.Copy + 6191 (rclone + 7384367) [0x470ad2f]
9 github.com/rclone/rclone/backend/crypt.(*Fs).Put + 248 (rclone + 7286360) [0x46f2e58]
9 github.com/rclone/rclone/backend/crypt.(*Fs).put + 530 (rclone + 7284370) [0x46f2692]
9 github.com/rclone/rclone/fs.Fs.Put-fm + 168 (rclone + 7309640) [0x46f8948]
9 github.com/rclone/rclone/backend/b2.(*Fs).Put + 286 (rclone + 7082366) [0x46c117e]
9 github.com/rclone/rclone/backend/b2.(*Object).Update + 3342 (rclone + 7105422) [0x46c6b8e]
9 github.com/rclone/rclone/backend/b2.(*Fs).newLargeUpload + 2185 (rclone + 7111273) [0x46c8269]
9 github.com/rclone/rclone/backend/crypt.(*ObjectInfo).Hash + 360 (rclone + 7301608) [0x46f69e8]
9 github.com/rclone/rclone/backend/crypt.(*Fs).computeHashWithNonce + 677 (rclone + 7292165) [0x46f4505]
8 io.copyBuffer + 300 (rclone + 527020) [0x4080aac]
8 github.com/rclone/rclone/backend/crypt.(*encrypter).Read + 456 (rclone + 7265160) [0x46edb88]
6 golang.org/x/crypto/nacl/secretbox.Seal + 761 (rclone + 5666873) [0x4567839]
6 golang.org/x/crypto/salsa20/salsa.XORKeyStream + 106 (rclone + 5661450) [0x456630a]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 3346 (rclone + 5664850) [0x4567052]
Powerstats for: rclone [69953] [unique pid 368687]
UUID: 0E06542E-9F05-3D48-82D8-0377F36129A8
Path: /usr/local/Cellar/rclone/1.53.2/bin/rclone
Architecture: x86_64
Footprint: 99.74 MB
Pageins: 30 pages
Start time: 2020-11-09 10:13:19 +1100
End time: 2020-11-09 10:13:39 +1100
Num samples: 14 (100%)
CPU Time: 21.074s
Primary state: 11 samples Non-Frontmost App, Non-Suppressed, User mode, Effective Thread QoS Default, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 14 samples Active
Power Source: 0 samples on Battery, 14 samples on AC
9 github.com/rclone/rclone/vfs/vfscache.(*Item).Close.func2 + 71 (rclone + 17821799) [0x50ff067]
9 github.com/rclone/rclone/vfs/vfscache.(*Item).store + 146 (rclone + 17800370) [0x50f9cb2]
9 github.com/rclone/rclone/vfs/vfscache.(*Item)._store + 350 (rclone + 17799102) [0x50f97be]
9 github.com/rclone/rclone/fs/operations.Copy + 6191 (rclone + 7384367) [0x470ad2f]
9 github.com/rclone/rclone/backend/crypt.(*Fs).Put + 248 (rclone + 7286360) [0x46f2e58]
9 github.com/rclone/rclone/backend/crypt.(*Fs).put + 530 (rclone + 7284370) [0x46f2692]
9 github.com/rclone/rclone/fs.Fs.Put-fm + 168 (rclone + 7309640) [0x46f8948]
9 github.com/rclone/rclone/backend/b2.(*Fs).Put + 286 (rclone + 7082366) [0x46c117e]
9 github.com/rclone/rclone/backend/b2.(*Object).Update + 3342 (rclone + 7105422) [0x46c6b8e]
9 github.com/rclone/rclone/backend/b2.(*Fs).newLargeUpload + 2185 (rclone + 7111273) [0x46c8269]
9 github.com/rclone/rclone/backend/crypt.(*ObjectInfo).Hash + 360 (rclone + 7301608) [0x46f69e8]
9 github.com/rclone/rclone/backend/crypt.(*Fs).computeHashWithNonce + 677 (rclone + 7292165) [0x46f4505]
8 io.copyBuffer + 300 (rclone + 527020) [0x4080aac]
8 github.com/rclone/rclone/backend/crypt.(*encrypter).Read + 456 (rclone + 7265160) [0x46edb88]
6 golang.org/x/crypto/nacl/secretbox.Seal + 761 (rclone + 5666873) [0x4567839]
6 golang.org/x/crypto/salsa20/salsa.XORKeyStream + 106 (rclone + 5661450) [0x456630a]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 3346 (rclone + 5664850) [0x4567052]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 1728 (rclone + 5663232) [0x4566a00]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 1572 (rclone + 5663076) [0x4566964]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 1508 (rclone + 5663012) [0x4566924]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 1101 (rclone + 5662605) [0x456678d]
1 golang.org/x/crypto/salsa20/salsa.salsa2020XORKeyStream + 872 (rclone + 5662376) [0x45666a8]
2 golang.org/x/crypto/nacl/secretbox.Seal + 829 (rclone + 5666941) [0x456787d]
2 golang.org/x/crypto/poly1305.Sum + 187 (rclone + 5659035) [0x456599b]
2 golang.org/x/crypto/poly1305.update + 156 (rclone + 5660572) [0x4565f9c]
1 io.copyBuffer + 518 (rclone + 527238) [0x4080b86]
1 github.com/rclone/rclone/fs/hash.(*MultiHasher).Write + 45 (rclone + 1605613) [0x4187fed]
3 read + 10 (libsystem_kernel.dylib + 6174) [0x7fff6cd3881e]
3 <Kernel mode>
1 <missing stack>
1 net/http.(*Transport).dialConn + 6843 (rclone + 3583323) [0x436ad5b]
1 net/http.(*Transport).dial + 509 (rclone + 3566877) [0x4366d1d]
1 github.com/rclone/rclone/fs/fshttp.NewTransportCustom.func1 + 115 (rclone + 5650931) [0x45639f3]
1 github.com/rclone/rclone/fs/fshttp.dialContextTimeout + 229 (rclone + 5642981) [0x4561ae5]
1 net.(*Dialer).DialContext + 1765 (rclone + 1238693) [0x412e6a5]
1 net.(*sysDialer).dialSerial + 338 (rclone + 1241906) [0x412f332]
1 net.(*sysDialer).dialSingle + 1509 (rclone + 1245029) [0x412ff65]
1 net.(*sysDialer).dialTCP + 215 (rclone + 1404567) [0x4156e97]
1 net.(*sysDialer).doDialTCP + 197 (rclone + 1404837) [0x4156fa5]
1 net.internetSocket + 325 (rclone + 1338597) [0x4146ce5]
1 net.socket + 90 (rclone + 1379642) [0x4150d3a]
1 net.sysSocket + 100 (rclone + 1392740) [0x4154064]
1 syscall.socket + 9 (rclone + 821897) [0x40c8a89]
Binary Images:
0x4000000 - 0x637ffff rclone (0) <0E06542E-9F05-3D48-82D8-0377F36129A8> /usr/local/Cellar/rclone/1.53.2/bin/rclone
0x7fff6cd37000 - 0x7fff6cd63fff libsystem_kernel.dylib (6153.141.2) <A576A1CF-7726-3146-B04B-A26E1CDB9757> /usr/lib/system/libsystem_kernel.dylib
Wakeups: 45001 wakeups over the last 40 seconds (1138 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
This looks like the fundamental problem - macOS is saying that rclone mount
is waking up too much.
However rclone mount
will wake up whenever it is used... so I'm guessing you are using it a lot
I haven't found a way of setting this limit or exempting rclone from it - that would be the most useful thing. Can you?
Another thing we could do is slow rclone down so that each wakeup takes at least 1/150 s ≈ 6.7 ms. This seems like a bad idea but it would probably fix the problem.
However rclone mount will wake up whenever it is used... so I'm guessing you are using it a lot
This was a mount that Plex was using, so probably quite a lot, though it was configured to only perform its operations manually, so it was at least controlled and not running at all hours.
Can you see if rclone is in "App Nap" when sitting idle: https://developer.apple.com/library/archive/documentation/Performance/Conceptual/power_efficiency_guidelines_osx/AppNap.html
Can you try also preventing App Nap like this: https://www.doornik.com/appnap.html
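For reference, App Nap can also be toggled from the command line (a hedged sketch: NSAppSleepDisabled is the default key described in the linked pages, and writing it to NSGlobalDomain disables App Nap for every app, not just rclone — per-process state is visible in Activity Monitor's "App Nap" column):

```shell
# Disable App Nap globally (affects all apps; revert with -bool NO or
# `defaults delete NSGlobalDomain NSAppSleepDisabled`).
defaults write NSGlobalDomain NSAppSleepDisabled -bool YES
```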
Can you try --daemon-timeout 9m
to see if that makes a difference?
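Applied to the mount command quoted earlier in this thread, that suggestion would look something like this (a sketch; the remote, paths, and other flags are the reporter's own, and only --daemon-timeout is new):

```shell
# Re-run of the reporter's mount command with the suggested
# --daemon-timeout added; all other flags are unchanged from the thread.
rclone cmount gd: /Users/danemacmillan/mnt/gd \
  --daemon-timeout 9m \
  --allow-other \
  --dir-cache-time=1000h \
  --poll-interval=15s \
  --umask=002 \
  --vfs-cache-mode=full \
  --read-only \
  --log-level=DEBUG \
  --log-file=/Users/danemacmillan/tmp/rclone-mount-gd.log
```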
@ncw About a week ago I upgraded rclone
to v1.54.0
from Homebrew on macOS, and upon trying to mount, it printed output along the lines of "to use the mount command on MacOS you need to download a binary directly from our site at https://rclone.org/downloads/." I uninstalled the version provided by Homebrew, and downloaded and installed the version from the site directly. I ran the following command, which is essentially the same as the command I used when I originally described the problem in this ticket:
rclone mount gd: /Users/danemacmillan/mnt/gd --allow-other --dir-cache-time=1000h --poll-interval=15s --umask=002 --vfs-cache-mode=full --vfs-read-ahead=1G --vfs-cache-max-age=12h --vfs-cache-max-size=75G --no-checksum --no-modtime --read-only --fast-list --bwlimit=0 --tpslimit=10 --stats=1s --log-level=INFO --log-file=/Users/danemacmillan/tmp/rclone-mount-gd.log
It never once unmounted or produced the errors I originally described. I must have kept the mount active for about five days before I had to reboot the computer for unrelated reasons a couple of days ago. The fact it stayed mounted for five days is significant, because it had never gone that long since I described the original problem after upgrading to v1.53.1
in September. I don't know what changed between that version and the current one, and I don't know what's different about the version provided by Homebrew and the one directly from the site, but the mount appears pretty stable. That's promising. Was there anything done around this issue related to FUSE mounts on macOS? It's the first time that running rclone mount has stated that its Homebrew version is not sufficient for FUSE mounts.
I have not tried your latest option, but I will. I'll try to report back by end of next week.
Edit:
By the way, I had updated and tested again recently so I could test "App Nap," but I could never actually see it in that state because it never happened. I did not downgrade to the old homebrew version that I used when I opened the ticket, so I couldn't say if it ever got into that state; given that the latest version downloaded directly from the site worked without issues, I didn't put additional time into answering the question.
The fact it stayed mounted for five days is significant, because it never had gone that long since describing the original problem after upgrading to
v1.53.1
in September. I don't know what changed between that version and the current one, and I don't know what's different about the version provided by Homebrew and the one directly from the site, but the mount appears pretty stable. That's promising. Was there anything done around this issue related to FUSE mounts on macOS? It's the first time that running rclone mount has stated that its Homebrew version is not sufficient for FUSE mounts.
The major change is that rclone cmount became the default rclone mount. However, if you were already using rclone cmount then I'm not sure what change would have made the difference, which is a bit unsatisfactory!
Did you upgrade osxfuse/macfuse? That might be a change?
Did you upgrade osxfuse/macfuse? That might be a change?
I actually upgraded from a 2014 MBP to a 2018 MBP in January, so the latest version of osxfuse almost certainly changed as it was a new install. The version with which I had successful mounts (noted in my last comment) was 3.11.2
, and when I opened the ticket it was a version prior to that release, though I couldn't say which one. I just upgraded to 4.0.5
at the time of this comment, but will only have time to test on the weekend.
I'm not experiencing the issue I originally described anymore. The mount has been running all week without issue. While it's unsatisfactory, I'd be willing to pin this on an imperfect combination of versions localized to my environment at the time. Since using the latest versions of rclone and macFUSE, the problem has disappeared.
I was optimistic too soon. The problem returned.
The primary difference from the last week to the current failure is how rclone was being called. It may be nothing, or it may be everything. During the last week I had been running the command directly from the terminal:
rclone mount gd: ${HOME}/mnt/gd --allow-other --dir-cache-time=960h --poll-interval=15s --umask=002 --vfs-cache-mode=full --vfs-read-ahead=1G --vfs-cache-max-age=480h --vfs-cache-max-size=140G --cache-dir=${XDG_CACHE_HOME}/rclone --bwlimit=4M:25M --stats=1s --log-level=INFO
It was stable.
After that week-long test I figured it was time to get it mounted automatically and have it persist through any problems, and in the macOS world that's done using the launchctl utility, combined with a plist file located in "${HOME}/Library/LaunchAgents":
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>KeepAlive</key>
<true/>
<key>Label</key>
<string>danemacmillan.etc.rclone</string>
<key>ProgramArguments</key>
<array>
<string>/usr/local/bin/rclone</string>
<string>mount</string>
<string>gd:</string>
<string>/Users/danemacmillan/mnt/gd</string>
<string>--allow-other</string>
<string>--dir-cache-time=960h</string>
<string>--poll-interval=15s</string>
<string>--umask=002</string>
<string>--vfs-cache-mode=full</string>
<string>--vfs-read-ahead=1G</string>
<string>--vfs-cache-max-age=480h</string>
<string>--vfs-cache-max-size=140G</string>
<string>--cache-dir=/Users/danemacmillan/.cache/rclone</string>
<string>--bwlimit=4M:25M</string>
<string>--stats=1s</string>
<string>--log-level=INFO</string>
<string>--log-file=/Users/danemacmillan/tmp/rclone-mount-gd.log</string>
</array>
<key>RunAtLoad</key>
<true/>
</dict>
</plist>
Running the following command will allow rclone to mount the remote at login and remount it whenever there's a problem, such as rclone being killed:
launchctl load -w ~/Library/LaunchAgents/danemacmillan.etc.rclone.plist
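To confirm the agent actually loaded, and to stop it cleanly while testing manual invocations, the standard launchctl commands apply (a sketch; the label matches the plist above):

```shell
# List the agent with its PID and last exit status, to confirm launchd
# picked it up.
launchctl list | grep danemacmillan.etc.rclone

# Unload it while testing hand-started mounts, so launchd's KeepAlive
# doesn't race with a manually run rclone.
launchctl unload -w ~/Library/LaunchAgents/danemacmillan.etc.rclone.plist
```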
I have not rebooted the computer yet, but ever since rclone died while running as a launchctl service, even just manually running the command will cause it to eventually die and produce the same errors described in the original post.
The activity that occurred on the mount was the same before and after configuring it to launch with launchctl
.
In reviewing the Activity Monitor.app
for rclone
in the CPU processes tab, where you can see the "App Nap" status, rclone
is simply nowhere to be found; it's not running. Running the mount
command still shows the mount intact, but navigating into it and attempting to list the contents provides the error described in the original post, ls: cannot open directory '.': Device not configured
.
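For what it's worth, a stale FUSE mountpoint in this "Device not configured" state can usually be cleared before remounting (a sketch using standard macOS tools; note that force-unmounting abandons any outstanding I/O on the dead mount):

```shell
# Attempt a clean unmount first; fall back to a forced unmount if the
# FUSE daemon behind the mountpoint is already gone.
umount /Users/danemacmillan/mnt/gd || umount -f /Users/danemacmillan/mnt/gd

# diskutil offers an equivalent forced unmount on macOS.
diskutil unmount force /Users/danemacmillan/mnt/gd
```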
Based on all this, I'm leaning towards launchctl being the problem, but I'll need more time, literally, to see how long the mount can stay operational. I'll be rebooting the computer and running the command manually to see if it's restored to its prior stability, before configuring it as a macOS service.
Here is one of the latest dumps from Console.app
's "Diagnostic Reports" section:
Date/Time: 2021-03-01 23:54:10.823 -0500
End time: 2021-03-01 23:56:02.578 -0500
OS Version: macOS 11.2.2 (Build 20D80)
Architecture: x86_64h
Report Version: 32
Incident Identifier: 2C4CA8A9-4434-48BC-B15A-30CA5D6C4E02
Data Source: Microstackshots
Shared Cache: 77B63435-1866-32DD-98A1-47090B666EFA slid base address 0x7fff20013000, slide 0x13000
Command: rclone
Path: /usr/local/bin/rclone
Version: ??? (???)
Parent: UNKNOWN [1305]
PID: 92643
Event: wakeups
Action taken: none
Wakeups: 45001 wakeups over the last 112 seconds (403 wakeups per second average), exceeding limit of 150 wakeups per second over 300 seconds
Wakeups limit: 45000
Limit duration: 300s
Wakeups caused: 45001
Wakeups duration: 112s
Duration: 111.76s
Duration Sampled: 0.00s
Steps: 1
Hardware model: MacBookPro15,1
Active cpus: 12
Boot args: chunklist-security-epoch=0 -chunklist-no-rev2-dev
Fan speed: 5930 rpm
Heaviest stack for the target process:
1 thread_start + 15 (libsystem_pthread.dylib + 9339) [0x7fff2032047b]
1 _pthread_start + 224 (libsystem_pthread.dylib + 26960) [0x7fff20324950]
1 ??? (libfuse.2.dylib + 76007) [0x107128e7]
1 ??? (libfuse.2.dylib + 89560) [0x10715dd8]
1 read + 10 (libsystem_kernel.dylib + 6302) [0x7fff202f089e]
Powerstats for: rclone [92643] [unique pid 390880]
UUID: 0DE4E3D1-9F77-3FDF-9B0A-7052C05C5C10
Path: /usr/local/bin/rclone
Architecture: x86_64
Parent: UNKNOWN [1305]
UID: 501
Footprint: 30.16 MB
Start time: 2021-03-01 23:55:58.200 -0500
End time: 2021-03-01 23:55:58.200 -0500
Num samples: 1 (100%)
Primary state: 1 samples Non-Frontmost App, Non-Suppressed, Kernel mode, Effective Thread QoS Default, Requested Thread QoS Default, Override Thread QoS Unspecified
User Activity: 0 samples Idle, 1 samples Active
Power Source: 0 samples on Battery, 1 samples on AC
1 thread_start + 15 (libsystem_pthread.dylib + 9339) [0x7fff2032047b]
1 _pthread_start + 224 (libsystem_pthread.dylib + 26960) [0x7fff20324950]
1 ??? (libfuse.2.dylib + 76007) [0x107128e7]
1 ??? (libfuse.2.dylib + 89560) [0x10715dd8]
1 read + 10 (libsystem_kernel.dylib + 6302) [0x7fff202f089e]
Binary Images:
0x4000000 - 0x675bfff rclone (0) <0DE4E3D1-9F77-3FDF-9B0A-7052C05C5C10> /usr/local/bin/rclone
0x10700000 - 0x1073bfff libfuse.2.dylib (0) <294F83F2-4E04-3BC6-853F-D67B48BA47F4> /usr/local/lib/libfuse.2.dylib
0x7fff202ef000 - 0x7fff2031dfff libsystem_kernel.dylib (7195.81.3) <AB413518-ECDE-3F04-A61C-278D3CF43076> /usr/lib/system/libsystem_kernel.dylib
0x7fff2031e000 - 0x7fff20329fff libsystem_pthread.dylib (454.80.2) <B989DF6C-ADFE-3AF9-9C91-07D2521F9E47> /usr/lib/system/libsystem_pthread.dylib
What is the problem you are having with rclone?
I started using the latest version of rclone last night to test both the new VFS --vfs-cache-mode=full feature and the new cmount command for macOS. In testing I noticed that after a bit of usage, the contents of the mounted directory disappear. The content is still safely in the cloud storage service, but there's just nothing to see when listing the directory contents. The mount still appears in the list of mounted locations when the mount command is run, and the --stats output continues to update according to the 1s interval I configured. Moreover, I'm unable to umount [path] for the dead mount, but I can re-issue the rclone cmount command and it works; the listing of mounts visible when issuing the mount command then shows the location mounted multiple times (https://i.imgur.com/0VzE3O9.png). The mount does work, though, for an indeterminate amount of time until it silently fails again, which is the reason for documenting this issue in GitHub.
I've just remounted with --log-level=DEBUG, so hopefully the next time this fails, actionable log data will be available, and I'll update this ticket.
What is your rclone version (output from rclone version)?
rclone v1.53.1
Which OS you are using and how many bits (eg Windows 7, 64 bit)
macOS Catalina 10.15.6 (19G2021), 64-bit.
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone cmount gd: /Users/danemacmillan/mnt/gd --allow-other --dir-cache-time=1000h --poll-interval=15s --umask=002 --vfs-cache-mode=full --bwlimit=0 --stats=1s --fast-list --read-only --log-file=/Users/danemacmillan/tmp/rclone-mount-gd.log --log-level=INFO
A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)
I'm remounting now with full verbosity, and after some time the issue will probably occur, as there are no errors anywhere with the above command.
Update 1
The issue just happened after a few minutes, which more or less corresponds with my experience so far. The mount only works for a few minutes and then it stops working. Here's the DEBUG-level log output that was generated the moment it happened:
Note that the attempts to re-mount it immediately after it dies are the configured macOS launchd service doing its job. For reference, that service file looks like this:
Mounting without the service will do the same thing; in other words, there's no difference whether the command is run manually or as a service.
This is what a directory listing looks like after the mount fails:
Here are the logs after mounting it manually after it failed: