msh100 opened this issue 8 years ago
I experienced this for 3 consecutive days and have reverted to a commit I think I was on before, https://github.com/yadayada/acd_cli/commit/8997b8c3697d1f7b2b31f3c0bb4150004eb1e321
I was about to post the same issue. I'm not sure yet of what's going on, but I randomly need to unmount and mount again my acd folder.
It seems like something is 'locked'. All processes are waiting; i.e. df -h or ls /path/to/acd/mount take a very long time (between a few minutes and hours).
@msh100 are you experiencing the same thing?
@maximegaillard No - it would just not work (Bad address error, IIRC).
Same problem here: Bad address error. After remounting, everything works again for a few hours :( How can I revert to the commit you're talking about, @msh100?
@strickes I did the following:
pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git@8997b8c3697d1f7b2b31f3c0bb4150004eb1e321
As far as I know, 8997b8c is a good working commit for me.
Thanks, it seems to work again :) I think the write speed is not the same as with the newest commit, but it's stable again.
Spoke too soon - after an hour of being mounted:
root@linux01:/mnt/acd# ll
ls: reading directory .: Bad address
total 0
Yes, it appears to be the same here.
16-03-07 15:05:04.329 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.329 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.329 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.329 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.330 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.330 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.330 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.330 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.331 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.331 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.331 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.331 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.332 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.332 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:04.333 [DEBUG] [acdcli.acd_fuse] - -> getattr /$DIRECTORY (None,)
16-03-07 15:05:04.333 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-07 15:05:11.467 [DEBUG] [acdcli.acd_fuse] - -> destroy / ()
16-03-07 15:05:11.468 [DEBUG] [acdcli.acd_fuse] - <- destroy None
Same issue on the latest commit; I have absolutely no idea how to proceed.
@yadayada is there anything we can do now to improve logging for this? I am fairly sure that the logs I have already handed over are quite unhelpful.
I experience the same problems as @maximegaillard. ls takes very long and in Cyberduck I receive timeout errors. I am using acd_cli on two Debian systems; one works great so far and the other produces these errors from time to time, at least once a day. How do I find out which commit I'm using on the machine that still works?
is there anything we can do now to improve logging for this?
Are these excerpts from the log file? If so, could you please mount in debug+foreground mode? You should then get the output of the print_exc calls.
Sure, will do. Will update you asap.
I am experiencing this issue too. I only updated today and didn't notice this issue before then. Has anyone found a stable commit?
I am still monitoring this and it has not happened again. However, it has got me thinking: acd_cli has been working flawlessly for months, then out of the blue this happens, and it just happens to line up with an update I did - yet when I revert, it happens again.
My suspicion at the moment is a malformed API response from Amazon.
In the meantime I will keep logging, and if there's nothing in a few days I guess this issue should be closed, assuming Amazon is the cause.
Did you do anything specific to fix it? Mine seems to be happening every night.
Short update: the issue of the non-working FUSE mount is still there and has suddenly appeared on my second box as well.
acd_cli on this second box was not touched or updated since September 2015, because I never had any issues.
Is there anything I can do to improve the logs?
I have the feeling that this is similar to this issue: https://github.com/yadayada/acd_cli/issues/124
I always get this output when I try to umount my acd mount again.
fusermount: failed to unmount /mount/directory: Device or resource busy
Only killing the process helps...
This just started for me today, but only on a single box.
An older box with an older acd_cli is fine :/
Can anyone experiencing this also provide logs as mentioned above? The data I have been able to get so far is not very informative.
So mount with acdcli mount --debug <mount folder>?
eh, acd_cli --debug mount --foreground <mountpoint>
You may want to make it log all the output somewhere.
I did a --debug mount and got the same messages as @msh100, spammed over and over.
I have re-pulled from the 1.3.0 master release and it seems a lot more settled - then again, -i0 helped too.
OK, here are my rsync errors:
rsync: recv_generator: failed to stat "/home/sullytheunusual/acd/Backups/Backups/Pictures/2004/E. May/PICT3502.JPG": Bad address (14)
rsync: write failed on "/home/sullytheunusual/acd/Backups/Backups/Pictures/2004/A. January/PICT2947.JPG": Bad address (14)
rsync error: error in file IO (code 11) at receiver.c(393) [receiver=3.1.1]
Here's a link to my log file: https://dl.dropboxusercontent.com/u/83188234/acd_cli.log
yadayada here. This is an unrelated issue with rsync's error handling.
Hi, unfortunately the problem still persists. I took some logs today. I use acd_cli in combination with EncFS. I never had problems in the past, but for the last 10 or 11 days it has not been stable anymore.
16-03-13 14:39:41.212 [DEBUG] [acdcli.acd_fuse] - <- open 948
16-03-13 14:39:41.213 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (948,)
16-03-13 14:39:41.213 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:41.213 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (948,)
16-03-13 14:39:41.213 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:41.214 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (948,)
16-03-13 14:39:41.214 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:41.214 [DEBUG] [acdcli.acd_fuse] - -> release /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (948,)
16-03-13 14:39:41.214 [DEBUG] [acdcli.acd_fuse] - -> open /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 ('0x8000',)
16-03-13 14:39:41.215 [DEBUG] [acdcli.acd_fuse] - <- open 949
16-03-13 14:39:41.215 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (16384, 0, 949)
16-03-13 14:39:41.216 [DEBUG] [acdcli.acd_fuse] - <- release None
16-03-13 14:39:41.217 [DEBUG] [acdcli.api.content] - chunk o 0 l 524288000
16-03-13 14:39:41.217 [DEBUG] [acdcli.api.backoff_req] - Retry 0, waiting 0.595219s
16-03-13 14:39:41.813 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=0-524287999\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer \r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:39:43.285 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:39:43.705 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:39:43.714 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:39:43.715 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 0, 949)
16-03-13 14:39:43.716 [DEBUG] [acdcli.api.content] - chunk o 0 l 524288000
16-03-13 14:39:43.716 [DEBUG] [acdcli.api.backoff_req] - Retry 1, waiting 0.761135s
16-03-13 14:39:44.479 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=0-524287999\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer \r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:39:45.397 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:39:45.398 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:39:45.398 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:39:45.398 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 131072, 949)
16-03-13 14:39:45.399 [DEBUG] [acdcli.api.content] - chunk o 131072 l 524288000
16-03-13 14:39:45.399 [DEBUG] [acdcli.api.backoff_req] - Retry 2, waiting 2.737401s
16-03-13 14:39:48.139 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=131072-524419071\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|I\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:39:48.524 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:39:48.525 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:39:48.525 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:39:48.525 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 131072, 949)
16-03-13 14:39:48.526 [DEBUG] [acdcli.api.content] - chunk o 131072 l 524288000
16-03-13 14:39:48.526 [WARNING] [acdcli.api.backoff_req] - Waiting 5.306609s because of error(s).
16-03-13 14:39:48.526 [DEBUG] [acdcli.api.backoff_req] - Retry 3, waiting 5.306609s
16-03-13 14:39:53.835 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=131072-524419071\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:39:54.102 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:39:54.103 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:39:54.103 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:39:54.103 [DEBUG] [acdcli.acd_fuse] - -> getattr / (None,)
16-03-13 14:39:54.104 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457773456.433, 'st_mode': 16895, 'st_atime': 1457876394.1041048, 'st_ctime': 1442076409.886}
16-03-13 14:39:54.104 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (949,)
16-03-13 14:39:54.104 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:54.104 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (949,)
16-03-13 14:39:54.105 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:54.105 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (949,)
16-03-13 14:39:54.105 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:39:54.105 [DEBUG] [acdcli.acd_fuse] - -> release /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (949,)
16-03-13 14:39:54.106 [DEBUG] [acdcli.acd_fuse] - <- release None
16-03-13 14:39:54.555 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs (None,)
16-03-13 14:39:54.556 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457382338.328, 'st_mode': 16895, 'st_atime': 1457876394.556431, 'st_ctime': 1442085885.693}
16-03-13 14:39:54.556 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1 (None,)
16-03-13 14:39:54.557 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1454217796.617, 'st_mode': 16895, 'st_atime': 1457876394.5577571, 'st_ctime': 1454217795.477}
16-03-13 14:39:54.558 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi (None,)
16-03-13 14:39:54.559 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457421336.232, 'st_mode': 16895, 'st_atime': 1457876394.5596607, 'st_ctime': 1454217796.596}
16-03-13 14:39:54.560 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (None,)
16-03-13 14:39:54.561 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_mode': 33206, 'st_ctime': 1454223568.932, 'st_nlink': 1, 'st_mtime': 1454224808.386, 'st_atime': 1457876394.5612557, 'st_size': 1462574402}
16-03-13 14:39:54.561 [DEBUG] [acdcli.acd_fuse] - -> open /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 ('0x8000',)
16-03-13 14:39:54.562 [DEBUG] [acdcli.acd_fuse] - <- open 950
16-03-13 14:39:54.666 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 0, 950)
16-03-13 14:39:54.667 [DEBUG] [acdcli.api.content] - chunk o 0 l 524288000
16-03-13 14:39:54.668 [DEBUG] [acdcli.api.backoff_req] - Retry 4, waiting 3.738953s
16-03-13 14:39:58.411 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=0-524287999\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:39:59.254 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:39:59.255 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:39:59.255 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:39:59.255 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 131072, 950)
16-03-13 14:39:59.257 [DEBUG] [acdcli.api.content] - chunk o 131072 l 524288000
16-03-13 14:39:59.418 [WARNING] [acdcli.api.backoff_req] - Waiting 21.162848s because of error(s).
16-03-13 14:39:59.418 [DEBUG] [acdcli.api.backoff_req] - Retry 5, waiting 21.162848s
16-03-13 14:40:20.590 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=131072-524419071\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:40:21.118 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:40:21.119 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:40:21.119 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:40:21.120 [DEBUG] [acdcli.acd_fuse] - -> getattr / (None,)
16-03-13 14:40:21.121 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457773456.433, 'st_mode': 16895, 'st_atime': 1457876421.1211026, 'st_ctime': 1442076409.886}
16-03-13 14:40:21.121 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (950,)
16-03-13 14:40:21.121 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:40:21.122 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (950,)
16-03-13 14:40:21.122 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:40:21.122 [DEBUG] [acdcli.acd_fuse] - -> flush /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (950,)
16-03-13 14:40:21.122 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-13 14:40:21.122 [DEBUG] [acdcli.acd_fuse] - -> release /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (950,)
16-03-13 14:40:21.123 [DEBUG] [acdcli.acd_fuse] - <- release None
16-03-13 14:40:22.145 [DEBUG] [acdcli.acd_fuse] - -> getattr / (None,)
16-03-13 14:40:22.146 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457773456.433, 'st_mode': 16895, 'st_atime': 1457876422.1460435, 'st_ctime': 1442076409.886}
16-03-13 14:40:22.146 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs (None,)
16-03-13 14:40:22.147 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457382338.328, 'st_mode': 16895, 'st_atime': 1457876422.1471164, 'st_ctime': 1442085885.693}
16-03-13 14:40:22.147 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1 (None,)
16-03-13 14:40:22.148 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1454217796.617, 'st_mode': 16895, 'st_atime': 1457876422.1481872, 'st_ctime': 1454217795.477}
16-03-13 14:40:22.149 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi (None,)
16-03-13 14:40:22.149 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_nlink': 1, 'st_mtime': 1457421336.232, 'st_mode': 16895, 'st_atime': 1457876422.1496294, 'st_ctime': 1454217796.596}
16-03-13 14:40:22.150 [DEBUG] [acdcli.acd_fuse] - -> getattr /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (None,)
16-03-13 14:40:22.150 [DEBUG] [acdcli.acd_fuse] - <- getattr {'st_mode': 33206, 'st_ctime': 1454223568.932, 'st_nlink': 1, 'st_mtime': 1454224808.386, 'st_atime': 1457876422.1508038, 'st_size': 1462574402}
16-03-13 14:40:22.151 [DEBUG] [acdcli.acd_fuse] - -> open /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 ('0x8000',)
16-03-13 14:40:22.151 [DEBUG] [acdcli.acd_fuse] - <- open 951
16-03-13 14:40:22.151 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (16384, 0, 951)
16-03-13 14:40:22.152 [DEBUG] [acdcli.api.content] - chunk o 0 l 524288000
16-03-13 14:40:22.152 [WARNING] [acdcli.api.backoff_req] - Waiting 33.370684s because of error(s).
16-03-13 14:40:22.152 [DEBUG] [acdcli.api.backoff_req] - Retry 6, waiting 33.370684s
16-03-13 14:40:55.530 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=0-524287999\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:40:55.832 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:40:55.832 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:40:55.832 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:40:55.833 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 0, 951)
16-03-13 14:40:55.880 [DEBUG] [acdcli.api.content] - chunk o 0 l 524288000
16-03-13 14:40:55.880 [WARNING] [acdcli.api.backoff_req] - Waiting 12.891159s because of error(s).
16-03-13 14:40:55.880 [DEBUG] [acdcli.api.backoff_req] - Retry 7, waiting 12.891159s
16-03-13 14:41:08.783 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=0-524287999\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:41:09.964 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:41:09.966 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:41:09.966 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:41:09.966 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 131072, 951)
16-03-13 14:41:09.968 [DEBUG] [acdcli.api.content] - chunk o 131072 l 524288000
16-03-13 14:41:09.968 [WARNING] [acdcli.api.backoff_req] - Waiting 83.829851s because of error(s).
16-03-13 14:41:09.968 [DEBUG] [acdcli.api.backoff_req] - Retry 8, waiting 83.829851s
16-03-13 14:42:33.882 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content"
16-03-13 14:42:33.891 [INFO] [requests.packages.urllib3.connectionpool] - Resetting dropped connection: content-na.drive.amazonaws.com
16-03-13 14:42:33.892 [INFO] [requests.packages.urllib3.connectionpool] - Starting new HTTPS connection (702): content-na.drive.amazonaws.com
send: b'CONNECT content-na.drive.amazonaws.com:443 HTTP/1.0\r\n'
send: b'\r\n'
send: b'GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1\r\nHost: content-na.drive.amazonaws.com\r\nAccept-Encoding: gzip, deflate\r\nRange: bytes=131072-524419071\r\nConnection: keep-alive\r\nUser-Agent: acdcli.api/0.8.9 python-requests/2.9.1\r\nAuthorization: Bearer Atza|\r\nAccept: */*\r\n\r\n'
reply: 'HTTP/1.1 500 Internal Server Error\r\n'
header: Content-Type header: Date header: Server header: x-amzn-RequestId header: Content-Length header: Connection 16-03-13 14:42:35.108 [DEBUG] [requests.packages.urllib3.connectionpool] - "GET /cdproxy/nodes/bncUSLbmTiaDq9HnC07gIw/content HTTP/1.1" 500 30
16-03-13 14:42:35.109 [ERROR] [acdcli.acd_fuse] - get: RequestError: 500, {"message":"Internal failure"}
16-03-13 14:42:35.109 [DEBUG] [acdcli.acd_fuse] - <- read 28
16-03-13 14:42:35.109 [DEBUG] [acdcli.acd_fuse] - -> read /6ENeGl6NiZaV3g0zNkHRR-qs/Z1akLBynrP5iyuYgXod9zYL1/eLOKp062O8G5ga4KUlVe4IQi/mPqI-,JeTuk9lXWDTLsT4D-TvVQ,UrOcK73GZjQN7GtHj8-S61nyfkE77YA5J1V2jU7 (4096, 131072, 951)
16-03-13 14:42:35.110 [DEBUG] [acdcli.api.content] - chunk o 131072 l 524288000
16-03-13 14:42:35.110 [WARNING] [acdcli.api.backoff_req] - Waiting 170.212323s because of error(s).
16-03-13 14:42:35.111 [DEBUG] [acdcli.api.backoff_req] - Retry 9, waiting 170.212323s
@Niklasmaximilian See #283. Please contact Amazon's support.
I am finding that mine is getting unmounted every night. At first I thought it was related to #283, as @yadayada suggested. I found a few files that were 'corrupted' or had been lost (they couldn't be downloaded via the web interface or acd_cli). I have deleted those files and I still find I get unmounted.
I find I get these two entries a LOT
16-03-17 07:43:04.238 [DEBUG] [acdcli.acd_fuse] - <- release '[Unhandled Exception]'
16-03-17 07:43:04.248 [DEBUG] [acdcli.acd_fuse] - -> getattr /KxjzY5gOLel2bzkAMpHYUhuc (None,)
When it was a corrupted file, I would find an "Internal failure" near an attempted access of that file; since removing all those corrupted files, I can't find "Internal failure" anywhere in the logs.
I notice that it gets unmounted when uploading to ACD.
I am assuming the issue relates to something in these lines here:
16-03-17 07:43:04.015 [DEBUG] [acdcli.acd_fuse] - -> getattr /KxjzY5gOLel2bzkAMpHYUhuc/y4IG9rJF1U9g,oVYUevZ2CP0/aKgvV2jharEnVcSp88XumRG1/OGKPZCtXp4jCozPUnCOta7nf/Q3omaXkXVmPD0FE4nORxb7cMyPbuOlntQGJ4Y47uhbdJCG92dme0BD9WXByWwKoDV35 (None,)
16-03-17 07:43:04.019 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-03-17 07:43:04.235 [DEBUG] [acdcli.acd_fuse] - -> flush /KxjzY5gOLel2bzkAMpHYUhuc/y4IG9rJF1U9g,oVYUevZ2CP0/aKgvV2jharEnVcSp88XumRG1/OGKPZCtXp4jCozPUnCOta7nf/Q3omaXkXVmPD0FE4nORxb7cMyPbuOlntQGJ4Y47uhbdJCG92dme0BD9WXByWwKoDV35 (406,)
16-03-17 07:43:04.236 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-17 07:43:04.236 [DEBUG] [acdcli.acd_fuse] - -> flush /KxjzY5gOLel2bzkAMpHYUhuc/y4IG9rJF1U9g,oVYUevZ2CP0/aKgvV2jharEnVcSp88XumRG1/OGKPZCtXp4jCozPUnCOta7nf/Q3omaXkXVmPD0FE4nORxb7cMyPbuOlntQGJ4Y47uhbdJCG92dme0BD9WXByWwKoDV35 (406,)
16-03-17 07:43:04.237 [DEBUG] [acdcli.acd_fuse] - <- flush None
16-03-17 07:43:04.237 [DEBUG] [acdcli.acd_fuse] - -> release /KxjzY5gOLel2bzkAMpHYUhuc/y4IG9rJF1U9g,oVYUevZ2CP0/aKgvV2jharEnVcSp88XumRG1/OGKPZCtXp4jCozPUnCOta7nf/Q3omaXkXVmPD0FE4nORxb7cMyPbuOlntQGJ4Y47uhbdJCG92dme0BD9WXByWwKoDV35 (406,)
16-03-17 07:43:04.238 [DEBUG] [acdcli.acd_fuse] - <- release '[Unhandled Exception]'
16-03-17 07:43:04.248 [DEBUG] [acdcli.acd_fuse] - -> getattr /KxjzY5gOLel2bzkAMpHYUhuc (None,)
But I can't see why, as the file Q3omaXkXVmPD0FE4nORxb7cMyPbuOlntQGJ4Y47uhbdJCG92dme0BD9WXByWwKoDV35 was successfully accessed only a few lines earlier.
This morning I have had it unmount several times on me. Here is a link to the log: https://dl.dropboxusercontent.com/u/6390060/acd_cli.log
Hopefully this helps. Thanks,
@hjone72 Which OS are you using? Please also give me a log obtained by using acdcli -d mount -fg /mount/point.
Addendum: I did not suggest the original issue was related to #283.
I'm having the same issue as @hjone72. It's getting unmounted while uploading files with acd_cli upload. If I don't upload, it's stable. I'm using Ubuntu 14.10 server (minimal).
That may line up with the issues I had and sounds plausible.
However I have not encountered it since and have been uploading :confused:
I was using rclone to upload (as I can limit bandwidth) but "committing" with an acd_cli sync - and this was causing the issue too.
However, this was only on a newer acd_cli build on a newer VPS. I moved back to my older box with an older acd_cli version on it, and I now have 0 issues.
@yadayada It unmounted overnight but I haven't been able to get it to unmount this morning after remounting it. I'll keep playing and post the log as soon as I get it. Just to confirm, I am running this command:
acdcli -d mount -fg <mount/point> &>> <path/to/log.log>
Do you want the logs from last night when it unmounted?
I am running Ubuntu 15.10.
Thanks,
EDIT: It finally crashed this morning but I suspect that may have been caused by terminal timeout. I've remounted and prevented that from happening. I won't post the log until I see it fail again.
Okay, so for the first time in over a week ACD didn't unmount overnight. What does the -d do?
Would running it in foreground do anything?
Thanks,
I believe -d enables debug logging, so just watch your log files; I'm not sure if they rotate.
After doing some trial and error, I have found that I am getting a similar result to @geekcroft. I have started using rclone to upload files to ACD. Making this change has prevented the mount from unmounting overnight; however, once I run acd_cli sync, it unmounts.
Here is my latest log from when it has unmounted. https://dl.dropboxusercontent.com/u/6390060/logs/acdcli/2016.03.22-12pm.log
I am going to try and get that foreground log again tonight. Thanks,
EDIT: So once again overnight ACD has remained mounted when using the -fg parameter. Any ideas why this might be the case?
So once again overnight ACD has remained mounted when using the -fg parameter.
I also had this problem of having to unmount and mount my acd folder daily, sometimes multiple times, with ls taking a long time. However, since mounting with -fg several days ago, it has been stable. Thanks for the tip.
I am now running acd_cli in the foreground under byobu (screen) and have not had the issue since. I have also stopped using acdcli upload and started using rclone. I believe it may also have had to do with sync running on top of itself, as per #301. Thanks for everyone's assistance.
After upgrading to the latest repo last week, my FUSE read mount started breaking every few days. This never happened with a previous build (from February).
Here is the mount script that runs on boot:
acd_cli mount /home/msc/.acd-sorted/
acd_cli sync
encfs --extpass='echo pass' /home/msc/.local-sorted /home/msc/local-sorted
ENCFS6_CONFIG='/home/msc/encfs.xml' encfs --extpass='echo pass' /home/msc/.acd-sorted /home/msc/acd-sorted
unionfs-fuse -o cow /home/msc/local-sorted=RW:/home/msc/acd-sorted=RO /home/msc/sorted/
Update: I tried removing the acd_cli sync command from both my nightly upload script and my mount script, as it seems others were having overlapping sync issues, but I'm still having the same issue. @yadayada, are there any logs you want me to provide?
First off, I have deleted quite a few unrelated comments in this thread.
To everyone who experiences unmounting issues and wants to provide a log, please apply the following small patch:
diff --git a/acd_cli.py b/acd_cli.py
index ac3370d..68c0006 100755
--- a/acd_cli.py
+++ b/acd_cli.py
@@ -38,6 +38,8 @@ for plug_mod in iter_entry_points(group='acdcli.plugins', name=None):
_app_name = 'acd_cli'
+sys.stderr = open('acd_cli.log', 'w')
+
logger = logging.getLogger(_app_name)
# path settings
And then mount in debug mode without setting the foreground flag. I hope this will log your unhandled exceptions (into an "acd_cli.log" in the invocation path).
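For anyone curious why this helps: fusepy writes unhandled-exception tracebacks to stderr via the print_exc calls mentioned earlier, and once the mount daemonizes that output has nowhere useful to go. Below is a minimal, self-contained sketch of the same stderr redirect; the file name acd_cli.log matches the patch, while the simulated exception is just a stand-in.
import sys
import traceback
# Same idea as the patch: anything written to stderr from now on, including
# traceback.print_exc() output, lands in acd_cli.log instead of a terminal
# that may no longer exist once the mount has daemonized.
sys.stderr = open('acd_cli.log', 'w')
try:
    raise RuntimeError('simulated unhandled FUSE exception')  # stand-in error
except RuntimeError:
    traceback.print_exc()  # written to acd_cli.log
sys.stderr.flush()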
I manually updated acd_cli.py, as I couldn't figure out how to apply the patch, and I've launched the mount in debug mode. Patiently waiting for an unmount ;)
I get these unhandled exceptions on getattr / when uploading while the filesystem is mounted.
16-04-24 19:24:41.179 [DEBUG] [acdcli.acd_fuse] - -> getattr / (None,)
16-04-24 19:24:41.179 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
Traceback (most recent call last):
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/bundled/fuse.py", line 432, in _wrapper
return func(*args, **kwargs) or 0
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/bundled/fuse.py", line 440, in getattr
return self.fgetattr(path, buf, None)
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/bundled/fuse.py", line 686, in fgetattr
attrs = self.operations('getattr', path.decode(self.encoding), fh)
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/acd_fuse.py", line 360, in __call__
ret = getattr(self, op)(path, *args)
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/acd_fuse.py", line 441, in getattr
node = self.cache.resolve(path)
File "/home/cms/MEGA/PycharmProjects/aCloud/acdcli/cache/query.py", line 160, in resolve
c.execute(NODE_BY_ID_SQL, [self.root_id])
sqlite3.DatabaseError: database disk image is malformed
I haven't had a FUSE unmount ever since I enabled debug logging... strange.
I too am experiencing this issue after upgrading to the latest stable version of API 0.8.9. I would downgrade, but I really do not recall what version I was running previously that was perfectly stable. My nightly upload script used to call acd_cli sync before and after the upload process, which now breaks the mount immediately every time. I removed the sync step and the upload works sometimes, but it still kills the mount frequently. This process had been working flawlessly since November (kicking myself for upgrading needlessly). I'm travelling this week but will enable the debugging when I return next week to help track down the issue.
The FUSE mount just errored out again. Debug logs attached: acdlogs.zip
I'm still having this issue on a new server; is there anything I can provide to help diagnose it?
@yadayada Forgive me for being ignorant, but am I literally just running that patch as a script, and if so, where do I need to be to run it?
I'm on Ubuntu 15.10 and installed via the "pip3 install --upgrade git+https://github.com/yadayada/acd_cli.git" command.
@yadayada I'm still getting this error a lot; it seems to occur after running the sync command a second time since mounting and then reading files. I've added the code in comment #290 with no luck in the logs:
16-06-06 22:08:41.883 [INFO] [acdcli.api.metadata] - Getting changes with checkpoint "CJH02sHSKhoA".
16-06-06 22:08:41.883 [INFO] [acdcli.api.backoff_req] - POST "https://cdws.us-east-1.amazonaws.com/drive/v1/changes"
16-06-06 22:08:41.883 [DEBUG] [acdcli.api.backoff_req] - {"checkpoint": "CJH02sHSKhoA", "includePurged": "true"}
16-06-06 22:08:41.885 [INFO] [requests.packages.urllib3.connectionpool] - Starting new HTTPS connection (1): cdws.us-east-1.amazonaws.com
16-06-06 22:08:42.251 [DEBUG] [requests.packages.urllib3.connectionpool] - "POST /drive/v1/changes HTTP/1.1" 200 None
16-06-06 22:08:42.251 [DEBUG] [acdcli.api.backoff_req] - x-amzn-RequestId: c1d00866-2c54-11e6-91d6-955938fe3b09
16-06-06 22:08:42.252 [DEBUG] [acdcli.api.metadata] - Checkpoint: CIHK58TSKhqrAQgsGqYBARQAAAAUihG-Ti7hvb6q7gZADJ1yxyttZIcJKtLBecuBiqYT1lMk3lqOY2c4NW2uiKoojqI2D0u6z2L6jtxWTWAq-
gBqOPsGCkZIkD1KBDy4_NAIjIpMrRRrOmviutqzORKO6LmkIaJRo4f76qk2SA_6n-ljXDPKqrlhpPg66ubtfKJu-5rW--6gfqMyHjILmp8LGEIggnLOavrq6OhuvuayDWIqcA==
16-06-06 22:08:42.258 [INFO] [acdcli.cache.sync] - Inserted/updated 22 folder(s).
16-06-06 22:08:42.264 [INFO] [acdcli.cache.sync] - Inserted/updated 22 file(s).
16-06-06 22:08:42.269 [INFO] [acdcli.cache.sync] - Parented 44 node(s).
16-06-06 22:08:42.271 [INFO] [acdcli.api.metadata] - 1 page(s) in changes.
16-06-06 22:08:44.308 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-06 22:08:44.309 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-06-06 22:08:44.321 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-06 22:08:44.322 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-06-06 22:08:44.322 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-06 22:08:44.322 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-06-06 22:08:44.323 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-06 22:08:44.324 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-06-06 22:08:44.325 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-06 22:08:44.325 [DEBUG] [acdcli.acd_fuse] - <- getattr '[Unhandled Exception]'
16-06-06 22:08:44.348 [DEBUG] [acdcli.acd_fuse] - -> destroy / ()
16-06-06 22:08:44.348 [DEBUG] [acdcli.acd_fuse] - <- destroy None
Although here's what I got when I added this to the LoggingMixIn class in acd_fuse.py:
except Exception as e:
    logging.exception(str(e))
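(For context, here is a sketch of roughly where that clause slots in, modelled on fusepy's LoggingMixIn; the bundled fuse wrapper in acd_cli may differ in its details. The log it produced follows below.)
import logging

class LoggingMixIn:
    # Sketch only: fusepy-style operation logger with the extra except clause added.
    log = logging.getLogger('fuse.log-mixin')

    def __call__(self, op, path, *args):
        self.log.debug('-> %s %s %s', op, path, repr(args))
        ret = '[Unhandled Exception]'
        try:
            ret = getattr(self, op)(path, *args)
            return ret
        except OSError as e:
            ret = str(e)
            raise
        except Exception as e:
            # The added lines: record the full traceback instead of only the
            # '[Unhandled Exception]' placeholder seen earlier in this thread.
            logging.exception(str(e))
            ret = str(e)
            raise
        finally:
            self.log.debug('<- %s %s', op, repr(ret))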
16-06-07 00:15:13.458 [DEBUG] [acdcli.acd_fuse] - <- getattr 'database disk image is malformed'
16-06-07 00:15:13.458 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-07 00:15:13.458 [ERROR] [root] - database disk image is malformed
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/acdcli/acd_fuse.py", line 363, in __call__
ret = getattr(self, op)(path, *args)
File "/usr/local/lib/python3.5/dist-packages/acdcli/acd_fuse.py", line 445, in getattr
node = self.cache.resolve(path)
File "/usr/local/lib/python3.5/dist-packages/acdcli/cache/query.py", line 167, in resolve
c.execute(CHILD_OF_SQL, [segment, parent])
sqlite3.DatabaseError: database disk image is malformed
I unmounted, synced, and remounted, and it worked again. I synced two more times and got the below:
16-06-07 00:17:16.997 [DEBUG] [acdcli.acd_fuse] - <- getattr 'disk I/O error'
16-06-07 00:17:16.997 [DEBUG] [acdcli.acd_fuse] - -> getattr /Misc (None,)
16-06-07 00:17:16.997 [ERROR] [root] - disk I/O error
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/acdcli/acd_fuse.py", line 363, in __call__
ret = getattr(self, op)(path, *args)
File "/usr/local/lib/python3.5/dist-packages/acdcli/acd_fuse.py", line 445, in getattr
node = self.cache.resolve(path)
File "/usr/local/lib/python3.5/dist-packages/acdcli/cache/query.py", line 167, in resolve
c.execute(CHILD_OF_SQL, [segment, parent])
sqlite3.OperationalError: disk I/O error
Please let me know if I can be of assistance in debugging this and thank you for all your great work.
Removing ~/.cache/acd_cli/nodes.db and then re-syncing and re-mounting appears to fix this, at least temporarily. I'll report back if it breaks again.
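(A side note for anyone hitting the 'database disk image is malformed' / 'disk I/O error' tracebacks above: before deleting the cache outright, SQLite's own integrity check can confirm whether nodes.db is actually corrupted. This is only a sketch, using the cache path mentioned in the previous comment; adjust it if your setup differs.)
import sqlite3
from pathlib import Path

# Path from the comment above; change it if acd_cli's cache lives elsewhere.
db_path = Path.home() / '.cache' / 'acd_cli' / 'nodes.db'
conn = sqlite3.connect(str(db_path))
try:
    # A healthy database prints a single 'ok'; anything else describes the damage found.
    for row in conn.execute('PRAGMA integrity_check;'):
        print(row[0])
finally:
    conn.close()
If it reports anything other than ok, deleting the file and re-syncing, as described above, is probably the simplest recovery.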
I am getting this issue again after switching to my own OAuth handler. I had not had it since March, so I find it unlikely that the two are unrelated.
Going to update to the latest master now and see if it keeps happening.
I am not too sure what's going on here. I recently updated to the version of acd_cli from master, and I have issues with the FUSE read mount. This has happened more than once now. The last two lines of this log just loop thousands and thousands of times.