Closed silenceleaf closed 6 years ago
Yes please. Run with --verbose on both the --first-sync run and the second run. Also please carefully read troubleshooting.md and readme.md.
Cjn
On Thu, Sep 6, 2018, 9:40 PM silenceleaf notifications@github.com wrote:
When I sync my Dropbox with rclone, the first run works great, but the second run reports missing lsl files and tells me to use the --first-sync parameter.
Do you need a more detailed log?
log is here:
A couple of interesting things here.
It looks like your journal prefix (Sep 11 23:16:08 server1 python3[343]:) is being printed on all lines. How are you invoking rclonesync to cause this to happen (just in case this has something to do with the problem)?
rclone is generating the error when trying to get the file list (lsl) for dropbox:. The problem is likely in the setup of rclone, or a temporary problem with dropbox:

2018/09/11 23:16:08 ERROR : : error listing: Post https://api.dropboxapi.com/2/files/list_folder: dial tcp: lookup api.dropboxapi.com: Temporary failure in name resolution
Notice the '/'s in the date stamp.
What version of rclone are you using? Please run and copy/paste rclone -V
to this thread. I am running with:
$ rclone -V
rclone v1.42
- os/arch: linux/amd64
- go version: go1.10.1
I also ran with the latest beta without errors:
$ rclone -V
rclone v1.43-083-gb18595ae-beta
- os/arch: linux/amd64
- go version: go1.11
Try running the rclone lsl manually from the command line: rclone lsl dropbox: --verbose
, as suggested in https://github.com/cjnaz/rclonesync-V2/blob/master/TROUBLESHOOTING.md
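A minimal guarded sketch of that sanity check (the dropbox: remote name is from this thread; substitute your own):

```shell
# Quick sanity check: if plain `rclone lsl` fails here, the problem is
# in the rclone/remote configuration or connectivity, not in rclonesync.
if command -v rclone >/dev/null 2>&1; then
    rclone lsl dropbox: --verbose || echo "lsl failed: check 'rclone config' and network"
else
    echo "rclone not found on PATH"
fi
```

If this command fails on its own, fix the rclone setup first before debugging rclonesync.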
When you figure this out, please post the root cause and solution.
Any news? Did you get this resolved, and may I close this issue? If so, what was the issue, for others' benefit.
No. The service ran very smoothly for a couple of days, and then the missing lsl files error came back!
Background:
rclonesync is scheduled by a systemd timer, running every 4 hours.
The error seems to happen when a connection to Dropbox fails; after that, all subsequent sync attempts fail (I must manually run --first-sync to recover).
As I understand it, in a robust system a one-time failure should not affect subsequent operations.
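For reference, an every-4-hours systemd service/timer pair looks roughly like this (unit names, the script path, and the sync paths are illustrative, not taken from this thread):

```ini
# /etc/systemd/system/rclonesync.service
[Unit]
Description=Rclone Sync Dropbox

[Service]
Type=oneshot
ExecStart=/usr/bin/python3 /usr/local/bin/rclonesync.py --check-access dropbox: /home/user/Dropbox

# /etc/systemd/system/rclonesync.timer
[Unit]
Description=Run rclonesync every 4 hours

[Timer]
OnCalendar=*-*-* 00/4:00:00
Persistent=true

[Install]
WantedBy=timers.target
```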
log is here:
Sep 22 12:28:09 server1 systemd[1]: Started Rclone Sync Dropbox.
Sep 22 12:28:09 server1 python3[358]: 2018-09-22 12:28:09,963: BiDirectional Sync for Cloud Services using rclone
Sep 22 12:28:10 server1 python3[358]: 2018-09-22 12:28:09,964: Synching Path1
Ah. Turn on the check access feature. You will need to place one or more RCLONE_TEST files in your sync tree. See the documentation. This feature is intended to address the intermittent cloud service access issue gracefully. If the run fails to find the test files it will abort and try again next time. I run this script every 30min for months without a critical abort, fyi.
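The access check described above can be sketched locally like this (paths are illustrative; the real implementation checks the rclone file listings on both sides):

```shell
# Local sketch of --check-access: a marker file must exist on BOTH
# sides of the sync, else the run aborts gracefully and leaves the
# lsl history intact for the next scheduled attempt.
mkdir -p /tmp/path1 /tmp/path2
touch /tmp/path1/RCLONE_TEST /tmp/path2/RCLONE_TEST

for p in /tmp/path1 /tmp/path2; do
    [ -f "$p/RCLONE_TEST" ] || { echo "access check failed on $p - aborting"; exit 1; }
done
echo "access check passed - safe to sync"
```

Because a failed check aborts before any file operations, an intermittent cloud outage costs one skipped run instead of a forced --first-sync recovery.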
Any luck with --check-access? Can I close this issue?
fyi, new version 2.3 just posted.
Sorry for the late response!
I think that's not a big deal, you can close this thread.
I have a small change request here:
If I use the --check-access option, rather than requiring the file name to be strictly RCLONE_TEST, make it only need to contain RCLONE_TEST. For example, I'd like to use .RCLONE_TEST (to make it a hidden file) or RCLONE_TEST.txt (so I can write something inside).
Thanks for this handy tool, and thanks for your effort!
Good idea. Alternatively, I might add a switch to allow selection of the test filename, overriding the current default. Wildcarding the filter (RCLONE_TEST) may be cheap enough, but allowing user selection is the most flexible and is cheap/easy to implement. Sound reasonable?
Added --check-filename switch in V2.4.
sounds good!
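A sketch of the V2.4 override in use, with the hidden filename requested above (the /tmp/Sync path and GDrive: remote are illustrative):

```shell
# Place the custom hidden check file on the local side of the sync tree.
mkdir -p /tmp/Sync
touch /tmp/Sync/.RCLONE_TEST

# The remote side needs a matching copy, e.g.:
#   rclone copy /tmp/Sync/.RCLONE_TEST GDrive:
# Then run with the filename override:
#   rclonesync.py GDrive: /tmp/Sync --check-access --check-filename .RCLONE_TEST

ls -A /tmp/Sync
```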
I'm facing this error now even with v2.4 and using RCLONE_TEST with --check-filename. Which logs would you require? I'll add some here as I get them.
EDIT: Logs
This current issue (#10) is closed, and the issues in your log are not related to issue #10. Your log shows a couple of issues:
1. Google doc files cannot be synced. See the README.
2. I've not tried symlinks. I suggest that you try turning on rclone's --copy-links switch in the rclone.conf file or via an environment variable. See https://rclone.org/docs/#environment-variables.
Well, the problem is not actually with the logs. I am facing the issue of losing the lsl files in ~/.rclonesyncwd, which doesn't seem to get reflected in the logs. I am also aware of the Google Docs limitation, and I'm fine with those files not being synced.
How would you suggest capturing logs the next time it happens? Because the lsl files are lost on random days (approx. twice a week), I have no idea how to catch the reason for the error when it occurs.
Do you also have --check-access turned on? --check-filename is optional and only needed if you want to override the default check filename.
-c, --check-access Ensure expected RCLONE_TEST files are found on both
path1 and path2 filesystems, else abort.
--check-filename CHECK_FILENAME
Filename for --check-access (default is
<RCLONE_TEST>).
Yup. I added the RCLONE_TEST file in the root directory for all of my cloud storage drives and started using that flag ever since this bug started occurring for me.
I have now started logging every sync into a logfile, and I'll report here when I face this issue next.
Just to be sure, post the Command line: <Namespace(Path1=... line from your normal (non --first-sync) runs.
The command is this:
/mnt/Data/Programs/Bash/rclonesync.py --check-access "GDrive:" "/mnt/Data/Google Drive"
This is what that line says:
2018-11-20 21:30:01,259: Synching Path1 <GDrive:> with Path2 </mnt/Data/Google Drive/>
For debug, turn on verbose (--verbose) and rclone verbose (--rc-verbose), and redirect the run output (concatenated) to a file, as in the Running from cron example: https://github.com/cjnaz/rclonesync-V2/blob/master/TROUBLESHOOTING.md#running-from-cron.
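A minimal wrapper along those lines, assembled from the flags in this thread (the log path is illustrative; the rclonesync call is commented out since it needs a live setup):

```shell
#!/bin/sh
# Append each run's full output to one log file so intermittent
# failures can be inspected after the fact.
LOG=/tmp/rclonesync.log

echo "=== run started: $(date -u '+%Y-%m-%d %H:%M:%S') ===" >> "$LOG"
# /mnt/Data/Programs/Bash/rclonesync.py --check-access --verbose --rc-verbose \
#     "GDrive:" "/mnt/Data/Google Drive" >> "$LOG" 2>&1

tail -n 1 "$LOG"
```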
Did it. Here's the output:
Command line: <Namespace(Path1='GDrive:', Path2='/mnt/Data/Google Drive', check_access=True, check_filename='RCLONE_TEST', dry_run=False, filters_file=None, first_sync=False, force=False, max_deletes=50, no_datetime_log=False, rc_verbose=1, rclone='rclone', remove_empty_directories=False, verbose=True, workdir='/home/rharish/.rclonesyncwd')>
The invocation (command line) looks fine. Let it run with output to the log file until it fails.
Lost the lsl files for two cloud sync setups (Google Drive and Dropbox; Box wasn't affected). Here's the part of the logfile for today (22/11/2018): rclonesync.log
I suspect that this happens when rclonesync gets killed, probably when shutting down at a time when the cron job for rclonesync is running.
The first error is that the Path2 file CS330A/assignment-4/Makefile was no longer there when rclonesync went to copy it to Path1. This is a case where rclonesync flags that something has happened that cannot be safely recovered from, so it moves the LSL history files.
What's puzzling is that immediately after, on log line 20904, the Dropbox: run failed to find its LSL files. The prior GDrive: failure should not have touched the Dropbox: LSL files. Do GDrive: and Dropbox: always/frequently die in pairs? Check the LSL files in the working directory for clues. And then the immediately following Box: run fails gracefully (it fails the access check).
You might want to weigh in on https://github.com/cjnaz/rclonesync-V2/issues/8. The missing-Makefile case isn't covered in that discussion; it carries no risk of data loss, but it is not well handled.
If you can narrow in on the issue, please open a new issue.