alex-phillips / node-clouddrive

Node SDK and CLI for Amazon's Cloud Drive

clouddrive sync taking long #13

Closed. kpabba closed this issue 8 years ago.

kpabba commented 8 years ago

Hi,

I ran the clouddrive sync for the first time; in theory this should sync the local cache with ACD (no actual file sync). There is about 40GB (30,000 pics or so) of data on ACD; it's been about 24 hours and the sync is still going.

Questions:

  1. Is this normal for the sync to take this long for that amount of data?
  2. How do I know what exactly is going on? All I see is the message Syncing... I don't see a verbose option or anything I could use to see more details, or is there one that I'm missing?

Thanks

alex-phillips commented 8 years ago

@kpabba Syncing could take some time, but it should not take more than a few minutes, maximum, for 30k files. I have over 1TB of data and, with the current release, I think it takes my laptop 5-10 minutes. (Note: I have made performance modifications in the master branch for the next release that reduce syncing time significantly.)

Although it's still in development, I would recommend grabbing the master branch and trying to sync with that. There is also the possibility that the sync ran longer than the access token's lifetime before the token was refreshed (another issue resolved in the master branch). I would ctrl+c to stop the process and start it up again.

Verbose logging is also something I'm working on adding and will be in the next release.

Let me know if this doesn't solve your issue.

kpabba commented 8 years ago

Thanks for the quick reply, Alex.

Although it's still in development, I would recommend grabbing the master branch and trying to sync with that.

Is there a command to do this? Any documentation would help. For the initial installation, I followed the readme doc. Below is the version I have now.

pi@raspberrypi:/media/pi/SETTINGS $ clouddrive -V
0.2.2

There is also the possibility that the sync ran longer than the access token's lifetime before the token was refreshed (another issue resolved in the master branch). I would ctrl+c to stop the process and start it up again.

Ok, I tried what you said: cancelled the sync and started it again, but it still hadn't finished after 30 mins. I cancelled it again. Below is some more info which might be helpful.

pi@raspberrypi:/media/pi/SETTINGS $ clouddrive ls
No node by name 'Cloud Drive' found in the local cache
pi@raspberrypi:/media/pi/SETTINGS $ clouddrive info
{"termsOfUse":"1.0.0","status":"ACTIVE"}
pi@raspberrypi:/media/pi/SETTINGS $ clouddrive usage
{"other":{"total":{"bytes":1743296206,"count":82988},"billable":{"bytes":0,"count":0}},"doc":{"total":
{"bytes":39339,"count":318},"billable":{"bytes":0,"count":0}},"photo":{"total":
{"bytes":40247239198,"count":85337},"billable":{"bytes":0,"count":0}},"video":{"total":
{"bytes":0,"count":0},"billable":{"bytes":0,"count":0}},"lastCalculated":"2015-12-31T02:00:05.818Z"}
pi@raspberrypi:/media/pi/SETTINGS $ 

Now, finally, what format do I use for the ACD [dest] in the below command?

upload [options] <src> [dest]    Upload local file or folder to remote directory

alex-phillips commented 8 years ago

Is there a command to do this? Any documentation would help. For the initial installation, I followed the readme doc. Below is the version I have now.

First, do an npm uninstall -g clouddrive to remove the current version. Then, if you clone the master branch (git clone git@github.com:alex-phillips/node-clouddrive.git), cd into that directory and simply run npm install -g . and it will install that version globally. (Note: although verbose output is an available option in the new branch, there is currently no extra information provided for the sync command.)
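In full, the swap would look something like this (assuming git and npm are already on your PATH):

npm uninstall -g clouddrive
git clone git@github.com:alex-phillips/node-clouddrive.git
cd node-clouddrive
npm install -g .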

If the current master branch doesn't help (my guess is that the sync should finish in a few minutes, depending on your hardware spec), let me know.

Also, if you use the sqlite3 command line, you can open up the cache database and make sure there are actually nodes being written (select count(*) from nodes;).
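For example, something like this (the cache database lives in ~/.cache/clouddrive-node/ and is named after your account email):

sqlite3 ~/.cache/clouddrive-node/EMAIL.db 'select count(*) from nodes;'

If that count grows each time you check, the sync is still writing nodes.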

Let me know!

kpabba commented 8 years ago

Uninstalled successfully and tried to clone the master branch, with the below error.

pi@raspberrypi:~/Desktop $ git clone git@github.com:alex-phillips/node-clouddrive.git
Cloning into 'node-clouddrive'...
Permission denied (publickey).
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
pi@raspberrypi:~/Desktop $ 

alex-phillips commented 8 years ago

Hrm... that git@ URL clones over SSH, so it needs an SSH key set up with GitHub.

What you could do instead is download the zip of the repo and unzip and install from there:

wget https://github.com/alex-phillips/node-clouddrive/archive/master.zip
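Then unzip and install from the extracted directory, same as before (the archive unpacks to node-clouddrive-master):

unzip master.zip
cd node-clouddrive-master
npm install -g .
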
kpabba commented 8 years ago

Thanks, that worked, but it still didn't solve the initial sync problem.

Tried this first... which resulted in an error:

pi@raspberrypi:~/Downloads/node-clouddrive-master $ sudo npm install -g .
gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.node-gyp/5.2.0"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/.node-gyp"
make: Entering directory '/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/build'
make: *** No rule to make target '../.node-gyp/5.2.0/include/node/common.gypi', needed by 'Makefile'.  Stop.
make: Leaving directory '/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/build'
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:270:23)
gyp ERR! stack     at emitTwo (events.js:88:13)
gyp ERR! stack     at ChildProcess.emit (events.js:173:7)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:201:12)
gyp ERR! System Linux 4.1.13-v7+
gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "build" "--fallback-to-build" "--module=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm/node_sqlite3.node" "--module_name=node_sqlite3" "--module_path=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm"
gyp ERR! cwd /usr/local/lib/node_modules/clouddrive/node_modules/sqlite3
gyp ERR! node -v v5.2.0
gyp ERR! node-gyp -v v3.0.3
gyp ERR! not ok 
node-pre-gyp ERR! build error 
node-pre-gyp ERR! stack Error: Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm/node_sqlite3.node --module_name=node_sqlite3 --module_path=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm' (1)
node-pre-gyp ERR! stack     at ChildProcess.<anonymous> (/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/node_modules/node-pre-gyp/lib/util/compile.js:83:29)
node-pre-gyp ERR! stack     at emitTwo (events.js:88:13)
node-pre-gyp ERR! stack     at ChildProcess.emit (events.js:173:7)
node-pre-gyp ERR! stack     at maybeClose (internal/child_process.js:819:16)
node-pre-gyp ERR! stack     at Process.ChildProcess._handle.onexit (internal/child_process.js:212:5)
node-pre-gyp ERR! System Linux 4.1.13-v7+
node-pre-gyp ERR! command "/usr/local/bin/node" "/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/node_modules/.bin/node-pre-gyp" "install" "--fallback-to-build"
node-pre-gyp ERR! cwd /usr/local/lib/node_modules/clouddrive/node_modules/sqlite3
node-pre-gyp ERR! node -v v5.2.0
node-pre-gyp ERR! node-pre-gyp -v v0.6.14
node-pre-gyp ERR! not ok 
Failed to execute '/usr/local/bin/node /usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js build --fallback-to-build --module=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm/node_sqlite3.node --module_name=node_sqlite3 --module_path=/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm' (1)
npm WARN install:sqlite3@3.1.1 sqlite3@3.1.1 install: `node-pre-gyp install --fallback-to-build`
npm WARN install:sqlite3@3.1.1 Exit status 1
/usr/local/lib
└── (empty)

npm ERR! code 1

Then tried --unsafe-perm as suggested in node-gyp #454, with the below result. Looks like a successful installation?

pi@raspberrypi:~/Downloads/node-clouddrive-master $ sudo npm install --unsafe-perm -g .
/usr/local/bin/clouddrive -> /usr/local/lib/node_modules/clouddrive/bin/clouddrive.js

> sqlite3@3.1.1 install /usr/local/lib/node_modules/clouddrive/node_modules/sqlite3
> node-pre-gyp install --fallback-to-build

make: Entering directory '/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/build'
  ACTION deps_sqlite3_gyp_action_before_build_target_unpack_sqlite_dep Release/obj/gen/sqlite-autoconf-3090100/sqlite3.c
  TOUCH Release/obj.target/deps/action_before_build.stamp
  CC(target) Release/obj.target/sqlite3/gen/sqlite-autoconf-3090100/sqlite3.o
  AR(target) Release/obj.target/deps/sqlite3.a
  COPY Release/sqlite3.a
  CXX(target) Release/obj.target/node_sqlite3/src/database.o
../src/database.cc: In static member function ‘static void node_sqlite3::Database::Work_BeginOpen(node_sqlite3::Database::Baton*)’:
../src/database.cc:143:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),
         ^
../src/database.cc: In static member function ‘static void node_sqlite3::Database::Work_BeginClose(node_sqlite3::Database::Baton*)’:
../src/database.cc:227:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),
         ^
../src/database.cc: In static member function ‘static void node_sqlite3::Database::Work_BeginExec(node_sqlite3::Database::Baton*)’:
../src/database.cc:505:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),
         ^
../src/database.cc: In static member function ‘static void node_sqlite3::Database::Work_BeginLoadExtension(node_sqlite3::Database::Baton*)’:
../src/database.cc:605:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),
         ^
  CXX(target) Release/obj.target/node_sqlite3/src/node_sqlite3.o
  CXX(target) Release/obj.target/node_sqlite3/src/statement.o
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginPrepare(node_sqlite3::Database::Baton*)’:
../src/statement.cc:118:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),
         ^
In file included from ../src/statement.cc:6:0:
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginBind(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:322:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(Bind);
     ^
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginGet(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:370:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(Get);
     ^
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginRun(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:438:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(Run);
     ^
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginAll(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:504:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(All);
     ^
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginEach(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:601:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(Each);
     ^
../src/statement.cc: In static member function ‘static void node_sqlite3::Statement::Work_BeginReset(node_sqlite3::Statement::Baton*)’:
../src/macros.h:125:9: warning: unused variable ‘status’ [-Wunused-variable]
     int status = uv_queue_work(uv_default_loop(),                              \
         ^
../src/statement.cc:724:5: note: in expansion of macro ‘STATEMENT_BEGIN’
     STATEMENT_BEGIN(Reset);
     ^
  SOLINK_MODULE(target) Release/obj.target/node_sqlite3.node
  COPY Release/node_sqlite3.node
  COPY /usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/lib/binding/node-v47-linux-arm/node_sqlite3.node
  TOUCH Release/obj.target/action_after_build.stamp
make: Leaving directory '/usr/local/lib/node_modules/clouddrive/node_modules/sqlite3/build'
/usr/local/lib
└─┬ clouddrive@0.2.2 
  ├── async@1.5.0 
  ├── colors@1.1.2 
  ├─┬ commander@2.9.0 
  │ └── graceful-readlink@1.0.1 
  ├── elegant-spinner@1.0.1 
  ├── heredoc@1.3.1 
  ├─┬ inquirer@0.10.1 
  │ ├── ansi-escapes@1.1.0 
  │ ├── ansi-regex@2.0.0 
  │ ├─┬ chalk@1.1.1 
  │ │ ├── ansi-styles@2.1.0 
  │ │ ├── escape-string-regexp@1.0.4 
  │ │ ├── has-ansi@2.0.0 
  │ │ └── supports-color@2.0.0 
  │ ├─┬ cli-cursor@1.0.2 
  │ │ └─┬ restore-cursor@1.0.1 
  │ │   ├── exit-hook@1.1.1 
  │ │   └── onetime@1.1.0 
  │ ├── cli-width@1.1.0 
  │ ├── figures@1.4.0 
  │ ├── lodash@3.10.1 
  │ ├─┬ readline2@1.0.1 
  │ │ ├─┬ code-point-at@1.0.0 
  │ │ │ └── number-is-nan@1.0.0 
  │ │ ├── is-fullwidth-code-point@1.0.0 
  │ │ └── mute-stream@0.0.5 
  │ ├─┬ run-async@0.1.0 
  │ │ └─┬ once@1.3.3 
  │ │   └── wrappy@1.0.1 
  │ ├── rx-lite@3.1.2 
  │ ├── strip-ansi@3.0.0 
  │ └── through@2.3.8 
  ├─┬ knex@0.8.6 
  │ ├── bluebird@2.10.2 
  │ ├─┬ debug@2.2.0 
  │ │ └── ms@0.7.1 
  │ ├── inherits@2.0.1 
  │ ├── interpret@0.5.2 
  │ ├─┬ liftoff@2.0.3 
  │ │ ├── extend@2.0.1 
  │ │ ├─┬ findup-sync@0.2.1 
  │ │ │ └─┬ glob@4.3.5 
  │ │ │   ├── inflight@1.0.4 
  │ │ │   └─┬ minimatch@2.0.10 
  │ │ │     └─┬ brace-expansion@1.1.2 
  │ │ │       ├── balanced-match@0.3.0 
  │ │ │       └── concat-map@0.0.1 
  │ │ ├── flagged-respawn@0.3.1 
  │ │ └── resolve@1.1.6 
  │ ├── minimist@1.1.3 
  │ ├─┬ mkdirp@0.5.1 
  │ │ └── minimist@0.0.8 
  │ ├─┬ pool2@1.3.0 
  │ │ ├── double-ended-queue@2.1.0-0 
  │ │ ├── hashmap@2.0.4 
  │ │ └── simple-backoff@1.0.0 
  │ ├─┬ readable-stream@1.1.13 
  │ │ ├── core-util-is@1.0.2 
  │ │ ├── isarray@0.0.1 
  │ │ └── string_decoder@0.10.31 
  │ ├─┬ tildify@1.0.0 
  │ │ └── user-home@1.1.1 
  │ └── v8flags@2.0.11 
  ├── moment@2.10.6 
  ├── open@0.0.5 
  ├── progress@1.1.8 
  ├─┬ promise@7.1.1 
  │ └── asap@2.0.3 
  ├─┬ request@2.67.0 
  │ ├── aws-sign2@0.6.0 
  │ ├─┬ bl@1.0.0 
  │ │ └─┬ readable-stream@2.0.5 
  │ │   ├── process-nextick-args@1.0.6 
  │ │   └── util-deprecate@1.0.2 
  │ ├── caseless@0.11.0 
  │ ├─┬ combined-stream@1.0.5 
  │ │ └── delayed-stream@1.0.0 
  │ ├── extend@3.0.0 
  │ ├── forever-agent@0.6.1 
  │ ├── form-data@1.0.0-rc3 
  │ ├─┬ har-validator@2.0.3 
  │ │ ├─┬ is-my-json-valid@2.12.3 
  │ │ │ ├── generate-function@2.0.0 
  │ │ │ ├─┬ generate-object-property@1.2.0 
  │ │ │ │ └── is-property@1.0.2 
  │ │ │ ├── jsonpointer@2.0.0 
  │ │ │ └── xtend@4.0.1 
  │ │ └─┬ pinkie-promise@2.0.0 
  │ │   └── pinkie@2.0.1 
  │ ├─┬ hawk@3.1.2 
  │ │ ├── boom@2.10.1 
  │ │ ├── cryptiles@2.0.5 
  │ │ ├── hoek@2.16.3 
  │ │ └── sntp@1.0.9 
  │ ├─┬ http-signature@1.1.0 
  │ │ ├── assert-plus@0.1.5 
  │ │ ├─┬ jsprim@1.2.2 
  │ │ │ ├── extsprintf@1.0.2 
  │ │ │ ├── json-schema@0.2.2 
  │ │ │ └── verror@1.3.6 
  │ │ └─┬ sshpk@1.7.1 
  │ │   ├── asn1@0.2.3 
  │ │   ├── assert-plus@0.2.0 
  │ │   ├── dashdash@1.10.1 
  │ │   ├── ecc-jsbn@0.1.1 
  │ │   ├── jodid25519@1.0.2 
  │ │   ├── jsbn@0.1.0 
  │ │   └── tweetnacl@0.13.2 
  │ ├── is-typedarray@1.0.0 
  │ ├── isstream@0.1.2 
  │ ├── json-stringify-safe@5.0.1 
  │ ├─┬ mime-types@2.1.8 
  │ │ └── mime-db@1.20.0 
  │ ├── node-uuid@1.4.7 
  │ ├── oauth-sign@0.8.0 
  │ ├── qs@5.2.0 
  │ ├── stringstream@0.0.5 
  │ ├── tough-cookie@2.2.1 
  │ └── tunnel-agent@0.4.2 
  └─┬ sqlite3@3.1.1 
    ├── nan@2.1.0 
    └─┬ node-pre-gyp@0.6.14
      ├─┬ mkdirp@0.5.1 
      │ └── minimist@0.0.8 
      ├─┬ npmlog@1.2.1
      │ └─┬ are-we-there-yet@1.0.4
      │   └─┬ readable-stream@1.1.13 
      │     ├── inherits@2.0.1 
      │     ├── isarray@0.0.1 
      │     └── string_decoder@0.10.31 
      ├─┬ request@2.64.0
      │ ├─┬ bl@1.0.0 
      │ │ └─┬ readable-stream@2.0.2
      │ │   ├── inherits@2.0.1 
      │ │   ├── isarray@0.0.1 
      │ │   └── string_decoder@0.10.31 
      │ ├── caseless@0.11.0 
      │ ├─┬ combined-stream@1.0.5 
      │ │ └── delayed-stream@1.0.0 
      │ ├── extend@3.0.0 
      │ ├── forever-agent@0.6.1 
      │ ├── form-data@1.0.0-rc3 
      │ ├─┬ har-validator@1.8.0
      │ │ ├── bluebird@2.10.2 
      │ │ ├─┬ chalk@1.1.1 
      │ │ │ ├── ansi-styles@2.1.0 
      │ │ │ ├─┬ has-ansi@2.0.0 
      │ │ │ │ └── ansi-regex@2.0.0 
      │ │ │ ├─┬ strip-ansi@3.0.0 
      │ │ │ │ └── ansi-regex@2.0.0 
      │ │ │ └── supports-color@2.0.0 
      │ │ ├─┬ commander@2.8.1
      │ │ │ └── graceful-readlink@1.0.1 
      │ │ └─┬ is-my-json-valid@2.12.2
      │ │   ├── generate-function@2.0.0 
      │ │   ├─┬ generate-object-property@1.2.0 
      │ │   │ └── is-property@1.0.2 
      │ │   └── jsonpointer@2.0.0 
      │ ├─┬ hawk@3.1.0
      │ │ ├── cryptiles@2.0.5 
      │ │ ├── hoek@2.16.3 
      │ │ └── sntp@1.0.9 
      │ ├─┬ http-signature@0.11.0
      │ │ └── assert-plus@0.1.5 
      │ ├── isstream@0.1.2 
      │ ├── json-stringify-safe@5.0.1 
      │ └── oauth-sign@0.8.0 
      ├─┬ rimraf@2.4.3
      │ └─┬ glob@5.0.15
      │   ├─┬ inflight@1.0.4 
      │   │ └── wrappy@1.0.1 
      │   ├── inherits@2.0.1 
      │   ├─┬ minimatch@3.0.0
      │   │ └─┬ brace-expansion@1.1.1
      │   │   └── concat-map@0.0.1 
      │   └─┬ once@1.3.2
      │     └── wrappy@1.0.1 
      ├─┬ tar@2.2.1
      │ └── inherits@2.0.1 
      └─┬ tar-pack@2.0.0
        ├─┬ fstream@0.1.31
        │ └── inherits@2.0.1 
        ├─┬ fstream-ignore@0.0.7
        │ └── inherits@2.0.1 
        ├─┬ readable-stream@1.0.33
        │ ├── inherits@2.0.1 
        │ ├── isarray@0.0.1 
        │ └── string_decoder@0.10.31 
        └─┬ tar@0.1.20
          └── inherits@2.0.1 

pi@raspberrypi:~/Downloads/node-clouddrive-master $ 

Initiated sync and it's still going after 10 minutes... surprisingly, the version still shows the same as before the uninstall, and the sqlite3 command doesn't exist. I didn't do the init this time.

pi@raspberrypi:~/Downloads $ which sqlite3
pi@raspberrypi:~/Downloads $ clouddrive -V
0.2.2
pi@raspberrypi:~/Downloads $ 

alex-phillips commented 8 years ago

I haven't increased the master branch's version number yet; however, you can tell it is a newer version because the help screen output is slightly different (quiet and verbose are available options).

I notice you're running on a raspberry pi. What version of node are you running? (node -v). Also, syncing will take longer than normal on a raspberry pi due to its limited resources; however, it still shouldn't take that long.

Another issue might be that SQLite3 isn't installed. I'll need to add a check for that in the program, but in the meantime, see if sudo apt-get install sqlite3 fixes the issue. This should also make the sqlite3 command available for you to take a look at the database file (it should be located at ~/.cache/clouddrive-node/EMAIL.db).

One last issue, which I'm not sure I've solved yet: the cache directory sometimes isn't present on machines, and I need to change the program to create it if it doesn't exist. So make sure that ~/.cache actually exists.

kpabba commented 8 years ago

Happy New Year Alex, hope the new year brings some luck to me on this :)

I haven't increased the master branch's version number yet; however, you can tell it is a newer version because the help screen output is slightly different (quiet and verbose are available options).

Okay, I still don't see quiet and verbose in the help screen after the new installation (the new installation can be seen in my previous post). Below is the help page I get now.

pi@raspberrypi:~ $ clouddrive -h

  Usage: clouddrive [options] [command]

  Commands:

    cat [options] <path>             Output contents of remote file to STDOUT
    clearcache                       Clear the local cache
    config [options] [key] [value]   Read, write, and remove config options
    download [options] <src> [dest]  Download remote file or folder to specified local path
    du [options] [path]              Display the disk usage (recursively) for the specified node
    find [options] [query]           Find nodes that match a name (partials acceptable)
    info                             Show Cloud Drive account info
    init                             Initialize and authorize with Amazon Cloud Drive
    link [options] [path]            Generate a temporary, pre-authenticated download link
    ls [options] [path]              List all remote nodes belonging to a specified node
    metadata [options] [path]        Retrieve metadata of a node by its path
    mkdir <path>                     Create a remote directory path (recursively)
    mv [options] <path> [new_path]   Move a remote node to a new directory
    pending [options]                List the nodes that have a status of "PENDING"
    quota                            Show Cloud Drive account quota
    rename [options] <path> <name>   Rename a remote node
    resolve <id>                     Return the remote path of a node by its ID
    restore [options] <path>         Restore a remote node from the trash
    rm [options] <path>              Move a remote Node to the trash
    sync                             Sync the local cache with Amazon Cloud Drive
    trash [options]                  List all nodes in the trash
    tree [options] [path]            Print directory tree of the given node
    upload [options] <src> [dest]    Upload local file or folder to remote directory
    usage                            Show Cloud Drive account usage
    *                              

  Options:

    -h, --help     output usage information
    -V, --version  output the version number

I notice you're running on a raspberry pi. What version of node are you running? (node -v). Also, syncing will take longer than normal on a raspberry pi due to its limited resources; however, it still shouldn't take that long.

I'm using a pi2; yes, it's a bit slow, but as you mentioned, I believe it shouldn't be so slow that the sync doesn't finish in 48 hrs :) node and npm details are below.

pi@raspberrypi:~ $ npm version
{ npm: '3.3.12',
  ares: '1.10.1-DEV',
  http_parser: '2.6.0',
  icu: '56.1',
  modules: '47',
  node: '5.2.0',
  openssl: '1.0.2e',
  uv: '1.7.5',
  v8: '4.6.85.31',
  zlib: '1.2.8' }
pi@raspberrypi:~ $ node -v
v5.2.0

Another issue might be that SQLite3 isn't installed. I'll need to add a check for that in the program, but in the meantime, see if sudo apt-get install sqlite3 fixes the issue. This should also make the sqlite3 command available for you to take a look at the database file (it should be located at ~/.cache/clouddrive-node/EMAIL.db).

Since the sqlite3 command was not working, I installed it as suggested, and below is the information from the .db file.

pi@raspberrypi:~/.cache/clouddrive-node $ pwd
/home/pi/.cache/clouddrive-node
pi@raspberrypi:~/.cache/clouddrive-node $ ls
config.json  kranthi.pabba@gmail.com.db  kranthi.pabba@gmail.com.db-journal
pi@raspberrypi:~/.cache/clouddrive-node $ sqlite3
SQLite version 3.8.7.1 2014-10-29 13:59:56
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open /home/pi/.cache/clouddrive-node/kranthi.pabba@gmail.com.db
sqlite> select count(*) from nodes;
132057
sqlite> 

So, what should my next steps be? :)

alex-phillips commented 8 years ago

@kpabba Sorry, I was incorrect in telling you to use the master. The current branch I'm working on is the feature/es6 branch. So use: wget https://github.com/alex-phillips/node-clouddrive/archive/feature/es6.zip instead.

Uninstall and then reinstall this version and see if things get better!
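That is, roughly the same steps as before (the archive unpacks to node-clouddrive-feature-es6):

npm uninstall -g clouddrive
wget https://github.com/alex-phillips/node-clouddrive/archive/feature/es6.zip
unzip es6.zip
cd node-clouddrive-feature-es6
sudo npm install --unsafe-perm -g .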

kpabba commented 8 years ago

No problem, thanks for the response. Still an error, though.

pi@raspberrypi:~/Downloads/node-clouddrive-feature-es6 $ clouddrive sync
Syncing... ⠸
buffer.js:401
    throw new Error('toString failed');
    ^

Error: toString failed
    at Buffer.toString (buffer.js:401:11)
    at BufferList.toString (/usr/local/lib/node_modules/clouddrive/node_modules/bl/bl.js:155:33)
    at Request.<anonymous> (/usr/local/lib/node_modules/clouddrive/node_modules/request/request.js:1013:32)
    at emitOne (events.js:83:20)
    at Request.emit (events.js:170:7)
    at Gunzip.<anonymous> (/usr/local/lib/node_modules/clouddrive/node_modules/request/request.js:962:12)
    at emitNone (events.js:73:20)
    at Gunzip.emit (events.js:167:7)
    at endReadableNT (_stream_readable.js:906:12)
    at nextTickCallbackWith2Args (node.js:455:9)
    at process._tickCallback (node.js:369:17)

alex-phillips commented 8 years ago

@kpabba I've JUST fixed that buffer issue. Pushing up a fix to the feature/es6 branch now. Pull down the updated version in about 10 minutes and try again.

kpabba commented 8 years ago

Ok, did that. It's still running after an hour.

clouddrive sync --verbose=2
Syncing... !

alex-phillips commented 8 years ago

@kpabba I think I've found the issue. Working to resolve it now. Will let you know when to give it another shot. Thanks for sticking with me on this issue!

alex-phillips commented 8 years ago

@kpabba Ok, I BELIEVE the issue is that you have a node that does not have a name. This should not be possible, as the API documentation requires every node to have a name.

I've added better error handling to tell us if this is the problem. Update your checked out branch and run sync (you might want to run clearcache to start over) and let me know what the error message is that you get.

kpabba commented 8 years ago

Hi, I uninstalled, reinstalled from the es6 branch, and cleared the cache, but it's still the same issue. It just says Syncing.

kpabba commented 8 years ago

Oh, it did finish after an hour this time :)

I'm uploading the files now. One last question: once I'm done with the initial load, what would be the easy way to do the deltas? I don't see any option like "sync" for data. I'll only need to upload any new files added to a directory on my pi to Amazon, not the other way around.

kpabba commented 8 years ago

Another issue I noticed while doing the 1st upload. The node PhotosLibrary didn't exist on ACD before I issued the below command.

pi@raspberrypi:~/.cache/clouddrive-node $ clouddrive upload /media/dir300/UseMe/mirror/PhotosLibrary/ UseMe/mirror/
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/._database' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary'
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Apple TV Photo Database' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache'
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Photo Database' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache'
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.data' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary'
>>>>>Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.iPhoto': Nodes existing with the same MD5 at other locations: in-5IzRjS-ekLUFHzYVRdw
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library6.iPhoto' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary'
..............

From the above, not sure why it errors for Library.iPhoto saying a node with the same MD5 already exists on ACD. The actual content of the PhotosLibrary node on my pi is as below. I checked ACD and that node doesn't exist there.

pi@raspberrypi:/media/dir1500/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary $ ls -ltr
total 1099
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:31 Apple TV Photo Cache
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:31 Attachments
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:31 Masks
-rwxrwxrwx 1 pkranthi pkranthi      12 Dec 26 12:32 Library6.iPhoto
-rwxrwxrwx 1 pkranthi pkranthi      22 Dec 26 12:32 Library.iPhoto
-rwxrwxrwx 1 pkranthi pkranthi      22 Dec 26 12:32 Library.data
-rwxrwxrwx 1 pkranthi pkranthi   12288 Dec 26 12:32 iPhotoMain.db
-rwxrwxrwx 1 pkranthi pkranthi       1 Dec 26 12:32 iPhotoLock.data
-rwxrwxrwx 1 pkranthi pkranthi    4096 Dec 26 12:32 iPhotoAux.db
drwxrwxrwx 1 pkranthi pkranthi   73728 Dec 26 12:32 iPhoto Selection
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:42 Plugins
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:44 Previews
-rwxrwxrwx 1 pkranthi pkranthi     341 Dec 26 12:44 ProjectDBVersion.plist
-rwxrwxrwx 1 pkranthi pkranthi 1015808 Dec 26 12:44 Projects.db
-rwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:44 repairOnLaunch
drwxrwxrwx 1 pkranthi pkranthi    4096 Dec 26 12:44 private
drwxrwxrwx 1 pkranthi pkranthi       0 Dec 26 12:49 resources
drwxrwxrwx 1 pkranthi pkranthi       0 Jan  1 19:43 Masters
drwxrwxrwx 1 pkranthi pkranthi    4096 Jan  1 19:44 Thumbnails
drwxrwxrwx 1 pkranthi pkranthi    8192 Jan  3 16:44 database

Thanks

alex-phillips commented 8 years ago

It may have just taken more time due to the pi's limited resources. Glad it finished though!

The initial sync is always the longest. Afterwards, any time you run the sync command, it should only get the deltas since the last sync. The program stores a "checkpoint" for your account locally and uses that in the sync request. For example, if it just finished syncing and you ran clouddrive sync again, it would be done almost instantly since there were no changes to fetch.
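For example:

clouddrive sync    # first run: fetches every node, slow
clouddrive sync    # run again right after: only fetches changes since the checkpoint, nearly instant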

The error you saw in the above post is because, by default, Amazon throws an error if a file with the same MD5 exists. The error is telling you that the file /media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.iPhoto already exists with the ID in-5IzRjS-ekLUFHzYVRdw. (You can use clouddrive resolve in-5IzRjS-ekLUFHzYVRdw to find out where it actually is.)

If you want to override this, there is a config setting I've added to allow dupes: clouddrive config upload.duplicates true.
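For example, to allow the dupes and see where the existing copy lives:

clouddrive config upload.duplicates true
clouddrive resolve in-5IzRjS-ekLUFHzYVRdw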

Let me know if that works for you.

kpabba commented 8 years ago

Thanks.

For some reason upload.duplicates true isn't working; Library.iPhoto still complains.

pi@raspberrypi:~ $ clouddrive upload /media/dir300/UseMe/mirror/PhotosLibrary/ UseMe/mirror/
Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/._database': File 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/._database' exists and is identical to local copy
Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Apple TV Photo Database': File 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Apple TV Photo Database' exists and is identical to local copy
Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Photo Database': File 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Apple TV Photo Cache/Photo Database' exists and is identical to local copy
*****Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.data': File 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.data' exists and is identical to local copy
Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.iPhoto': Nodes existing with the same MD5 at other locations: zr-Go_WgRqmIz-ntNpg5FA*****
Failed to upload file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library6.iPhoto': File 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library6.iPhoto' exists and is identical to local copy
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/01/22/20130122-223056/IMG_0421.JPG' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/01/22/20130122-223056'
Uploading '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/01/22/20130122-223056/IMG_0423.JPG'
100%[====================] 1.38MB/s eta 0.0s (3638873 / 3637844 bytes)
^C
pi@raspberrypi:~ $ clouddrive config
email                = xxxxx@xxxx.com
client-id            = 
client-secret        = 
json.pretty          = false
upload.duplicates    = true
upload.retryAttempts = 1
database.driver      = sqlite
database.host        = 127.0.0.1
database.database    = clouddrive
database.username    = xxxx
database.password    = 
show.trash           = true
show.pending         = true
pi@raspberrypi:~ $ clouddrive resolve zr-Go_WgRqmIz-ntNpg5FA
UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Library.data
pi@raspberrypi:~ $ 

alex-phillips commented 8 years ago

@kpabba Update now. During the refactor to ES6 standards, it seems that option got lost in the mix. It should be re-added and working now.

kpabba commented 8 years ago

Okay, this time it went a little further but eventually failed with the below.

.....................
Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/07/29/20130729-092322/IMG_1612.JPG' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/07/29/20130729-092322'
**connect ECONNREFUSED 54.210.132.192:443**
pi@raspberrypi:~/Downloads/node-clouddrive-feature-es6 $ 

alex-phillips commented 8 years ago

I got that error once today as well; that was a first. I think it's an error on Amazon's end, since it didn't happen again when I retried.

kpabba commented 8 years ago

Hi, tried the upload again; this time it failed with the below error.

Successfully uploaded file '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/08/07/20130807-194232/DSC00322.JPG' to 'UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/08/07/20130807-194232'
Uploading '/media/dir300/UseMe/mirror/PhotosLibrary/Photos Library 9.55.20 PM.photoslibrary/Masters/2013/08/07/20130807-194232/DSC00323.JPG'

/usr/local/lib/node_modules/clouddrive/lib/Account.js:470
        return callback(err);
               ^

TypeError: callback is not a function
    at Request._callback (/usr/local/lib/node_modules/clouddrive/lib/Account.js:470:16)
    at self.callback (/usr/local/lib/node_modules/clouddrive/node_modules/request/request.js:198:22)
    at emitOne (events.js:78:13)
    at Request.emit (events.js:170:7)
    at Gunzip.<anonymous> (/usr/local/lib/node_modules/clouddrive/node_modules/request/request.js:965:12)
    at emitOne (events.js:83:20)
    at Gunzip.emit (events.js:170:7)
    at Zlib._handle.onerror (zlib.js:367:10)
pi@raspberrypi:~/Downloads/node-clouddrive-feature-es6 $ 

alex-phillips commented 8 years ago

@kpabba Ok, that callback error should be fixed now. Hopefully this is the last error we run into for a while :-P.

I do apologize for all these issues, but I guess that's why this isn't a release yet. I do appreciate you helping me work all these bugs out!

alex-phillips commented 8 years ago

@kpabba everything working for you?

kpabba commented 8 years ago

It failed again with a duplicate file name issue and I moved on to some other issue on my pi. Will let you know once I come back and try this again.

Thanks for all your help on this Alex.

alex-phillips commented 8 years ago

@kpabba any update on this? Is this still an issue?