I am assuming that _cors_bulk_docs or couch bulkDocs ignores duplicate IDs, or at least docs with duplicate _revs. I'm also wondering if we should filter out the data that isn't assessment results, such as the Assessment forms themselves?
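That assumption is checkable with a minimal probe - not the project's code; PouchDB and the "dup-probe" database name here are illustration only. Couch-style bulk writes with `new_edits: false` store the supplied revisions verbatim, so re-posting identical _id/_rev pairs should be a no-op rather than a duplicate insert:

```coffeescript
# Minimal probe (not the project's code): does bulkDocs dedupe on _id/_rev?
# PouchDB and the "dup-probe" database name are assumptions for illustration.
PouchDB = require 'pouchdb'

db = new PouchDB 'dup-probe'

docs = [
  { _id: 'result-1', _rev: '1-aaa', collection: 'result' }
  { _id: 'result-2', _rev: '1-bbb', collection: 'result' }
]

# new_edits: false stores the supplied revisions verbatim, so posting the
# identical _id/_rev pairs a second time should create nothing new.
db.bulkDocs docs, { new_edits: false }, (err) ->
  throw err if err
  db.bulkDocs docs, { new_edits: false }, (err) ->
    throw err if err
    db.allDocs (err, res) ->
      console.log 'doc count:', res.total_rows   # expect 2, not 4
```

If the count comes back doubled, the dedupe assumption is wrong and the filtering question above becomes more pressing.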
More stuff to do:
[ ] Loop through test/uploads, process the data in new - being aware that there are subdirectories - and move finished files/directories to processed (a rough walking sketch follows the listing below). The files in the "Backup TZ .txt files" folder look like this:
Arusha from Jane
-- 2.txt
-- backup 3.txt
-- backup4.txt
-- etc.
Mbeya from Jennipher
-- dgwahula.txt
-- kkahava.txt
etc.
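Here is the rough sketch of that loop, assuming plain Node fs/path; the new/processed directory names and the processFile hook are stand-ins for whatever the real layout and parser turn out to be:

```coffeescript
# Sketch of the loop above, under assumed directory names; processFile is
# a hypothetical hook for the actual backup-.txt parsing.
fs   = require 'fs'
path = require 'path'

# Recurse through a directory tree, calling onFile for every plain file.
walk = (dir, onFile) ->
  for name in fs.readdirSync dir
    full = path.join dir, name
    if fs.statSync(full).isDirectory()
      walk full, onFile            # e.g. "Arusha from Jane", "Mbeya from Jennipher"
    else
      onFile full

# Move a finished file under destRoot, preserving its subdirectory.
moveToProcessed = (file, srcRoot, destRoot) ->
  dest = path.join destRoot, path.relative(srcRoot, file)
  fs.mkdirSync path.dirname(dest), { recursive: true }
  fs.renameSync file, dest

walk 'test/uploads/new', (file) ->
  # processFile file               # hypothetical: parse the backup .txt
  moveToProcessed file, 'test/uploads/new', 'test/uploads/processed'
```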
Any backups made moving forward will be placed in the other Dropbox folder, "RTI-DataVision PILOT backup files".
My proposed solution is the passing test in test/spec/process.coffee.
The next step is to upload this to a test couch and see how it syncs. Proposed steps:
From Utils.uploadCompressed() -
In my current code I'm not encoding it to Base64 - I probably need to add that; not sure (see the first sketch after these steps).
Then change the destination to this:
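On the Base64 question above, a hedged sketch assuming Node's zlib - whether Utils.uploadCompressed() actually expects gzip-then-Base64 or some other codec is exactly the open question:

```coffeescript
# Assumed codec: gzip, then Base64. If uploadCompressed() wants something
# else, only these two helpers should need to change.
zlib = require 'zlib'

compressToBase64 = (data) ->
  zlib.gzipSync(Buffer.from(JSON.stringify(data))).toString 'base64'

expandFromBase64 = (encoded) ->
  JSON.parse zlib.gunzipSync(Buffer.from(encoded, 'base64')).toString()
```

The destination value itself didn't make it into this note, so the last step is only sketchable with a placeholder. Assuming the staging database from the first sketch and an entirely hypothetical test-couch URL, the "see how it syncs" check could look like:

```coffeescript
# The URL below is a placeholder; the real destination is missing from the
# note. Replicates the local staging db out and reports what was written.
PouchDB = require 'pouchdb'

PouchDB.replicate 'dup-probe', 'http://localhost:5984/tangerine-test'
  .on 'complete', (info) -> console.log 'synced:', info.docs_written, 'docs written'
  .on 'error', (err) -> console.error 'replication failed:', err
```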