Currently the import/export is a full database sync (i.e. it converts the entire db to a .json file and pushes it to the server, or pulls a .json file from the server and completely replaces the local db). This is slow and unnecessary.
It was done this way because:
a) SQLite rowids can change (http://www.sqlabs.com/blog/?p=51)
b) We'd need to explicitly track deletions, so that they can be removed on the other end
At some point, we should revisit this decision, as it would be far more efficient to replicate only added/changed/deleted records rather than everything every time.
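One way to address point b) and enable delta replication would be a dirty flag plus a tombstone table maintained by triggers, so a sync pass only has to ship the dirty rows and the recorded deletions. A rough sketch of the idea (the table and column names — notes, tombstones, dirty — are hypothetical, not our actual schema):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE notes(
        id    INTEGER PRIMARY KEY AUTOINCREMENT,
        body  TEXT,
        dirty INTEGER NOT NULL DEFAULT 1   -- 1 = changed since last sync
    );
    -- Tombstones record deletions so the other end can replay them.
    CREATE TABLE tombstones(
        tbl    TEXT NOT NULL,
        row_id INTEGER NOT NULL
    );
    -- Only fires on body changes, so clearing the flag after a sync
    -- doesn't immediately re-dirty the row.
    CREATE TRIGGER notes_mark_dirty AFTER UPDATE OF body ON notes BEGIN
        UPDATE notes SET dirty = 1 WHERE id = new.id;
    END;
    CREATE TRIGGER notes_tombstone AFTER DELETE ON notes BEGIN
        INSERT INTO tombstones(tbl, row_id) VALUES ('notes', old.id);
    END;
""")

db.execute("INSERT INTO notes(body) VALUES ('a')")
db.execute("INSERT INTO notes(body) VALUES ('b')")
db.execute("UPDATE notes SET dirty = 0")                  # pretend we just synced
db.execute("UPDATE notes SET body = 'a2' WHERE id = 1")   # local edit
db.execute("DELETE FROM notes WHERE id = 2")              # local delete

# The next sync only needs these two result sets, not the whole db:
changed = db.execute("SELECT id, body FROM notes WHERE dirty = 1").fetchall()
deleted = db.execute("SELECT tbl, row_id FROM tombstones").fetchall()
```

After a successful push we'd clear the dirty flags and the tombstone rows; conflict resolution is a separate problem this sketch doesn't touch.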
It would also mean that we could potentially have a background sync (in a webworker) that detects when a wifi connection is available (if that's actually possible), and automatically synchronise any pending changes, a la Evernote (potentially using websockets?).
We would need to redefine our table keys as INTEGER PRIMARY KEY AUTOINCREMENT to prevent the values from changing.
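The practical difference AUTOINCREMENT makes is that a deleted row's id is never handed out again, so ids stay valid as sync keys. A small demonstration (toy table names, not our schema): without AUTOINCREMENT, SQLite picks max(rowid)+1 for the next insert, so deleting the highest row lets its id be reused; with AUTOINCREMENT, the sqlite_sequence counter only ever moves forward.

```python
import sqlite3

db = sqlite3.connect(":memory:")
# "plain" relies on default rowid assignment; "stable" opts into AUTOINCREMENT.
db.execute("CREATE TABLE plain(id INTEGER PRIMARY KEY, v TEXT)")
db.execute("CREATE TABLE stable(id INTEGER PRIMARY KEY AUTOINCREMENT, v TEXT)")

for t in ("plain", "stable"):
    db.executemany(f"INSERT INTO {t}(v) VALUES (?)", [("a",), ("b",), ("c",)])
    db.execute(f"DELETE FROM {t} WHERE id = 3")       # delete the max-rowid row
    db.execute(f"INSERT INTO {t}(v) VALUES ('d')")    # then insert a new row

plain_ids = [r[0] for r in db.execute("SELECT id FROM plain ORDER BY id")]
stable_ids = [r[0] for r in db.execute("SELECT id FROM stable ORDER BY id")]
# plain reuses id 3 for 'd'; stable gives 'd' a fresh id 4
```

SQLite's docs note AUTOINCREMENT adds some insert overhead (it maintains the sqlite_sequence table), which seems a fair price for stable keys.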