farmOS / field-kit

A modular, offline-first companion app to farmOS.
https://farmOS.org
GNU General Public License v3.0

Log API: Syncing #384

Closed jgaehring closed 4 years ago

jgaehring commented 4 years ago

I think the main objective here is to generalize our existing sync actions so they're no longer specific to the "My Logs" module. The biggest part of that will be accepting parameters for specifically which logs to sync, rather than using the filters object that modules currently provide. I think those parameters can be modeled on the filters object, which is itself closely modeled on the farmOS.js API. In this sense, the issue is closely related to #367.

This issue also entails eliminating the syncing metadata from log objects, namely the isReadyToSync and wasPushedToServer properties. I think those make a lot of assumptions about the UX which are specific to "My Logs" and so that state should be handled separately by "My Logs" itself, rather than loading it onto the global state.
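To illustrate the idea of pulling that state out of the log objects, here's a minimal sketch of how "My Logs" could track sync readiness in its own module-local state, keyed by localID. All names here (readyToSync, markReadyToSync) are illustrative, not the actual field-kit API:

```javascript
// Hypothetical module-local state for "My Logs": sync flags live here,
// keyed by localID, instead of as properties on each log object.
const myLogsState = {
  readyToSync: {},
};

// Mark a log as ready to sync without touching the log object itself.
function markReadyToSync(state, localID) {
  state.readyToSync[localID] = true;
}

function isReadyToSync(state, localID) {
  return Boolean(state.readyToSync[localID]);
}

markReadyToSync(myLogsState, 42);
isReadyToSync(myLogsState, 42); // true
isReadyToSync(myLogsState, 99); // false
```

This keeps the UX assumptions confined to the module that makes them, while the global store only holds the logs themselves.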

As I develop this API, I want to be mindful of how this work can serve as a model for the assets and areas APIs, though I should be wary of premature abstractions.

jgaehring commented 4 years ago

> This issue also entails eliminating the syncing metadata from log objects, namely the isReadyToSync and wasPushedToServer properties.

I'll actually want to replace wasPushedToServer with lastSync. That will also take the place of the global syncDate we're currently storing in localStorage for all logs. It's some metadata I think we have to hold on to, unfortunately, for how crucial it is to the merging process, but it will provide us with much greater granularity when it comes to reconciling changes. It will free us to sync logs individually (#210) and also tell us whether each log's property has been synced and when, rather than merely if the whole log wasPushedToServer. It will also help us resolve #349, as I described there.
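As a rough sketch of the granularity lastSync could buy us, suppose each log carries a lastSync timestamp alongside per-property change timestamps (the property names changed and lastSync are assumptions for illustration, not the settled schema):

```javascript
// Hypothetical: a log records when it was last synced and when each of
// its properties last changed (unix seconds).
const log = {
  localID: 1,
  lastSync: 1000,
  changed: { name: 900, quantity: 1200 },
};

// A log needs sending if any property changed after the last sync.
function needsSending(log) {
  return Object.values(log.changed).some(ts => ts > log.lastSync);
}

needsSending(log); // true: quantity (1200) changed after lastSync (1000)
```

The same comparison, applied property by property against the server's copy, is what would let merging reconcile individual fields rather than whole logs.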

jgaehring commented 4 years ago

I'm starting to think about the component procedures of syncing and caching as a chain of stepped dependencies, where you have to step through each lower order procedure in order to perform the next order procedure:

  1. filterStoreLogs: Removes logs that don't match the criteria from the store.
  2. loadCachedLogs: Searches IDB for all logs that match criteria; adds them to the store; depends on 1 so any subsequent operations are not performed on the wrong logs.
  3. getRemoteLogs: Fetches logs from the server that match criteria; adds them to the store; depends on 2 so that local logs can be matched with their corresponding remote copies and updated without creating a duplicate.
  4. sendRemoteLogs: Sends updates of logs to the server; depends on 3 so that updates that occurred on the server are merged and not overwritten.

So a call to getRemoteLogs has to be preceded by filterStoreLogs and loadCachedLogs; a call to sendRemoteLogs has to be preceded by calls to all three of the lower-order procedures. Another way to look at it: each new level is a composition of the lower-level procedures plus some new procedure.
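The dependency chain above can be sketched as a simple promise composition, where the highest-order procedure awaits each lower-order one in turn (the bodies are stubs; only the ordering is the point here):

```javascript
// Stubbed procedures that record the order they run in.
const calls = [];
const filterStoreLogs = async () => calls.push('filter');
const loadCachedLogs = async () => calls.push('load');
const getRemoteLogs = async () => calls.push('get');
const sendRemoteLogs = async () => calls.push('send');

// The top of the chain steps through every lower-order procedure first.
async function syncLogs() {
  await filterStoreLogs();
  await loadCachedLogs();
  await getRemoteLogs();
  await sendRemoteLogs();
}
```

Running syncLogs() would execute filter → load → get → send, each step guaranteeing the preconditions of the next.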

In a similar way, the parameters required for each higher order procedure must satisfy the parameters of all lower order procedures. I'm still trying to flesh out the consequences of this last observation, because I'm not sure what it means when a higher order procedure adds to the parameters. For instance, the loadCachedLogs procedure can't really accept a list of localIDs, because before the logs are loaded in memory we can't know what those localIDs are; however, getRemoteLogs can have such foreknowledge and so can add a list of localIDs on top of the filters that loadCachedLogs needs. But then does that open us up to inadvertently creating duplicates? I don't think so, because if any of the logs included in localIDs meet the filters criteria, they'll have been loaded into the store, and if they don't, then they won't be cached anyways, right? :thinking:
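To make the parameter layering concrete, here's a hypothetical matcher where localIDs act as an additive criterion on top of filters; matchesQuery and its payload shape are illustrative assumptions, not the actual field-kit implementation:

```javascript
// A log matches if it satisfies every filter, OR if it's explicitly
// named in localIDs — so higher-order procedures can widen the query
// without invalidating the lower-order filters.
function matchesQuery(log, { filters = {}, localIDs = [] }) {
  const matchesFilters = Object.entries(filters)
    .every(([key, value]) => log[key] === value);
  return matchesFilters || localIDs.includes(log.localID);
}

const query = { filters: { type: 'activity' }, localIDs: [7] };

matchesQuery({ localID: 3, type: 'activity' }, query); // true, by filters
matchesQuery({ localID: 7, type: 'harvest' }, query);  // true, by localID
matchesQuery({ localID: 5, type: 'harvest' }, query);  // false
```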

jgaehring commented 4 years ago

Hmm, rethinking the above list. I feel like maybe filterStoreLogs doesn't belong in the dependency chain. I don't see why we'd need to clear other logs from the store in order to get or send remote logs; it's really just a matter of tidying up after another module has left logs in the store so you can start fresh, w/o throwing out the ones you still might need.

I'm also not sure if loadCachedLogs really needs to be called when sending logs to the server. I'm trying to think whether it could potentially cause conflicts, create duplicates, or lose data if a module called syncLogs right off without ever calling loadLogs, assuming syncLogs doesn't include the loadCachedLogs procedure. I guess it wouldn't be a problem if a module only included the localIDs parameter and not the filters parameter, but if the module was calling syncLogs as a way to also retrieve other logs from the server, which it wasn't sending, then those logs could potentially still lie in cache, without having been loaded into the store, and that could create duplicates. I guess we could conditionally call loadCachedLogs from syncLogs only if the filters parameter was present, but that's neither here nor there.

So I think I'm zeroing in on 3 possible compositions of lower level procedures:

    loadLogs === filterStoreLogs + loadCachedLogs
    getLogs  === loadCachedLogs + getRemoteLogs
    syncLogs === loadCachedLogs + getRemoteLogs + sendRemoteLogs

The filterStoreLogs and loadCachedLogs procedures could be public Vuex actions, available to any module that knew about them, because they don't really require the other procedures, but the getRemoteLogs and sendRemoteLogs procedures would be private functions only available to FK core.

jgaehring commented 4 years ago

> So I think I'm zeroing in on 3 possible compositions of lower level procedures

Here's what those Vuex actions look like inside farmModule:

    loadLogs({ commit, dispatch }, { filters, localIDs }) {
      const query = createQuery(filters, localIDs);
      // Clear non-matching logs from the store, then hydrate it with any
      // matching logs cached in IDB.
      commit('filterLogs', query);
      return dispatch('loadCachedLogs', query);
    },
    getLogs(context, payload) {
      // Cached logs must be loaded first so remote copies can be merged
      // with their local counterparts instead of duplicating them.
      return context.dispatch('loadCachedLogs', payload)
        .then(() => getRemoteLogs(context, payload))
        .then(() => { localStorage.setItem('syncDate', Math.floor(Date.now() / 1000)); });
    },
    syncLogs(context, payload) {
      // Load from cache and pull remote updates before pushing local
      // changes, so server-side edits are merged rather than overwritten.
      return context.dispatch('loadCachedLogs', payload)
        .then(() => getRemoteLogs(context, payload))
        .then(() => sendRemoteLogs(context, payload))
        .then(() => { localStorage.setItem('syncDate', Math.floor(Date.now() / 1000)); });
    },
jgaehring commented 4 years ago

I think this is pretty much done, with the actions mentioned in the last comment and the subsequent reorganization of the payload into filter and pass properties. Just need to wrap up documenting everything and updating tests and I think we're good to go for both this and #385!