Front-end
Goal: Create a mostly "offline" experience by aggregating project modifications and only saving these to the database at specific times, firing off all operations as an ordered batch update
Original Solution:
Implement java-side object caching (https://github.com/mozilla/webmaker-android/pull/2414) so that edits can be made to pages and elements without needing to synchronize with the database every time a webview is swapped in or out.
Rather than syncing with the database, objects that need to be shared between multiple webviews (effectively: different React applications running in different "tabs") are cached to Java when one view swaps out, and retrieved from cache when another view swaps in. This bypasses the need for an external database until the user leaves the page view, at which point all pending modifications for that page are synchronised with the database using a single HTTP call that contains a batch of sequential updates. This touches quite a few aspects of the code, documented in https://github.com/mozilla/webmaker-core/wiki/Mostly-offline-java-caching-diagrams but summarized here:
- updates to page.jsx to cache to-be-edited elements, as well as link elements for which "set destination" was clicked
- updates to element.jsx to retrieve to-edit elements from cache, and recache them after edits are complete, when entering tinker mode, or when triggering a "set destination" action for link elements
- updates to tinker.jsx to retrieve the in-edit element from cache and recache it once edits are complete
- updates to project.jsx to retrieve link element information from cache, commit link destination information once the user picks a page as destination, and recache when that choice is committed
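The hand-off these updates implement can be sketched as follows. This is a minimal illustration, not the actual webmaker-android bridge: the cache here is a plain in-memory map standing in for the Java-side store, and the function names are invented for the example.

```javascript
// Stand-in for the Java-side object cache exposed to each webview.
// In the real app this lives in Java and survives webview swaps;
// here a Map is enough to show the pattern.
const javaCache = new Map();

function cacheElement(key, element) {
  // Serialize on the way out: each webview is a separate React app,
  // so only plain data can cross the boundary, never live references.
  javaCache.set(key, JSON.stringify(element));
}

function retrieveElement(key, fallback) {
  const cached = javaCache.get(key);
  return cached === undefined ? fallback : JSON.parse(cached);
}

// page.jsx caches the element about to be edited before swapping out...
cacheElement('editing-element', { type: 'link', x: 10, y: 20, href: null });

// ...and element.jsx retrieves it from cache when it swaps in,
// without any HTTP round trip to the database.
const element = retrieveElement('editing-element', null);
```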
With this in place, modifications only need to be synchronized with the database when a page view is exited. This is still an incomplete solution: the modifications necessary to make this synchronisation occur only when a project is considered "done" (or needs to be saved by the user) have not yet been put in place. That work has been scheduled as follow-up in https://github.com/mozilla/webmaker-core/issues/452, with https://github.com/mozilla/webmaker-core/issues/451 as a clean-up follow-up.
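The pending modifications travel as one ordered batch when the page view is exited. The payload below is an invented illustration of the idea, not the documented wire format; in particular, the "$0.id" placeholder syntax is a made-up example of a value that must be resolved from the result of an earlier action in the same batch.

```javascript
// Hypothetical batch payload: one HTTP call, actions applied in order.
// Action 1 refers to the page created by action 0 via a placeholder,
// because the real id does not exist until the batch is processed.
const batch = [
  { type: 'pages.create',    data: { projectId: 12, x: 0, y: 0 } },
  { type: 'elements.create', data: { pageId: '$0.id', type: 'text' } },
  { type: 'elements.update', data: { id: '$1.id', attributes: { text: 'hi' } } }
];

// A single request replaces what used to be three separate sync calls,
// e.g. (shown for shape only, endpoint name is illustrative):
// fetch('/api/bulk', { method: 'POST', body: JSON.stringify({ actions: batch }) });
```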
Problems encountered:
- There were a lot of view changes, almost all of which were only discovered as the code was updated to take advantage of Java-side caching. We have no architecture or application logic documentation that would have allowed a quick inventory of how many parts of the code were involved, or how much work would be needed in each place.
- The changes span three repositories, which made getting up and running to test code changes a challenge, as did keeping all three in sync as work progressed independently on each repository.
- Fully synced code is relatively clean, and fully offline code should be relatively clean, but working with a progressively more offline combination of the two made it hard to write code that worked well enough to test without constantly revealing new places where things were now broken. We ran into this a lot.
- Working on a branch is like going on vacation: when you're done, someone may have remodeled your house. Or, if your house is on GitHub street, landed many changes that necessitate rebase upon rebase, slowing down the process. Sometimes this is unavoidable, but then everyone should be okay with what is effectively a code freeze, or at least a chill if not a full freeze.
- We have no automated testing in place to take some of the burden off our hands and move it to CI, or even local testing.
Retrospective:
- A preliminary code audit to determine what would need to change, with team feedback on that list to work out all the fine details, would have helped.
- So would architecture and application logic documentation (as boring as diagrams are, they are extremely valuable for seeing where things happen, and which other parts of the code are involved).
- We should have noted on day 3 that this was bigger than expected, and called a mofo-eng council meeting to discuss what to do.
API
Goal: Support front-end requirements to limit network requests to a minimum
Original Solution:
Given an array of actions, reduce the array into an array of results (http://git.io/vOX0w && http://bit.ly/1M5h5AO):
1. iterate over the keys of each action to determine if values need to be resolved using values from previously processed actions (http://git.io/vOXEZ && http://git.io/vOXE1)
2. if an error occurs, roll back the transaction and return details of the error
3. select the appropriate stored query to execute for the current action (http://git.io/vOXuZ)
4. if the action depends on a resource that is not part of this transaction, use the lookup values to find it and validate permissions (http://git.io/vOXgK)
5. push the raw query results and formatted results onto the reduced result array, and return it to the reduce function for the next iteration (http://git.io/vOX2c)
6. once finished reducing the actions array into a results array, commit the transaction (http://git.io/vOX2y)
7. analyze the actions performed and create a series of cache invalidations appropriate for those actions

(I tried to do the nested list justice in MD, but I think I failed. This further illustrates some of the problems outlined above) - Simon
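The steps above can be sketched as a single pass over the actions array. Everything here is illustrative: the query runner, the "$n.field" placeholder syntax, and the transaction object are invented stand-ins for the real postgres code linked above, and the cache invalidation step is omitted.

```javascript
// Resolve "$n.field" placeholders against the results of earlier actions.
function resolvePlaceholders(data, results) {
  const resolved = {};
  for (const key of Object.keys(data)) {
    const value = data[key];
    const match = typeof value === 'string' && value.match(/^\$(\d+)\.(\w+)$/);
    resolved[key] = match ? results[Number(match[1])][match[2]] : value;
  }
  return resolved;
}

// Process an ordered batch of actions inside one transaction:
// any failure rolls the whole batch back; success commits once at the end.
async function runBulk(actions, transaction, queries) {
  const results = [];
  try {
    for (const action of actions) {
      // Select the stored query for this action type.
      const query = queries[action.type];
      if (!query) throw new Error(`unknown action: ${action.type}`);
      // Resolve values that depend on previously processed actions.
      const data = resolvePlaceholders(action.data, results);
      // Push the result so later actions in the batch can refer to it.
      results.push(await query(transaction, data));
    }
  } catch (err) {
    // Roll back the transaction and surface the error details.
    await transaction.rollback();
    throw err;
  }
  await transaction.commit();
  return results;
}
```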
Retrospective:
- Refactor the API to use a document-based data store where we can merge new versions of projects fairly effortlessly (versus a relational-DB style of tables linked via foreign keys)
- Refactor the code:
  - Postgres code could be exposed to outside modules, allowing us to move logic out of the lib/postgres.js file and into more specific modules
  - refactor the bulk endpoint code to be less reliant on scoped variables, to reduce complexity
- Create documentation for the possible actions that can be sent to the bulk API, as well as the placeholder format (plus the result set properties available)
- Land and user-test. Where are the gaps, even after we tested it a lot?
- Follow up to turn the entire project creation flow into an offline-first experience, except where network access is necessarily required (such as getting images from the web, etc.)
- Evaluate what part of our architecture made this process so intense, and whether we need to spend any time (and if so, how much) on rearchitecting the application logic and codebase
What have we learned already
We have a complex application, but no book that covers why it does what it does, where it does that, and what is critical to keep in mind when changing code.
It's so very, very tempting to just keep banging on the code until it works. But that doesn't help the team understand what's going on, and why things take so long.