sillsdev / LfMerge

Send/Receive for languageforge.org
MIT License

Set up end-to-end testing of Send/Receive #339

Closed: rmunn closed this issue 2 months ago

rmunn commented 4 months ago

Now that LexBox and Language Forge can be run side by side, it should be possible to set up end-to-end testing of Send/Receive scenarios with a real Language Depot deployment, rather than a simulated one. A typical test might go as follows:

  1. Create a LexBox project, and upload a .zip file containing the initial state of the project.
  2. Do an initial clone into Language Forge, verify correct number of entries and so on.
  3. Use the MkFwData tool to get a .fwdata file for that project.
  4. Use liblcm to load that .fwdata file, then add a new entry or edit an existing entry.
  5. Use the SplitFwData tool (that doesn't exist yet) to split the edited .fwdata file up into its component parts.
  6. Create a new Mercurial commit with those changes.
  7. Push that commit to the LexBox repo. (Steps 3-7 simulate making an edit in FieldWorks followed by a Send/Receive).
  8. Use Mongo to make edits in the Language Forge project (faster), or use Playwright to drive the UI to make those changes (slower).
  9. Have Language Forge trigger a Send/Receive, kicking off LfMerge.
  10. Once LfMerge has finished the Send/Receive, verify that LF's Mongo database contains the changes from steps 2-7 (or the merge result of those changes).
  11. In LexBox, verify that LfMerge created a new commit with the correct results.
  12. Download an .fwdata file from LexBox and load it into liblcm.
  13. Use liblcm to verify that the FieldWorks objects received the correct changes from LfMerge (updated field contents, comments, or whatever).
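The steps above could eventually become a single NUnit test. A rough sketch, with the caveat that every helper class here (`LexBoxApi`, `FwDataTools`, `LcmTestHelper`, `LfMongoHelper`, `LanguageForge`) is a hypothetical name for plumbing that doesn't exist yet:

```csharp
// Hypothetical sketch only: none of these helper classes exist yet.
// The names are assumptions about what the E2E plumbing might look like.
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class SendReceiveE2ETests
{
    [Test]
    public async Task EditInFlexAndLf_MergesCorrectly()
    {
        // Steps 1-2: seed LexBox with a known .zip, clone into Language Forge
        var project = await LexBoxApi.CreateProject("sena-3", "sena-3.zip");
        await LanguageForge.CloneProject(project);
        Assert.That(LfMongoHelper.CountEntries(project),
            Is.EqualTo(ExpectedSeedEntryCount)); // count known from the seed data

        // Steps 3-7: simulate a FieldWorks edit followed by a Send/Receive
        var fwdata = FwDataTools.MkFwData(project);   // flexbridge tool
        LcmTestHelper.AddEntry(fwdata, "lexeme", "gloss");
        FwDataTools.SplitFwData(fwdata);              // tool doesn't exist yet
        Mercurial.CommitAndPush(project, "Simulated FLEx edit");

        // Steps 8-9: edit on the LF side, then trigger LfMerge
        LfMongoHelper.EditEntryGloss(project, "other lexeme", "new gloss");
        await LanguageForge.TriggerSendReceive(project);

        // Steps 10-13: verify the merge result on both sides
        Assert.That(LfMongoHelper.GetEntry(project, "lexeme"), Is.Not.Null);
        var merged = await LexBoxApi.DownloadFwData(project);
        LcmTestHelper.VerifyEntryGloss(merged, "other lexeme", "new gloss");
    }
}
```

The key design point is that the test only talks to helpers, so each manual step maps onto exactly one call.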

That's a lot of steps that currently have to be done by hand. But with LexBox and Language Forge running side by side on a developer machine, and with the MkFwData and SplitFwData tools becoming available in the flexbridge repo so that the E2E S/R tests can use them, all of it can finally be automated.

rmunn commented 3 months ago

More ideas: a helper class could manage the various liblcm-related testing tasks, and could also handle other common liblcm operations the tests need.

Basically, anything that's a multi-step process of more than 2-3 steps in existing S/R tests should turn into a helper method.
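As a sketch of that idea (every method below is an assumption about what the eventual test utilities might look like, not a real API), the helper class could expose one method per multi-step operation:

```csharp
// Hypothetical helper class: these method names are assumptions about
// what the eventual liblcm test utilities could look like.
using System;

public static class LcmTestHelper
{
    // Load a .fwdata file and return a liblcm cache for it
    public static LcmCache LoadProject(string fwdataPath) =>
        throw new NotImplementedException();

    // Add a new lexical entry with one sense (lexeme form + gloss) in one call,
    // wrapping the create-entry / create-sense / set-gloss sequence
    public static void AddEntry(LcmCache cache, string lexeme, string gloss) =>
        throw new NotImplementedException();

    // Commit the working directory and push to the project's LexBox repo,
    // wrapping the split / commit / push sequence from the scenario above
    public static void CommitAndPush(string repoPath, string message) =>
        throw new NotImplementedException();

    // Count the entries in the project's lexicon, for quick sanity checks
    public static int CountEntries(LcmCache cache) =>
        throw new NotImplementedException();
}
```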

rmunn commented 3 months ago

A thought about repeatability: I could use the NUnit Property attribute to mark each test with the project it uses. A test fixture could then read that value (accessible through the TestContext) to record the project's current "tip" revision before the test starts, and in teardown reset the project to that revision, so that any commits the test pushed are removed.
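The `[Property]` attribute and `TestContext.CurrentContext.Test.Properties` are real NUnit APIs; the two revision helpers below are hypothetical stand-ins for whatever Mercurial plumbing the fixture ends up using. A minimal sketch:

```csharp
// Sketch of the save-tip / reset-tip pattern. GetTipRevision and
// ResetProjectToRevision are hypothetical helpers, not existing APIs.
using NUnit.Framework;

[TestFixture]
public class SendReceiveRepeatabilityTests
{
    private string _savedTip;

    [SetUp]
    public void SaveTipRevision()
    {
        // Read the project code the test declared via [Property(...)]
        var projectCode = (string)TestContext.CurrentContext
            .Test.Properties.Get("projectCode");
        _savedTip = GetTipRevision(projectCode); // hypothetical helper
    }

    [TearDown]
    public void ResetToSavedTip()
    {
        var projectCode = (string)TestContext.CurrentContext
            .Test.Properties.Get("projectCode");
        // Strip any commits the test pushed, restoring the starting state
        ResetProjectToRevision(projectCode, _savedTip); // hypothetical helper
    }

    [Test]
    [Property("projectCode", "sena-3")]
    public void SendReceive_LeavesRepoInExpectedState()
    {
        // ... test body ...
    }
}
```

Because the tip revision is captured per test rather than per fixture, the same project can safely be shared by several tests run in sequence.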