Closed rfk closed 4 years ago
/cc @linacambridge @mhammond @pjenvey @jrconlin
(dammit, I meant to open this as a draft PR; hopefully it goes without saying, but please don't actually merge this to master...)
Update: I deployed this in a really janky and unreliable way to a public address here:
Note that it's http-only, not https. Don't expect too much from it, but do ping me if you try to use it for something and find that it's crashed out :-)
I filed a separate bug to track actually using this to verify client behavior, over in a-s where we can see in our regular planning process: https://github.com/mozilla/application-services/issues/2486
This served its purpose, closing.
Following our meeting earlier today, I hacked up a really simple server to help with testing client behaviour during a migration. The diff in this PR is pretty useless; you might prefer to just check out the `migration-testing` branch or browse its contents directly here: migration testing server branch.

The idea is that you can run this server locally on http://localhost:5000 using `make server`, and point a sync client at it by setting the tokenserver URL to http://localhost:5000/token/1.0/sync/1.5, just like you would for testing a local self-hosting setup.

However! This server is very hacky. It doesn't check any authentication credentials, and by default it assigns all clients to a storage node at http://localhost:5000/storage/1.5/1 (that is, with a fixed sync uid of "1"). So you probably don't want to use multiple accounts with this server at the same time, although multiple clients syncing to the same account should work fine.
The fun bit is when you visit http://localhost:5000/ in your browser. You will see an incredibly bare-bones management interface with three buttons.
If you click the "Begin migration" button, the server will start throwing 503s from the storage node, simulating the short-lived sync outage we plan to impose on users while moving their data on the backend. Try clicking this button and then syncing your client, and check that it behaves sensibly (and doesn't succeed in syncing anything).
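The "Begin migration" toggle is really just a flag that flips every storage request into a 503. A hypothetical sketch of that logic (names are mine, not from the branch):

```python
# Module-level toggle flipped by the "Begin migration" button.
STATE = {"migrating": False}

def begin_migration():
    STATE["migrating"] = True

def storage_status(uid):
    # While a migration is "in progress", every storage request fails
    # with 503 so clients back off, mirroring the planned backend outage.
    if STATE["migrating"]:
        return 503
    return 200
```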
Next, if you click the "Complete migration" button, the server will pretend that it has migrated your data to a new node by telling all clients that their sync storage node is now at http://localhost:5000/storage/1.5/2 (that is, with a fixed sync uid of "2"). It gives clients a 401 if they continue trying to sync with uid "1", to encourage them to discover their new storage endpoint. And it serves whatever data was previously uploaded under uid "1" under the new uid "2", just as if we'd moved the data on the backend.
Finally, you can click the "Reset" button to wipe all the stored data and put the server back in its original state.
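Putting the three buttons together, the whole server boils down to a small state machine. Here's a compact, hypothetical model of it (again, a sketch under my own naming, not the branch's implementation):

```python
class FakeSyncServer:
    def __init__(self):
        self.reset()

    def reset(self):
        # "Reset" button: wipe data, put everyone back on uid "1".
        self.current_uid = "1"
        self.migrating = False
        self.data = {}

    def begin_migration(self):
        # "Begin migration" button: storage starts returning 503s.
        self.migrating = True

    def complete_migration(self):
        # "Complete migration" button: pretend the data moved to a new
        # node by reassigning everyone from uid "1" to uid "2".
        self.migrating = False
        self.current_uid = "2"

    def storage_request(self, uid):
        if self.migrating:
            return 503, None
        if uid != self.current_uid:
            # Stale uid: a 401 sends clients back to the tokenserver,
            # where they discover their new storage endpoint.
            return 401, None
        # Same underlying data, whatever uid it now lives under.
        return 200, self.data

    def token_endpoint(self):
        return f"http://localhost:5000/storage/1.5/{self.current_uid}"
```

The key trick is that `self.data` is never keyed by uid at all, so "migrating" the data is free: only the uid the server advertises changes.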
I tried some basic experiments with this server and Desktop Firefox, and it behaved as we expected in the meeting, just picking up its syncing where it had left off after experiencing a brief migration outage. I haven't tried other browsers yet.
If this seems useful, I can try to stand up a persistent instance of this server on a public-facing URL to make it easier to test e.g. mobile clients that can be fiddly to connect to servers running on localhost.