Closed warner closed 3 years ago
I recently learned about the tracing crate, which sounds like it covers most of the data-gathering parts of this design (one-shot Events, start+finish Spans, both with arbitrary key-value attributes). We'd have to write a "Subscriber" to put this data into a file on disk, and then feed it into the same rendering tool as above.
Unless a strong motivating factor for this comes up (or a motivated person :p), this ain't gonna happen, sorry.
The Python version has a `--dump-timing=FILE` option which writes out a JSON-formatted file containing a list of timing events. Each message to and from the server is recorded, along with timing data that the server provides, to give us a sense of round-trip time and of whether delays come from computation time in the local client, network travel to the server, turnaround time within the server, network travel from the server to the other client, or computation within the remote client. These JSON files are processed by a tool in https://github.com/warner/magic-wormhole/blob/0.10.5/misc/dump-timing.py and a neighboring single-page web app (using https://d3js.org/) to render a scrollable, zoomable timeline browser.

The Rust version should support emitting these files too. The event-based workflow should make it pretty easy: add a `Timing` event, emit it in various places, and route these events to a new tiny machine named "Timing" which gathers them into a vector. Then add a new API call that fetches the vector and writes it out as a JSON file.

Eventually the rendering tool should probably be pulled out into a separate repo and updated (my javascript/web skills are horrible). It might be interesting to record more internal events too, although except for the PAKE computation they'll probably be instantaneous compared to the network delays.
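The "tiny machine that gathers events into a vector" idea could be sketched roughly as below. This is a hypothetical illustration, not the actual wormhole code or the real dump-timing schema: the `Timing`/`TimingEvent` names and field layout are made up here, and a real implementation would use serde for serialization instead of hand-rolled JSON.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// One timing event. The field names are illustrative, not the actual
/// schema that the Python dump-timing.py tool expects.
struct TimingEvent {
    name: String,
    start: f64,                     // seconds since the Unix epoch
    details: Vec<(String, String)>, // arbitrary key-value attributes
}

/// Hypothetical "Timing" machine: other machines route events here,
/// and it just accumulates them into a vector.
struct Timing {
    events: Vec<TimingEvent>,
}

impl Timing {
    fn new() -> Self {
        Timing { events: Vec::new() }
    }

    /// Record an event, timestamping it at the moment of the call.
    fn add(&mut self, name: &str, details: &[(&str, &str)]) {
        let now = SystemTime::now()
            .duration_since(UNIX_EPOCH)
            .expect("system clock before Unix epoch")
            .as_secs_f64();
        self.events.push(TimingEvent {
            name: name.to_string(),
            start: now,
            details: details
                .iter()
                .map(|(k, v)| (k.to_string(), v.to_string()))
                .collect(),
        });
    }

    /// Serialize the gathered events as a JSON array, by hand.
    /// (A real implementation would use serde_json and match the
    /// format that misc/dump-timing.py consumes.)
    fn to_json(&self) -> String {
        let items: Vec<String> = self
            .events
            .iter()
            .map(|e| {
                let kv: Vec<String> = e
                    .details
                    .iter()
                    .map(|(k, v)| format!("\"{}\": \"{}\"", k, v))
                    .collect();
                format!(
                    "{{\"name\": \"{}\", \"start\": {}, \"details\": {{{}}}}}",
                    e.name,
                    e.start,
                    kv.join(", ")
                )
            })
            .collect();
        format!("[{}]", items.join(", "))
    }
}

fn main() {
    let mut timing = Timing::new();
    timing.add("send", &[("phase", "pake")]);
    timing.add("rx", &[("side", "server")]);
    // The new API call would fetch this and write it to the file
    // named by --dump-timing=FILE; here we just print it.
    println!("{}", timing.to_json());
}
```

The final API call would then be a thin wrapper that drains this vector and writes `to_json()` to the path given by `--dump-timing=FILE`.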