lemmi opened 10 months ago
On a quick look, most of these ideas are valid.
I do not consider the page size to be as dramatic as lemmi states, but that does not mean we shouldn't pick some of the low-hanging fruit.
Still, somebody will have to invest considerable time in it, and due to the design of the monitoring, even "easy" changes might not be so quick after all.
A single initial page load of a router clocks in at about 3MB of data in total. Through compression of the main page, only 1.5MB are actually transferred over the wire. Each reload after that comes in at about 1MB.
There are a few pieces of low-hanging fruit that can be picked for some easy improvements:
### precompress assets
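As a rough illustration of what this could mean in practice: compress the assets once at build or deploy time, so the webserver only hands out ready-made `.gz` files (nginx's `gzip_static` serves `foo.js.gz` next to `foo.js`, for example). A minimal sketch, assuming a `static/` directory and the usual text asset types:

```python
# precompress.py -- sketch: gzip text assets ahead of time; the asset
# directory and file types are assumptions
import gzip
from pathlib import Path

ASSET_DIR = Path("static")  # hypothetical asset directory

for path in ASSET_DIR.rglob("*"):
    if path.suffix in {".js", ".css", ".html", ".svg", ".json"}:
        compressed = gzip.compress(path.read_bytes(), compresslevel=9)
        # write foo.js.gz alongside foo.js
        path.with_name(path.name + ".gz").write_bytes(compressed)
```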
### use a more appropriate data format for `/api/load_netif_stats/`
On each page load, at least one call to `/api/load_netif_stats/` is made. A single request weighs in at 500kB-600kB. This can easily be brought down by providing another API endpoint that serves a different format. A fitting choice could be Apache Parquet: it is well supported in multiple languages, especially in `python` and `js`. Just using the included delta encoding brings the size down to 60kB. Additionally enabling compression can further improve this to 50kB, at the cost of more overhead. Integration should be very easy. Here is the small test program I used to compare the sizes:
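(The original script is not reproduced here; the following is a minimal sketch of such a comparison, assuming `pyarrow` and synthetic monotonic counters in place of the real netif stats. Column names and sample counts are made up.)

```python
# compare_sizes.py -- sketch: plain JSON vs. Parquet with delta encoding,
# with and without extra compression; all data below is synthetic
import json
import os

import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# roughly three weeks of 5-minute samples: timestamps plus rx/tx counters
n = 6000
time = np.arange(0, n * 300, 300, dtype=np.int64)
rx = np.cumsum(np.random.randint(0, 10_000, size=n)).astype(np.int64)
tx = np.cumsum(np.random.randint(0, 5_000, size=n)).astype(np.int64)

# baseline: a plain JSON payload, standing in for the current format
with open("stats.json", "w") as f:
    json.dump({"time": time.tolist(), "rx": rx.tolist(), "tx": tx.tolist()}, f)

table = pa.table({"time": time, "rx": rx, "tx": tx})

# delta encoding only; monotonic counters shrink drastically this way
pq.write_table(table, "stats_delta.parquet", use_dictionary=False,
               column_encoding="DELTA_BINARY_PACKED", compression="none")

# delta encoding plus general-purpose compression on top
pq.write_table(table, "stats_delta_zstd.parquet", use_dictionary=False,
               column_encoding="DELTA_BINARY_PACKED", compression="zstd")

for name in ("stats.json", "stats_delta.parquet", "stats_delta_zstd.parquet"):
    print(name, os.path.getsize(name), "bytes")
```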
### split router stats into api

The delivered `html` embeds a huge portion of the stats inline as `javascript` variables here. This is problematic for several reasons. Moving those stats out of the page and encoding them as `parquet` will yield a size of about 90kB; additional compression can further improve this to 75kB.
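A minimal sketch of what serving the stats as their own resource could look like, with Flask used purely for illustration (the framework, route, and file layout are all assumptions, not the monitoring's actual code):

```python
# sketch: hand out pre-generated per-router parquet files instead of
# inlining the stats into the html; route and paths are hypothetical
from flask import Flask, send_from_directory

app = Flask(__name__)

@app.route("/api/router_stats/<mac>.parquet")
def router_stats(mac):
    # send_from_directory also guards against path traversal in <mac>
    return send_from_directory("stats", f"{mac}.parquet",
                               mimetype="application/octet-stream")

if __name__ == "__main__":
    app.run()
```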
Once these or similar changes are made (there might be another file format that is better suited, for example), there is another option to vastly improve the server load and transfer sizes.

### caching
Historic data will not change. Therefore there is no reason to keep resending everything. Instead, very deliberate use of caching should be made.
A simple scheme to achieve this could be the following: rather than performing a single request to `/api/load_netif_stats/XYZ`, the client instead makes several smaller requests, each covering a fixed slice of time. Everything except the request for the current slice can be heavily cached, potentially forever, on the client. The server also only ever needs to provide data for recent events dynamically, and can generate the historic data once.
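A sketch of the server side of such a scheme, again with Flask purely for illustration; the month-based bucketing, the route, and the helper are assumptions:

```python
# sketch: time-bucketed stats endpoint with deliberate caching; closed
# buckets are immutable, only the current one is served dynamically
from datetime import datetime, timezone
from flask import Flask, jsonify, make_response

app = Flask(__name__)

def load_bucket(mac, year, month):
    # hypothetical helper; would read pre-generated data for one month
    return {"mac": mac, "year": year, "month": month, "samples": []}

@app.route("/api/load_netif_stats/<mac>/<int:year>/<int:month>")
def netif_stats(mac, year, month):
    resp = make_response(jsonify(load_bucket(mac, year, month)))
    now = datetime.now(timezone.utc)
    if (year, month) < (now.year, now.month):
        # finished months never change: clients may cache them forever
        resp.headers["Cache-Control"] = "public, max-age=31536000, immutable"
    else:
        # the current month still grows: cache only briefly
        resp.headers["Cache-Control"] = "public, max-age=300"
    return resp
```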
With this, a page reload should come to as little as 60kB uncompressed, or 7kB (!) compressed, for the html, plus an additional request for the most recent historic data, which should be on the order of a couple of hundred bytes to a few kilobytes.