Currently, Murakami runs its test runners four times per day at randomized times. Tests run sequentially, always in the same order. This works well on most networks, but team members and collaborators have reported behavior that has us considering future enhancements to how Murakami runs and/or coordinates tests:
- On some European networks, traffic shapers kick in after roughly 5-10 seconds of sustained traffic. Because Murakami's tests run back-to-back, sequential tests on the same connection can report noticeably different results.
- An interim fix, which could itself generate interesting findings on traffic shaping, would be to randomize the order of the test runners and record the sequence of tests for each run (see the first sketch after this list).
- A different tack would be to space out the test runners so they don't run back-to-back, pausing for X seconds (TBD) between runners at each randomized test time (also shown in the first sketch below).
- Researchers may be interested in selectable or definable test schedules, such as running one randomly chosen enabled test runner twelve times per day, which would be more robust from a statistical point of view (see the second sketch below).
- With development of the Murakami data dashboard service underway, we have discussed future options for coordinating tests from the dashboard service itself, such as triggering an on-demand test from the Dashboard on one or more Murakami test runner devices. This could also enable the Dashboard service to display currently running tests and/or the schedule of upcoming tests.
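As a rough illustration of the first two ideas, here is a minimal Python sketch of randomized runner ordering with an inter-test pause. The `runners` objects, the `PAUSE_SECONDS` value, and the logging approach are illustrative assumptions, not Murakami's actual internals:

```python
import logging
import random
import time

logger = logging.getLogger(__name__)

# Hypothetical pause between runners; the real value (X) is still TBD.
PAUSE_SECONDS = 30

def run_tests_shuffled(runners, pause=PAUSE_SECONDS):
    """Run each enabled test runner once, in random order, pausing
    between runners and recording the sequence for later analysis."""
    order = list(runners)
    random.shuffle(order)
    # Record the sequence so each run's ordering can be correlated
    # with its results when studying traffic shaping.
    logger.info("Test order for this run: %s",
                [r.__class__.__name__ for r in order])
    for i, runner in enumerate(order):
        runner.run()
        if i < len(order) - 1:
            time.sleep(pause)
    return order
```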
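The randomized twelve-runs-per-day schedule could be expressed roughly as follows; the function shape and names are hypothetical:

```python
import datetime
import random

def random_daily_schedule(runners, runs_per_day=12):
    """Pick `runs_per_day` random times over the next 24 hours and
    assign one randomly chosen enabled runner to each slot. Returns
    a list of (datetime, runner) pairs sorted by time."""
    now = datetime.datetime.now(datetime.timezone.utc)
    slots = sorted(
        now + datetime.timedelta(seconds=random.uniform(0, 86400))
        for _ in range(runs_per_day)
    )
    return [(slot, random.choice(runners)) for slot in slots]
```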
These cases require rethinking how test runners are managed and coordinated. Instead of each test runner choosing a random time on the device, the remote Dashboard service would provide schedule configuration to each device.
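A minimal sketch of how a device might pull its schedule configuration, assuming a hypothetical JSON endpoint on the Dashboard service; the URL, query parameter, and response shape are assumptions, since the actual Dashboard API has not yet been defined:

```python
import json
import urllib.request

# Hypothetical endpoint; the real Dashboard API is not yet defined.
SCHEDULE_URL = "https://dashboard.example.net/api/v1/schedule"

def fetch_schedule(device_id):
    """Fetch schedule configuration for this device from the remote
    Dashboard service, falling back to the local schedule on failure."""
    try:
        url = f"{SCHEDULE_URL}?device={device_id}"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)
    except OSError:
        # Network or service failure: keep running on the last known
        # (or default) local schedule rather than stopping tests.
        return None
```

Falling back to a local schedule when the Dashboard is unreachable would keep devices collecting measurements even when coordination is temporarily unavailable.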