The labscript Python library provides a translation from simple Python code to complex hardware instructions. The library is used to construct a "connection table" containing information about what hardware is being used and how it is interconnected. Devices described in this connection table can then have their outputs set by using a range of functions, including arbitrary ramps.
Changes to labscript that contain unexpected bugs risk breaking people's experiments. Worse, a mistake could adversely affect the ability to obtain scientific results, or even invalidate publications. Corner cases are often hard to catch during testing and may persist for years. To combat this, we should implement a test suite.
I suggest that we expose the runviewer Shot class in an API and use the traces (reverse engineered from the hardware instructions stored in the hdf5 file) to verify that outputs maintain the same behaviour after labscript changes.
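As a minimal sketch of what such a regression test could look like: this assumes a hypothetical public API exposing runviewer's Shot class as `runviewer.Shot`, with a `traces` dict mapping channel names to `(times, values)` arrays. None of that API exists yet; it is the shape proposed above, not current runviewer behaviour.

```python
# Sketch of a trace-based regression test. `runviewer.Shot` and its
# `traces` attribute are assumed API shapes, not existing interfaces.
import numpy as np
from runviewer import Shot  # hypothetical import; see proposal above

def assert_traces_unchanged(reference_h5, candidate_h5):
    """Fail if any output trace differs between two compiled shots."""
    ref = Shot(reference_h5)
    new = Shot(candidate_h5)
    assert set(ref.traces) == set(new.traces), "channel sets differ"
    for channel, (ref_t, ref_v) in ref.traces.items():
        new_t, new_v = new.traces[channel]
        np.testing.assert_array_equal(
            ref_t, new_t, err_msg=f"{channel}: evaluation times changed")
        np.testing.assert_array_equal(
            ref_v, new_v, err_msg=f"{channel}: output values changed")
```

The reference shot would be compiled once with a known-good labscript version and committed alongside the test suite; the candidate shot is compiled from the same script by the version under review.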
We should also:
- Detect ramps in the test suite and warn (but not fail) when the clock ticks, and thus the ramp evaluation points, have changed slightly (see the first sketch after this list).
- Determine whether part of a trace is simply shifted in time, so we can judge whether the shift is expected, e.g. because we increased the trigger time when fixing a bug (see the second sketch).
- Create a comprehensive test shot that exercises all standard hardware, along with expected traces generated by hand (so that the test does not fail, or worse, pass, due to a mistake in the runviewer API), to be run before merging pull requests.
- Use Mercurial to pull the expected behaviour of past versions for comparison with that of the current version (see the last sketch).
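The "warn, don't fail" ramp check from the first item might look like the following. It assumes the same `(times, values)` trace arrays as the sketch above; the tick tolerance is an illustrative guess, not a labscript constant.

```python
# Sketch: tolerate slightly shifted clock ticks with a warning, but
# still fail if the ramp's values change.
import warnings
import numpy as np

def check_ramp(ref_t, ref_v, new_t, new_v, tick_tol=1e-9):
    if len(ref_t) != len(new_t) or not np.allclose(ref_t, new_t, atol=tick_tol):
        # Evaluation points moved: warn rather than fail, then compare
        # values at the reference times by interpolation.
        warnings.warn("ramp evaluation points have changed slightly")
        new_v = np.interp(ref_t, new_t, new_v)
    np.testing.assert_allclose(new_v, ref_v, err_msg="ramp values changed")
```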
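For the second item, one way to detect a pure time shift in a digital trace is to compare edge times: a constant shift moves every edge by the same amount. The helper below is invented for illustration and only handles step-like (digital) traces.

```python
# Sketch: estimate whether a digital trace is the same waveform offset
# by a constant delay, by comparing the times of its edges.
import numpy as np

def estimate_time_shift(ref_t, ref_v, new_t, new_v):
    """Return the constant shift in seconds if the traces match after
    shifting, else None."""
    ref_edges = ref_t[np.flatnonzero(np.diff(ref_v)) + 1]
    new_edges = new_t[np.flatnonzero(np.diff(new_v)) + 1]
    if len(ref_edges) != len(new_edges) or len(ref_edges) == 0:
        return None
    shifts = new_edges - ref_edges
    # A pure time shift moves every edge by the same amount.
    if np.allclose(shifts, shifts[0]):
        return shifts[0]
    return None
```

A reviewer could then compare the returned shift against a known, intentional change such as an increased trigger time.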
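And for the last item, a rough sketch of pulling a past version with Mercurial for comparison; `hg clone` and `hg update -r` are standard Mercurial commands, but the repository URL and how the two versions are then imported and compared are left open here.

```python
# Sketch: check out a past labscript revision into a scratch clone so
# the same test script can be compiled under both versions.
import subprocess

def checkout_past_labscript(repo_url, revision, dest):
    subprocess.check_call(["hg", "clone", repo_url, dest])
    subprocess.check_call(["hg", "update", "-r", revision], cwd=dest)
```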
Original report (archived issue) by Philip Starkey (Bitbucket: pstarkey, GitHub: pstarkey).