It should not be super hard to write a script that:
- grabs various image stacks under different conditions
- performs one or more tracking workflows
- extracts individual tracks from the data, and crops volumes to display only those tracks
- presents these tracks in random order to a user, who can explore each track in 3D+t and then simply press e.g. y/n to indicate whether the track is accurate
This would actually be much easier to achieve than a general track editing interface, and would immediately allow us to estimate our tracking accuracy.
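None of the pieces are pinned down in this issue, so here is just a minimal sketch, assuming napari as the viewer, the image as a (T, Z, Y, X) numpy array, and tracks as an (N, 5) array of (track_id, t, z, y, x) rows; `crop_to_track`, `annotate_tracks`, and `PAD` are hypothetical names. The key bindings use `viewer.bind_key`, the same mechanism as Nick's example linked below.

```python
import numpy as np
import napari

PAD = 10  # voxels of spatial context around each track's bounding box


def crop_to_track(image, track_rows, pad=PAD):
    """Crop a (T, Z, Y, X) image to a padded box around one track.

    `track_rows` is the (n_points, 5) slice of the tracks array
    belonging to a single track_id, columns (track_id, t, z, y, x).
    """
    t0, t1 = int(track_rows[:, 1].min()), int(track_rows[:, 1].max()) + 1
    lo = np.maximum(track_rows[:, 2:].min(axis=0).astype(int) - pad, 0)
    hi = np.minimum(track_rows[:, 2:].max(axis=0).astype(int) + pad + 1,
                    image.shape[1:])
    return image[t0:t1, lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]


def annotate_tracks(image, tracks, seed=None):
    """Present cropped tracks one at a time, in random order.

    The user explores each crop in 3D+t and presses y/n; answers
    are collected in `results`, keyed by track_id.
    """
    rng = np.random.default_rng(seed)
    ids = rng.permutation(np.unique(tracks[:, 0]))
    results = {}
    state = {'i': 0}
    viewer = napari.Viewer()

    def show_next():
        if state['i'] >= len(ids):
            print('all tracks annotated:', results)
            return
        tid = ids[state['i']]
        viewer.layers.clear()
        viewer.add_image(
            crop_to_track(image, tracks[tracks[:, 0] == tid]),
            name=f'track {int(tid)}',
        )

    def record(answer):
        if state['i'] >= len(ids):
            return
        results[ids[state['i']]] = answer
        state['i'] += 1
        show_next()

    @viewer.bind_key('y')
    def accept(viewer):
        record(True)

    @viewer.bind_key('n')
    def reject(viewer):
        record(False)

    show_next()
    napari.run()
    return results
```

Randomising over track IDs keeps each presentation independent, and tracking accuracy then falls out as the fraction of "y" answers.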
Over time, we can do more elaborate things, like also estimating the “false negative” rate, i.e. how often a track ends when we can actually still see the platelet in the next/previous frame.
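The same y/n loop would work for this: present track endings instead of whole tracks, and ask whether the platelet is still visible in the adjacent frame. The rate itself is then simple binomial arithmetic; a sketch, where `ended_early` is a hypothetical boolean array of those annotations:

```python
import numpy as np


def false_negative_rate(ended_early):
    """Fraction of annotated track endings that were false negatives.

    `ended_early` holds one boolean per annotated track ending:
    True means the track ended even though the platelet was still
    visible in the next frame. Returns the rate and a simple
    normal-approximation 95% half-interval.
    """
    ended_early = np.asarray(ended_early, dtype=bool)
    p = ended_early.mean()
    se = np.sqrt(p * (1 - p) / ended_early.size)
    return p, 1.96 * se
```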
Nick has example code for binding image annotations to keys here:
https://github.com/sofroniewn/image-demos/blob/master/examples/keiser_tiles.py
so this issue is all about the finicky extraction of 3D+t volumes, randomisation, etc., but there are no show-stoppers there.

This was done by @AbigailMcGovern in #15 and #21. There is still some cleanup to be done, but I consider this issue closed. We can open a new issue for cleaning up the scripts and UI.