Currently, when tags are imported, their slices and contours are saved to disk. For the sliders to be useful, the set of imported images has to be large, and this consumes a lot of disk space (up to 10 MB per image).

Instead, tagging could be moved entirely into the annotation process (it is fast enough). Rather than saving the whole image, the non-white pixels could be stored as a flattened array separated by whitespace. Contours can then be extracted from this array with ease, and the box margin is saved alongside it so the tag can easily be reconstructed.
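A minimal sketch of this storage scheme, assuming the non-white pixels are stored as (row, column) coordinates only (i.e. a binary mask, not pixel intensities) and the box margin is prepended as the first token; the function names `encode_non_white` and `decode_tag` are hypothetical, not names from the codebase:

```python
import numpy as np

def encode_non_white(img, margin, white=255):
    """Serialize a grayscale tag image as 'margin y0 x0 y1 x1 ...'."""
    ys, xs = np.nonzero(img < white)              # coordinates of non-white pixels
    flat = np.stack([ys, xs], axis=1).ravel()     # flatten coordinate pairs
    return f"{margin} " + " ".join(map(str, flat))

def decode_tag(text, shape):
    """Rebuild the margin and a binary mask from the whitespace-separated string."""
    parts = np.array(text.split(), dtype=int)
    margin, coords = int(parts[0]), parts[1:].reshape(-1, 2)
    mask = np.full(shape, 255, dtype=np.uint8)    # start from an all-white canvas
    mask[coords[:, 0], coords[:, 1]] = 0          # paint the stored pixels black
    return margin, mask
```

From the reconstructed binary mask, contours can be extracted with a standard routine such as OpenCV's `cv2.findContours`, so neither the slices nor the contours need to be persisted.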
[x] non-white pixels should be saved in the detector and passed to the Tagger
[x] removed axis titles (re-done)
[x] in show_tag, replace the extrafileobjects method
[x] in manual_tag, replace the part where fileobjects are saved
[x] draw contour on slice is no longer used
[x] fixed: saving tags to the database saved the wrong image