Closed iandobbie closed 2 years ago
@ChrisChong8 asked me about this issue in email; I brought it here to keep a record.
First thing is what metadata it is sensible to have. A first guess is:

- Pixel size — currently this is on the objective, but what happens if you have different cameras?
- Stage position — the current stage position.
- Excitation wavelength(s) — what light sources were active when the image was taken.
- Camera ROI
- Camera binning
A slight complication is that some of this information should probably be cached during experiments, as we don't want additional calls, especially over the network, while an experiment is running.
I would start by creating a dict with this information in it and having it populated on a new-image event, so long as an experiment isn't running. I think the place to have this is on the camera object, which would be in cockpit/handlers/camera.py. You can find out how to get the relevant metadata in cockpit/experiment/datasaver.py, which is the code that saves image data during experiments.
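A minimal sketch of the idea above. All names here (`CameraHandler`, `refresh_metadata`, `on_new_image`) are illustrative, not the actual cockpit API; the point is that the metadata dict lives on the camera object and is only re-queried when no experiment is running.

```python
class CameraHandler:
    """Hypothetical sketch: cache per-camera metadata so that image events
    do not trigger extra device calls while an experiment is running."""

    def __init__(self, name):
        self.name = name
        self._metadata = {}  # last known metadata for this camera

    def refresh_metadata(self, pixel_size, stage_position,
                         excitation_wavelengths, roi, binning):
        # Update all cached metadata in one go; called only while idle.
        self._metadata = {
            'PixelSize': pixel_size,
            'StagePosition': stage_position,
            'ExcitationWavelengths': excitation_wavelengths,
            'ROI': roi,
            'Binning': binning,
        }

    def on_new_image(self, experiment_running, **current_values):
        # During an experiment, return the cached values untouched to
        # avoid additional (possibly networked) calls per frame.
        if not experiment_running:
            self.refresh_metadata(**current_values)
        return dict(self._metadata)
```

The cache is deliberately only refreshed outside experiments; during an acquisition the handler hands back whatever was last cached.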
Note that what metadata you can store is limited not only by our code but also by what the file format supports. At the moment we only support the MRC file format, which has no support for different pixel sizes between channels, ROI, or binning. (That information is often available in dv.log, but that is more a human-readable file than one meant for machine reading. That said, Bioformats will read the dv.log.)
Yes, but we should have this data available even if we can't currently store it. Different pixel sizes are a major issue: although they are supported by the TIFF standard, I think they are not possible in dv. Even in TIFF, I bet most software will not cope with this.
One other comment to add to this: we are bound to softWoRx until we can do 3D SIM reconstruction with something else, and softWoRx binds us to the dv file format.
Added pixel size and emwavelength to cameraview objects, and then included them in the saved data if they exist.
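The "include them if they exist" pattern above could be sketched like this. The helper name `collect_view_metadata` and the attribute-to-key mapping are assumptions for illustration; only the `pixelSize`/`emwavelength` attribute names come from the comment above.

```python
from types import SimpleNamespace


def collect_view_metadata(view):
    """Hypothetical helper: pull optional attributes off a camera-view
    object and include them in the saved metadata only if present."""
    metadata = {}
    for attr, key in (('pixelSize', 'PixelSize'),
                      ('emwavelength', 'EmissionWavelength')):
        value = getattr(view, attr, None)
        if value is not None:
            metadata[key] = value
    return metadata


# Example: a view missing emwavelength simply yields no entry for it.
view = SimpleNamespace(pixelSize=0.104)
print(collect_view_metadata(view))  # {'PixelSize': 0.104}
```

Using `getattr` with a default keeps the saver working unchanged for older view objects that lack the new attributes.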
It would be very nice to have additional metadata for each image window. Suggestions include:

- Pixel size
- Excitation wavelengths (emission is already there)
- Stage position
This would allow the return of the scale bar, transferring data to the mosaic to put it in the correct place rather than using the current stage position, and saving a snapped image with semi-reasonable metadata.