See branch `dng_experiment` in both this and sciimg for work on this.
Most recent change (on the `kmgill_develop` branch) updates how images are saved. Out of the box, there is no visible change for the user: PNG is still the default. However, if you set the environment variable `MARS_OUTPUT_FORMAT` to one of `png`, `jpg`, `tif`, or `dng`, that format will be picked up and used.
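For reference, a minimal sketch of what this kind of environment-variable-driven selection can look like in Rust (the names here are illustrative, not MRU's actual code):

```rust
use std::env;

/// Supported output formats; PNG stays the default.
#[derive(Debug, Clone, Copy, PartialEq)]
enum OutputFormat {
    Png,
    Jpg,
    Tif,
    Dng,
}

fn output_format_from_env() -> OutputFormat {
    match env::var("MARS_OUTPUT_FORMAT").as_deref() {
        Ok("jpg") => OutputFormat::Jpg,
        Ok("tif") => OutputFormat::Tif,
        Ok("dng") => OutputFormat::Dng,
        // Unset, "png", or anything unrecognized falls back to PNG.
        _ => OutputFormat::Png,
    }
}
```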
Regression: Output from the `calibrate` subcommand for ChemCam is failing due to an invalid check for the ability to export RGBA data.
Great to see you started working on this! If I read your changes correctly, your DNG implementation is so far only a plain file-format alternative to PNG, without writing any of the DNG-specific tags for color calibration, sensor layout, and general EXIF data? Without these, the DNG isn't very useful and won't be readable by Lightroom/Photoshop.
Here is the official DNG specification that details all the available tags: https://helpx.adobe.com/content/dam/help/en/photoshop/pdf/dng_spec_1_6_0_0.pdf
And here is the documentation for the tags in the Rust implementation: https://docs.rs/dng/latest/dng/tags/index.html
Some of these tags need values that are already public but not contained in the JSON metadata, like `FocalLength`, `Aperture`, `CFAPattern`, etc.
Another important group are the color calibration tags like `ForwardMatrix1`, `ColorMatrix1`, and `AsShotNeutral`, which supply the image editing program with all the parameters it needs to interpret the image colors correctly. The values for these tags have not been published by the mission so far, but I derived my own from the publicly available spectral response functions.
It would be important to support all these tags and possibly read the values from a new parameter file specified in the calibration data folder. That way a user can supply their own parameters if public values are not available yet.
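As a sketch of what reading such a parameter file could look like (the field names and JSON layout are my assumptions, not an agreed-upon format):

```rust
use serde::Deserialize;

/// Hypothetical per-instrument color calibration parameters,
/// loaded from a JSON file in the calibration data folder.
#[derive(Debug, Deserialize)]
struct DngColorParams {
    /// 3x3 XYZ-to-camera matrix, row major (for the DNG ColorMatrix1 tag).
    color_matrix_1: [f64; 9],
    /// 3x3 camera-to-XYZ(D50) matrix, row major (for ForwardMatrix1).
    forward_matrix_1: [f64; 9],
    /// White balance of the shot (for AsShotNeutral).
    as_shot_neutral: [f64; 3],
}

fn load_color_params(path: &str) -> Result<DngColorParams, Box<dyn std::error::Error>> {
    let text = std::fs::read_to_string(path)?;
    Ok(serde_json::from_str(&text)?)
}
```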
If no color calibration parameters are supplied, MRU could skip the color calibration part by setting the `PhotometricInterpretation` tag to `RGB`; that way the raw sensor values will just be interpreted as sRGB. However, this also gives up a lot of the advantages of the DNG format.
If the color calibration parameters are supplied, `PhotometricInterpretation` can be set either to `LinearRaw` (already debayered) or `CFA` (raw Bayered data, to be debayered by the image editing program). This is where DNG really starts to shine, because programs like Lightroom or RawTherapee will apply the color calibration non-destructively, with many options for noise reduction, sharpening, possibly better debayering, changing exposure, etc.
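For reference, the numeric `PhotometricInterpretation` values involved here, as defined in the TIFF 6.0 and DNG specifications (the enum wrapper itself is just illustrative):

```rust
/// PhotometricInterpretation values relevant to this discussion,
/// per the TIFF 6.0 and DNG specifications.
#[derive(Debug, Clone, Copy)]
#[repr(u16)]
enum Photometric {
    Rgb = 2,           // already-demosaiced RGB, interpreted as-is (sRGB here)
    Cfa = 32803,       // raw Bayer mosaic; the editor performs debayering
    LinearRaw = 34892, // demosaiced but still in linear camera space
}
```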
I would be happy to supply placeholder files that just interpret the images as sRGB, and I will probably set up my own calibration data repository for parameter files with color calibration.
Yeah, so far I've only gotten the basic format working. Next steps involve the calibration, color, and hardware properties, along with the other metadata otherwise found in the JSON label file.
MRU is setting `PhotometricInterpretation` to `RGB` for debayered images. Remember that the outputs of MRU's `calibrate` are already calibrated. If you'd rather let another program perform the calibration, a simplified raw2dng would be more appropriate. If that's what you'd prefer, submit an issue and I'll start planning that out.
To your first point, populating the EXIF metadata of all output images would be really useful. Capture time would probably have to be written in UTC, because Mars time isn't supported by these standards. Another useful addition would be the (35mm-equivalent) focal length, so panorama and photogrammetry software would have a good starting point for image scale, which makes these images easier to use.
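On the capture-time point: EXIF stores `DateTimeOriginal` as a `YYYY:MM:DD HH:MM:SS` string, so a UTC timestamp could be formatted along these lines (a sketch using the `chrono` crate):

```rust
use chrono::{DateTime, Utc};

/// Format a UTC capture time the way EXIF DateTimeOriginal expects it.
fn exif_datetime(utc: DateTime<Utc>) -> String {
    utc.format("%Y:%m:%d %H:%M:%S").to_string()
}
```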
To your second point: while DNG images traditionally contain the completely raw sensor data, I found that vignette correction using the actual flatfield gives better results than using the DNG `FixVignetteRadial` opcode to specify a polynomial model of the vignette. First, that model would need to be derived; second, it can't correct a color cast like the one in the M20 Navcam flat, because it acts on all color channels the same way.
Color cast visible in the flat: https://github.com/kmgill/mars-raw-utils-data/blob/487cc06424abf3fc407523e8783bc12d8060addb/caldata/FM103_flat_rgb_bias_mask_sf4.jpg
What I found to work well for the M20 ECAMs is preprocessing the image in the following way before storing it in the DNG:
Then I write this precalibrated image data into the DNG with `LinearRaw` interpretation and the accompanying color calibration tags.
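A minimal sketch of the flatfield-based correction described above, assuming a per-channel flat of the same size and normalization by the flat's mean (function names are illustrative):

```rust
/// Divide the raw channel by its flatfield, rescaled by the flat's mean
/// so overall brightness is preserved. This corrects vignetting as well
/// as any per-channel color cast baked into the flat.
fn flatfield_correct(raw: &[f32], flat: &[f32]) -> Vec<f32> {
    assert_eq!(raw.len(), flat.len());
    let mean = flat.iter().sum::<f32>() / flat.len() as f32;
    raw.iter()
        .zip(flat.iter())
        .map(|(&r, &f)| if f > 0.0 { r * mean / f } else { r })
        .collect()
}
```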
For Mastcam-Z I would propose a similar workflow, but try to keep the Bayered nature of the image so that other (possibly more advanced) debayering algorithms can be used. For example, I've seen a pretty nice quality improvement with Adobe's AI-powered 'Enhance' debayering function, which can produce much sharper results than, for example, AMaZE.
And implement it in such a way as to make the addition of TIF and JPEG output straightforward.