One of the first steps currently being done is to transform the integer DNG data from 16-bit unsigned ints to 16-bit floats. However, this causes a loss of precision for higher values: a 16-bit float can't represent every 16-bit integer exactly, since its 11-bit significand only covers integers up to 2^11 (2048).
For raws with 11 or fewer bits this isn't a problem, but for raws with more than 11 bits (most new cameras) this causes problems in the highlights, and problems in any number of other areas when doing a bracketed burst.
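A minimal sketch of the precision loss using NumPy (the array names here are illustrative, not from the actual pipeline): float16 represents every integer up to 2^11 = 2048 exactly, but above that the spacing between representable values grows to 2, then 4, and so on, so highlight values get quantized.

```python
import numpy as np

# Sample sensor values around and above the 2^11 boundary.
values = np.array([2047, 2048, 2049, 4095], dtype=np.uint16)

# Convert to half precision, as in the uint16 -> float16 step described above.
as_f16 = values.astype(np.float16)

# Round-trip back to integers to see which values survived exactly.
roundtrip = as_f16.astype(np.uint16)

# 2047 and 2048 are preserved, but 2049 rounds to 2048 and
# 4095 rounds to 4096: precision is already lost at 12 bits.
print(list(zip(values.tolist(), roundtrip.tolist())))
```

On a 12-bit raw this means adjacent highlight levels collapse together, and on a 14- or 16-bit raw the spacing error is larger still.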
The easiest solution would be to convert everything to 32-bit floats, which can represent all 16-bit integers exactly, but this would double memory usage and might require changes to how the images are being handled (currently they're all loaded into memory at once).