AndreGuo / HDRTVDM

The official repo of "Learning a Practical SDR-to-HDRTV Up-conversion using New Dataset and Degradation Models" in CVPR2023.

Why the SDR images look better than the HDR images #5

Closed jchhuang closed 4 months ago

jchhuang commented 4 months ago

Hello Dr. Guo, we are interested in your work. However, when we download the datasets and view the images on a screen with HDR support, we find that the SDR images look better than the HDR images. This phenomenon is very confusing; could you help explain it? Thanks a lot. For example, the HDR image in the following screenshot looks dimmer and more colorless than the SDR example.

AndreGuo commented 4 months ago

HDR/WCG uses a larger luminance and color container. As a result, the same real-world luminance or color maps to a smaller normalized pixel value in HDR/WCG than in SDR (e.g. 100 nit is Y=1 in SDR, but Y≈0.51 in PQ HDR).

If you show an SDR image and an HDR/WCG image of the same scene on the same SDR display, the SDR value Y=1 is rendered at 100% of the display's luminance capacity, while the HDR value Y≈0.51 is rendered at only about 51% of that capacity, so the HDR image appears dimmer.

The same goes for color: the same scene color has a smaller normalized pixel value in WCG (e.g. BT.2020) than in CCG (BT.709/sRGB), so when both are shown on the same BT.709/sRGB display, the WCG image appears more desaturated.

Therefore, the HDR/WCG image in an HDR-SDR pair from our dataset will always look dimmer and more desaturated if both are interpreted by the same display (no matter which display you use, simply because the HDR/WCG image has smaller normalized pixel values). To get the correct look, display them separately: the SDR image on an sRGB/SDR display and the HDR/WCG image on an HDR/WCG display.
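If it helps to see the numbers, below is a minimal, self-contained sketch (not part of this repo's code) of the two effects described above: the SMPTE ST 2084 (PQ) inverse EOTF maps 100 nits to a code value of only about 0.51, and the standard BT.709-to-BT.2020 primaries matrix (as in ITU-R BT.2087) shrinks the normalized RGB values of a saturated BT.709 color. The script and function names are illustrative assumptions, not HDRTVDM code.

```python
import numpy as np

def pq_inverse_eotf(luminance_nits):
    """Map absolute luminance (cd/m^2) to a normalized PQ signal in [0, 1] (SMPTE ST 2084)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(np.asarray(luminance_nits, dtype=np.float64) / 10000.0, 0.0, 1.0)
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

# 100 nits is nominal SDR peak white, i.e. code value 1.0 in a 100-nit SDR grading,
# but only ~0.508 once PQ-encoded -> looks dim if interpreted as an SDR signal.
print(f"PQ code value of 100 nits ≈ {float(pq_inverse_eotf(100.0)):.3f}")

# Same idea for color: a fully saturated BT.709 red occupies only part of the BT.2020
# gamut, so its normalized RGB values shrink after re-containering (linear light,
# no tone mapping) -> looks desaturated if interpreted as BT.709.
RGB709_TO_RGB2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])
linear_709_red = np.array([1.0, 0.0, 0.0])
print("BT.709 red expressed in BT.2020 primaries:", RGB709_TO_RGB2020 @ linear_709_red)
# -> roughly [0.627, 0.069, 0.016]
```

Running this shows why the HDR/WCG file in each pair carries systematically smaller normalized values than its SDR counterpart, even though both encode the same scene.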