Closed guoyejun closed 4 years ago
Hi. I guess it depends on the format of the HDR video. You need to export the linear RGB HDR frames, for example as OpenEXR images. Then, if you have OpenCV installed with OpenEXR support, the virtualcamera application will be able to read and prepare the training data properly. However, if the HDR video is encoded using a non-linear compression (for example, HDR10 and DolbyVision use the PQ function), you need to invert the mapping, and also transform to RGB if needed.
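To illustrate the "invert the mapping" step: the sketch below implements the SMPTE ST 2084 (PQ) EOTF in NumPy, which maps normalized HDR10 code values back to linear light. The function name `pq_eotf` and the `peak_nits` parameter are my own naming, not part of this project; treat it as a minimal sketch of the linearization step, assuming the frames have already been decoded to normalized floating-point code values.

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1 = 2610.0 / 16384.0
M2 = 2523.0 / 4096.0 * 128.0
C1 = 3424.0 / 4096.0
C2 = 2413.0 / 4096.0 * 32.0
C3 = 2392.0 / 4096.0 * 32.0

def pq_eotf(code, peak_nits=10000.0):
    """Invert the PQ encoding: map normalized code values in [0, 1]
    to linear light, scaled so that 1.0 corresponds to peak_nits cd/m^2."""
    code = np.clip(np.asarray(code, dtype=np.float64), 0.0, 1.0)
    p = np.power(code, 1.0 / M2)
    num = np.maximum(p - C1, 0.0)
    den = C2 - C3 * p
    return np.power(num / den, 1.0 / M1) * peak_nits
```

After linearization you would still need to convert from the video's primaries (typically BT.2020 for HDR10) to the RGB space expected by the training code, and then write the result as OpenEXR (e.g. with OpenCV's `cv2.imwrite`, provided the build has OpenEXR support enabled).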
You should also be aware that even if it's an HDR video, there could potentially be a lot of saturated pixels in the highlights. Some saturation is inevitable, and there are many such examples in the training images we used to train the network (for example, not many images capture the brightest pixels of the sun). However, if the problem is more severe, it could affect the training negatively.
Thanks! Do you have a tool to extract HDR images from an HDR video, taking the PQ function into account, so that we can use the images as input for the training phase? Thanks.
Sorry, I don't think I can point you to any general solution for decoding arbitrary HDR video. The Luma HDRv codec can encode and decode HDR video, but given an arbitrary HDR video it can't recognize the proper metadata. If the HDR video is stored in a Matroska container (.mkv), it could still be possible to use Luma HDRv, but you would have to modify the metadata functionality so that it doesn't attempt to find its own metadata but instead uses the metadata of your HDR video (color space, bit depth, PQ function, etc.).
Hi,
we have some HDR videos and plan to fine-tune the model with these samples. However, HDR videos come in different formats, with different metadata and transfer curves. I don't know how to extract a frame from the video (with or without applying the metadata curve), or which file format should be used for the extracted frame. Could you share how to do this? Thanks.