quietvoid / hdr10plus_tool

CLI utility to work with HDR10+ in HEVC files.
MIT License

[feature] Add Dolby Vision to HDR10+ conversion #37

Closed cromefire closed 3 years ago

cromefire commented 3 years ago

dovi_tool now supports HDR10+ JSON to RPU conversion, so the other way around would be a great feature too. In the best case it's just the same mapping reversed; in the worst case it's not reversible and this feature wouldn't be possible to implement.

This feature could be very useful for some people I guess because:

cromefire commented 3 years ago

> For HDR10+, it's not really possible without image analysis (luminance percentiles). And since Dolby's tonemap is proprietary, we don't know how it would convert to a Bezier curve.

Image analysis would probably be possible, but yeah, that does sound like a lot of work that probably nobody is willing to put in. I guess you don't need to do that for the HDR10+ to DV conversion? Regarding Dolby's tonemap, how did you solve the problem for the HDR10+ to DV conversion? In that case you'd have to map the Bezier curve to their tonemap, right? Or is this a case of Dolby supporting both their own tonemap and Bezier curves, with almost all DV content using Dolby's own tonemap?
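To illustrate what the "luminance percentiles" above refer to: HDR10+ dynamic metadata carries per-scene MaxRGB statistics that have to be measured from the decoded picture. A minimal, hypothetical sketch of that kind of analysis (the struct and function names are made up, and the percentile set and scaling are illustrative assumptions, not the actual ST 2094-40 analysis, which isn't public):

```rust
/// One pixel in linear light, in nits. (Hypothetical type for illustration.)
struct Rgb { r: f64, g: f64, b: f64 }

/// Compute MaxRGB percentiles for a frame (values in nits).
fn maxrgb_percentiles(frame: &[Rgb], percentiles: &[f64]) -> Vec<f64> {
    // Per-pixel MaxRGB, then sort to read off percentiles.
    let mut maxrgb: Vec<f64> = frame
        .iter()
        .map(|p| p.r.max(p.g).max(p.b))
        .collect();
    maxrgb.sort_by(|a, b| a.partial_cmp(b).unwrap());

    percentiles
        .iter()
        .map(|p| {
            let idx = ((p / 100.0) * (maxrgb.len() - 1) as f64).round() as usize;
            maxrgb[idx]
        })
        .collect()
}

fn main() {
    // Toy 4-pixel "frame"; a real analysis would run over every pixel of every frame.
    let frame = vec![
        Rgb { r: 100.0, g: 50.0, b: 20.0 },
        Rgb { r: 800.0, g: 600.0, b: 400.0 },
        Rgb { r: 10.0, g: 5.0, b: 1.0 },
        Rgb { r: 1200.0, g: 900.0, b: 300.0 },
    ];
    // Percentile points similar to those carried in HDR10+ metadata (illustrative).
    let pcts = [1.0, 25.0, 50.0, 75.0, 90.0, 95.0, 99.98];
    println!("{:?}", maxrgb_percentiles(&frame, &pcts));
}
```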

> I'm also not aware of a public AV1 encoder supporting HDR10+ metadata from either binary or JSON. There are only APIs, but no frontend access.

That is true, and probably something only time will solve. There already is a specification for handling HDR10+, and Samsung seems to be invested in it and to have a working sample; it might even already be part of their 8K TVs (which support AV1 AFAIK), but I can't confirm that because I don't have access to one of those.

quietvoid commented 3 years ago

> Regarding Dolby's tonemap, how did you solve the problem for the HDR10+ to DV conversion?

The HDR10+ to Dolby Vision conversion is simply adding L1 metadata, which is the min/max/avg brightness of a shot. That data is already present in HDR10+ metadata, on a different scale (0 to 100 000 instead of 0 to 4095).
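Purely as a hedged illustration of that scale difference (assuming the HDR10+ values map linearly onto 0–10 000 nits and that the DV L1 values are 12-bit PQ codes; the function names here are hypothetical and the exact scaling dovi_tool uses may differ):

```rust
/// SMPTE ST 2084 (PQ) inverse EOTF: nits -> normalized PQ signal in 0..1.
fn pq_encode(nits: f64) -> f64 {
    const M1: f64 = 2610.0 / 16384.0;
    const M2: f64 = 2523.0 / 4096.0 * 128.0;
    const C1: f64 = 3424.0 / 4096.0;
    const C2: f64 = 2413.0 / 4096.0 * 32.0;
    const C3: f64 = 2392.0 / 4096.0 * 32.0;

    let y = (nits / 10_000.0).clamp(0.0, 1.0);
    ((C1 + C2 * y.powf(M1)) / (1.0 + C3 * y.powf(M1))).powf(M2)
}

/// Map an HDR10+ brightness value (0..=100_000) to a 12-bit DV L1 value (0..=4095).
fn hdr10plus_to_l1(value: u32) -> u16 {
    // Assumption: 0..=100_000 covers 0..=10_000 nits linearly.
    let nits = value as f64 / 10.0;
    (pq_encode(nits) * 4095.0).round() as u16
}

fn main() {
    // e.g. a shot peaking around 1000 nits (HDR10+ value 10_000) -> roughly 3079.
    println!("{}", hdr10plus_to_l1(10_000));
}
```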

Proper HDR10+ requires a Bezier curve to adjust the mapping for the target display, and there's no public algorithm for generating it.
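For context on what that curve is: ST 2094-40 metadata carries a knee point plus up to nine anchor coefficients defining a Bernstein/Bezier tone curve above the knee. Evaluating a given curve is simple; the unpublished part is generating good anchors from the content. A hypothetical sketch of evaluation only (function names and anchor values are made up):

```rust
/// Binomial coefficient C(n, k) as f64.
fn binomial(n: u64, k: u64) -> f64 {
    (1..=k).fold(1.0, |acc, i| acc * (n - k + i) as f64 / i as f64)
}

/// Evaluate an Nth-order Bezier curve with endpoints 0 and 1 and
/// intermediate anchors `anchors`, at parameter t in 0..1.
fn bezier(anchors: &[f64], t: f64) -> f64 {
    let n = (anchors.len() + 1) as u64; // curve order
    let mut points = Vec::with_capacity(anchors.len() + 2);
    points.push(0.0);
    points.extend_from_slice(anchors);
    points.push(1.0);

    points
        .iter()
        .enumerate()
        .map(|(k, p)| {
            binomial(n, k as u64) * (1.0 - t).powi((n - k as u64) as i32) * t.powi(k as i32) * p
        })
        .sum()
}

fn main() {
    // Illustrative anchors only; real metadata codes them as 10-bit fractions.
    let anchors = [0.30, 0.55, 0.75, 0.88, 0.95];
    println!("{:.3}", bezier(&anchors, 0.5));
}
```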

And Dolby specifies multiple parameters to adjust the tone mapping, which can't be directly translated. They also represent creative intent, while HDR10+ is based solely on algorithms.

I don't know how HDR10+ displays would react with bogus data, because I don't own one.

This isn't something I'm willing to invest time into, because I have no way to test for myself.

cromefire commented 3 years ago

Okay, thanks for the insight

cromefire commented 2 years ago

> Just like EAC3, mp4 is better. Moreover it is a good idea to use mp4v2.

Well, some of the features and support (not to mention tooling) are just not available on MP4. It also turns out the problems are with the DV decoder of the MediaTek chipset (of my CCwGT), not with MKV (although the TV itself, for example, only seems to be able to play DV from m2ts). But that's not the core problem of this issue.