WernerBleisteiner opened 3 years ago
Hi,
If I remember correctly there was some discussion about this issue a long time ago in the EBU group.
To resolve this, I think these are the relevant questions, for standardisation at least:
What should the renderer do with binaural signals when rendering to loudspeakers? Two options:
What should the renderer do with directspeakers/objects/HOA when rendering to headphones? Two options:
For the first question, it seems clear that we need more metadata, because neither option is acceptable in all circumstances.
For the second, it's clearer what to do, but it's not really within the scope of the EAR. Hopefully this will be resolved soon.
We could make some changes in the EAR to help in the meantime:
Would either of these help? Other suggestions would be welcome!
Hi Tom,
thanks for looking into this. From a practical (broadcasting operations) point of view, I'd argue that a binaural-labelled ADM signal should be passed straight through as a two-channel signal to M+/-30°, regardless of the chosen loudspeaker layout, i.e. bypassing any processing in a binaural renderer. Additional binaural rendering of a genuine binaural signal is in any case absolutely to be avoided.
We do have quite a few legacy and recently created binaural "dummy head stereo" assets in our archive that should be described accordingly in ADM. Their applicability within an EPS- and EAR-based workflow would really be an advantage.
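To make the proposed behaviour concrete, here is a minimal sketch of the "passthrough" option: the two binaural channels are routed unchanged to the M+030/M-030 loudspeakers of whatever layout was chosen, with no binaural processing applied. The function name and the plain list-of-channel-names layout representation are illustrative assumptions, not the EAR's actual API.

```python
import numpy as np

def passthrough_binaural(binaural_stereo, layout_channels):
    """Route a 2-channel binaural signal straight to the +/-30 degree
    loudspeakers of the target layout, bypassing binaural rendering.

    binaural_stereo: (n_samples, 2) array, left/right dummy-head channels.
    layout_channels: ordered speaker names of the output layout
                     (BS.2051-style naming assumed for this sketch).
    """
    n_samples = binaural_stereo.shape[0]
    out = np.zeros((n_samples, len(layout_channels)))
    # Left binaural channel -> M+030, right -> M-030, if present in the layout.
    for src, name in ((0, "M+030"), (1, "M-030")):
        if name in layout_channels:
            out[:, layout_channels.index(name)] = binaural_stereo[:, src]
    return out

# Example: a 5.1-style layout; all other channels stay silent.
layout = ["M+030", "M-030", "M+000", "LFE1", "M+110", "M-110"]
sig = np.random.randn(48000, 2)
mix = passthrough_binaural(sig, layout)
```

Layouts lacking ±30° front channels would need a defined fallback (e.g. the nearest front pair), which is exactly the kind of rule standardisation would have to pin down.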
There's one technically similar/related but perceptually very different use case (important in audio-only/radio as a dramaturgic element): "head-locked stereo" (as FB360 calls it) or "in-head mono". Both kinds of signal require bypassing any binaural rendering, but that's rather another issue. As far as I've tested IRT's nga-binaural-renderer, in-head localisation is generated (bypassing binaural rendering) when an object is located at 0/0/0.
The processing of HOA for -static- binaural rendering is also not my concern here.
I appreciate ADM as universal not just for creating genuine object-based and interactive experiences, but also as a unique format for storing and archiving various audio formats/mixes in one asset (I like to call this "stacked legacy"). However, ear-render up to now cannot handle correctly described 2-channel "binaural" audio (AP/AT_00050001) and states "Don't know how to produce rendering items for type Binaural". See attached screenshot. For us (BR), this interferes with the implementation of ADM in a proposed piloting workflow. This issue is related to the respective one for ear-production-suite.
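Until ear-render handles typeDefinition Binaural natively, one conceivable stopgap is to pre-process the ADM metadata so the two channels are treated as plain stereo DirectSpeakers. The sketch below relabels references to the binaural common-definitions pack (AP_00050001) as the 0+2+0 stereo pack (AP_00010002) before the file is handed to ear-render. This is purely an illustrative workaround under my own assumptions, not an endorsed feature of the EAR, and a real implementation would also have to rewrite the corresponding channel/track format references.

```python
import xml.etree.ElementTree as ET

# Common-definitions pack format IDs (BS.2094); the stereo ID is assumed here.
BINAURAL_PACK = "AP_00050001"
STEREO_PACK = "AP_00010002"  # 0+2+0 stereo

def relabel_binaural_as_stereo(adm_xml_path, out_path):
    """Rewrite every audioPackFormatIDRef pointing at the binaural pack
    so that it references the common-definitions stereo pack instead."""
    tree = ET.parse(adm_xml_path)
    for elem in tree.iter():
        # endswith() tolerates namespace-qualified tags like {ns}audioPackFormatIDRef
        if elem.tag.endswith("audioPackFormatIDRef") and elem.text == BINAURAL_PACK:
            elem.text = STEREO_PACK
    tree.write(out_path)
```

The obvious downside is that the "this is binaural" information is lost downstream, which is why proper metadata support (the first question above) remains the real fix.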