immersive-audio-live / ADM-OSC

An OSC dictionary that implements the Audio Definition Model (ADM)
https://immersive-audio-live.github.io/ADM-OSC/
MIT License

ADM-OSC

An industry initiative to standardize Object-Based Audio (OBA) positioning data in live production ecosystems by implementing the Audio Definition Model (ADM) over Open Sound Control (OSC).

https://immersive-audio-live.github.io/ADM-OSC/

Project Originators

L-Acoustics, FLUX::, Radio-France

Project Contributors

Adamson, d&b Audiotechnik, DiGiCo, Dolby, Lawo, Magix, Merging Technologies, Meyer Sound, Steinberg

Context

Immersive audio is gaining ground in several industries, from music streaming to gaming and from live sound to broadcast. The Audio Definition Model (ADM) is becoming a popular metadata standard in some of these industries, with serial ADM used in broadcast and ADM BWF or XML files used in the studio.

Motivation and goals

Approach

A bijective mapping of the Object subset of ADM onto a standard OSC grammar.
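To make the mapping concrete, here is a minimal sketch of how an ADM Object parameter could travel as an OSC message. It assumes the `/adm/obj/<n>/azim` address form from the dictionary (exact address names and argument types should be checked against the specification), and hand-encodes the packet with the standard library purely for illustration; a real sender would normally use an OSC library.

```python
import struct

def osc_message(address: str, *args: float) -> bytes:
    """Encode a simple OSC message whose arguments are all float32.

    Illustrative encoder only: OSC strings are null-terminated and
    padded to 4-byte boundaries, type tags start with ',', and float
    arguments are big-endian IEEE 754 float32.
    """
    def pad(b: bytes) -> bytes:
        # Null-terminate and pad up to the next 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)

    type_tags = "," + "f" * len(args)
    payload = pad(address.encode()) + pad(type_tags.encode())
    for a in args:
        payload += struct.pack(">f", a)  # big-endian float32
    return payload

# Hypothetical example: set the azimuth of object 1 to 30 degrees.
packet = osc_message("/adm/obj/1/azim", 30.0)
```

The resulting datagram would typically be sent over UDP to the receiving renderer or controller.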

Why OSC?

General principles

Current status

The current dictionary covers most Object properties from the Audio Definition Model. A more complete dictionary covering the remaining parts of the model is under discussion. An OSC live test tool (both talker and listener) is now available.
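On the listener side, a test tool mainly needs to decode incoming messages back into an address and its arguments. The sketch below is a minimal stdlib-only decoder for the float-argument case, assuming the `/adm/obj/<n>/azim` address form from the dictionary; a real tool would handle the full OSC type set, bundles, and network transport.

```python
import struct

def parse_osc(data: bytes):
    """Decode a single OSC message with float32 ('f') arguments.

    Illustrative listener-side sketch: reads the padded address
    string, the padded type-tag string, then each float argument.
    """
    def read_string(buf: bytes, pos: int):
        end = buf.index(b"\x00", pos)
        s = buf[pos:end].decode()
        pos = end + 1
        pos += (-pos) % 4  # skip padding to the next 4-byte boundary
        return s, pos

    address, pos = read_string(data, 0)
    tags, pos = read_string(data, pos)
    args = []
    for t in tags.lstrip(","):
        if t == "f":
            args.append(struct.unpack(">f", data[pos:pos + 4])[0])
            pos += 4
    return address, args

# Hypothetical incoming datagram: object 1 azimuth = 30 degrees.
raw = b"/adm/obj/1/azim\x00" + b",f\x00\x00" + struct.pack(">f", 30.0)
address, args = parse_osc(raw)
```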

Current Specification

See Repository.

Current Discussions

See Issues.

Current development & test tools

Currently supported in

SPAT Revolution (FLUX::), L-ISA Controller (L-Acoustics), Ovation (Merging Technologies), Nuendo (Steinberg), SpaceMap Go (Meyer Sound), QLAB 5 (Figure 53), Space Controller (Sound Particles), Modulo Kinetic (Modulo Pi), Iosono (Barco), FletcherMachine (Adamson).