olofson / audiality2

A realtime scripted modular audio engine for video games and musical applications.
http://audiality.org/
zlib License

Proper music and sound design showcase #352

Open olofson opened 1 year ago

olofson commented 1 year ago

However "interesting," I feel the current demos are kind of rubbish, and are at best vaguely hinting towards the capabilities of Audiality 2. This needs to be corrected.

Plan:

  1. Basic VST plugin, or at least a decent virtual MIDI solution, so one can wire A2 to a DAW for some proper music composition (a minimal MIDI bridge sketch follows this list).
  2. Rudimentary live A2S editor for quick, interactive editing.
  3. Live monitor tool, with graph visualization and performance metering.
  4. Create a bunch of sound effects/objects showcasing the parametric capabilities and the infinite variations made possible by real-time synthesis. Ideas:
    • Explosions.
    • Engines, with a "lively" nature, and proper throttle and load response.
    • Weapons.
    • Structured, parametric ambiences.
    • Modeled, parametric footsteps.
    • Semi-structured music, combining traditional audio tracks and samples with live synthesis.
    • Interactive music.
  5. Tech demos, showcasing key features unique to A2:
    • User-defined mixer/bus/track/voice/event... structure. "Build Your Own Engine."
    • Lightweight voices, with sub-sample accurate timestamped events. Demonstrate how approaches that would bring other middleware to its knees (extreme event rates, timestamping, high voice counts, ...) could be perfectly viable for prototyping, or even production, with A2. (A buffer-splitting sketch of the timestamping idea follows this list.)
    • Lightweight sub-sample accurate scripting, allowing the implementation of complex interactive sound objects, and even custom synthesis algorithms, without having to resort to custom plugins.
    • Worker threads to distribute music, ambiences, and other "high latency" audio over multiple CPU cores (sketched after this list).
    • Offline rendering, using the same assets as for real-time playback, to easily create arbitrarily complex audio without the need for manual bouncing.
    • Using offline rendering to automate the creation of LOD levels for complex sound designs - like SpeedTree™ for audio (see the LOD baking sketch after this list).
  6. Wrap it all into some sort of interactive "game," using some 3D engine. UE springs to mind, but if we're using Godot for the authoring tool, just using that for the demo(s) as well might make more sense.
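
For plan item 1, a minimal sketch of the virtual MIDI side, assuming nothing about the eventual A2 binding: engine_start_voice()/engine_stop_voice() and the pitch convention (linear octaves, 0.0 at MIDI note 60) are hypothetical stand-ins, not Audiality 2 API calls.

```c
/* Hypothetical MIDI-to-engine bridge sketch (plan item 1). The engine_*()
 * calls and the pitch convention are stand-ins, NOT the Audiality 2 API. */
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for whatever the real binding would call (hypothetical). */
static void engine_start_voice(int channel, float pitch, float velocity)
{
	printf("start ch=%d pitch=%.3f vel=%.3f\n", channel, pitch, velocity);
}

static void engine_stop_voice(int channel, float pitch)
{
	printf("stop  ch=%d pitch=%.3f\n", channel, pitch);
}

/* Assumption: linear pitch in octaves, 0.0 at MIDI note 60. */
static float midi_note_to_pitch(uint8_t note)
{
	return (note - 60) / 12.0f;
}

/* Feed raw 3-byte MIDI channel messages from the virtual MIDI port here. */
void midi_bridge_handle(const uint8_t msg[3])
{
	uint8_t status = msg[0] & 0xf0;
	uint8_t channel = msg[0] & 0x0f;
	switch(status)
	{
	  case 0x90:	/* Note On (velocity 0 means Note Off) */
		if(msg[2])
			engine_start_voice(channel,
			    midi_note_to_pitch(msg[1]), msg[2] / 127.0f);
		else
			engine_stop_voice(channel, midi_note_to_pitch(msg[1]));
		break;
	  case 0x80:	/* Note Off */
		engine_stop_voice(channel, midi_note_to_pitch(msg[1]));
		break;
	  default:
		break;	/* CC and pitch bend mapping would go here */
	}
}
```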
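
For the "lightweight voices with sub-sample accurate timestamped events" demo: the sketch below is not how A2 implements it, just a self-contained illustration of why the timestamps matter. The render loop splits each output buffer at event times and hands the fractional frame offset to the voice, so event placement does not depend on buffer size; process_voices() and apply_event() are hypothetical stand-ins.

```c
/* Buffer-splitting sketch for (sub-)sample accurate events. Not the actual
 * A2 implementation; process_voices() and apply_event() are stand-ins. */
#include <math.h>
#include <stddef.h>

typedef struct {
	double	when;	/* event time in frames; fractional part kept */
	float	value;	/* e.g. new amplitude or pitch */
} Event;

static void process_voices(float *out, size_t frames)
{
	(void)out; (void)frames;	/* stand-in for the actual DSP */
}

static void apply_event(const Event *e, double frac)
{
	(void)e; (void)frac;	/* stand-in: apply change, offset phase by 'frac' */
}

/* Render 'frames' frames starting at absolute frame time 'start', stopping
 * at each event so changes land on the right frame; the fractional part is
 * passed on for sub-sample placement. Events must be sorted by time. */
void render(float *out, size_t frames, double start, const Event *ev, size_t nev)
{
	size_t done = 0, i = 0;
	while(done < frames)
	{
		size_t n = frames - done;
		if(i < nev)
		{
			double offset = ev[i].when - (start + (double)done);
			if(offset < 0.0)
				offset = 0.0;
			if((double)n > offset)
				n = (size_t)offset;	/* stop just before the event */
		}
		if(n)
			process_voices(out + done, n);
		done += n;
		/* Apply all events that land inside the next frame. */
		while((i < nev) && (ev[i].when < start + (double)done + 1.0))
		{
			double ipart;
			apply_event(&ev[i], modf(ev[i].when, &ipart));
			++i;
		}
	}
}
```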
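
For the worker thread demo, a sketch of the general pattern rather than A2's actual implementation: a plain thread renders music a block ahead of time, and the real-time callback only copies finished blocks. render_music() is a hypothetical stand-in, and a real engine would use a lock-free FIFO instead of the mutex.

```c
/* Worker thread sketch: pre-render "high latency" content off the
 * real-time thread. render_music() is a hypothetical stand-in, and a
 * real engine would use a lock-free FIFO instead of this mutex. */
#include <pthread.h>
#include <string.h>

#define BLOCK	4096	/* frames per pre-rendered block (latency tradeoff) */

static float block[2][BLOCK * 2];	/* stereo double buffer */
static int ready[2];			/* block filled and waiting? */
static int next_fill = 0, next_play = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t wake = PTHREAD_COND_INITIALIZER;

static void render_music(float *out, int frames)	/* hypothetical */
{
	memset(out, 0, frames * 2 * sizeof(float));	/* silence placeholder */
}

static void *music_worker(void *arg)
{
	(void)arg;
	for(;;)
	{
		pthread_mutex_lock(&lock);
		while(ready[next_fill])
			pthread_cond_wait(&wake, &lock);
		pthread_mutex_unlock(&lock);

		/* Heavy work with no real-time deadline */
		render_music(block[next_fill], BLOCK);

		pthread_mutex_lock(&lock);
		ready[next_fill] = 1;
		next_fill ^= 1;
		pthread_mutex_unlock(&lock);
	}
	return NULL;
}

/* Real-time mix callback side; never waits for the worker. Simplified:
 * assumes whole BLOCK-sized chunks (a real version would track a read
 * position within the current block). */
void mix_music(float *out, int frames)
{
	if(frames > BLOCK)
		frames = BLOCK;
	pthread_mutex_lock(&lock);
	if(ready[next_play])
	{
		memcpy(out, block[next_play], frames * 2 * sizeof(float));
		ready[next_play] = 0;
		next_play ^= 1;
		pthread_cond_signal(&wake);
	}
	else
		memset(out, 0, frames * 2 * sizeof(float));	/* underrun */
	pthread_mutex_unlock(&lock);
}

void start_music_worker(void)
{
	pthread_t th;
	pthread_create(&th, NULL, music_worker, NULL);
}
```

Music and ambiences tolerate a block or two of extra latency, which is what makes this split possible; interactive one-shot effects would stay in the real-time context.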
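
For the offline rendering / LOD items, a sketch of what the baking step could look like. render_effect() is a hypothetical stand-in (here just a noise burst) for rendering the real parametric effect through the engine with the same assets as live playback; the file naming and raw float output are made up for illustration.

```c
/* LOD baking sketch: render one parametric effect at a few detail levels
 * and dump the results to files. render_effect() is a placeholder, NOT a
 * real engine call; output is headerless 32-bit float PCM for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define FS	48000

/* Hypothetical: render one mono effect variation into 'out'. */
static void render_effect(float *out, int frames, float size, unsigned seed)
{
	int i;
	srand(seed);
	for(i = 0; i < frames; ++i)	/* placeholder noise burst, not real synthesis */
		out[i] = (rand() / (float)RAND_MAX - 0.5f) *
		    expf(-4.0f * i / frames) * size;
}

int main(void)
{
	/* Three LOD levels: shorter and simpler as detail drops. */
	const float seconds[3] = { 3.0f, 1.5f, 0.5f };
	int lod;
	for(lod = 0; lod < 3; ++lod)
	{
		int frames = (int)(seconds[lod] * FS);
		float *buf = malloc(frames * sizeof(float));
		char name[64];
		FILE *f;
		if(!buf)
			return 1;
		render_effect(buf, frames, 1.0f, 12345);
		snprintf(name, sizeof(name), "explosion-lod%d.f32", lod);
		f = fopen(name, "wb");
		if(f)
		{
			fwrite(buf, sizeof(float), frames, f);
			fclose(f);
		}
		free(buf);
	}
	return 0;
}
```

At runtime, the full patch would only run for nearby or important instances, while everything else plays one of the baked variations.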

The tests already cover some of these concepts, but they're very minimal, audio-only (no GUIs or anything), and IIRC, the only "documentation" is brief explanations in the form of code comments.