danigb / web-audio-assembler

https://danigb.github.io/web-audio-assembler/examples/
MIT License

Support for alternate engines, management and playback #1

Open zoomclub opened 8 years ago

zoomclub commented 8 years ago

First, I'd like to say that this project is a big step towards answering questions the Web Audio community has been asking since forever (see the link at the bottom of this repo on the need for a standardized framework for building and including Web Audio API instrument/effect "patches").

Representing the node graph descriptor as serializable JSON is key, and keeping the descriptor spec as open-ended as possible will allow it to become a standard. I'm really hoping the descriptor can accommodate a number of sound engine options, including standard Web Audio nodes, the new AudioWorklet (https://github.com/WebAudio/web-audio-api/wiki/AudioWorklet-Examples), and perhaps also Soundfonts. Recently I saw Genish (https://github.com/charlieroberts/genish.js), and it would be outstanding if the descriptor could integrate Genish nodes as well, compiling them as part of the load process.
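To make the idea concrete, here is a rough sketch of what such a descriptor might look like. The shape, field names, and per-node `engine` key are all hypothetical illustrations, not web-audio-assembler's actual spec:

```js
// Hypothetical descriptor shape -- NOT web-audio-assembler's actual spec.
// A per-node "engine" field could select standard Web Audio nodes,
// AudioWorklet processors, soundfonts, or Genish graphs compiled at load time.
const patch = {
  name: 'simple-synth',
  nodes: {
    osc:    { engine: 'waa', type: 'Oscillator', params: { type: 'sawtooth', frequency: 440 } },
    filter: { engine: 'waa', type: 'BiquadFilter', params: { frequency: 1200, Q: 4 } },
    crush:  { engine: 'genish', source: 'tanh(mul(input, 4))' }
  },
  connections: [['osc', 'filter'], ['filter', 'crush'], ['crush', 'output']]
}
```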

To make this work well, it may also be good to have a timbre/instrument manager that maintains a map of activated timbres. The manager would expose activate/deactivate as well as search and route functions to connect tracks in scores to the timbres they target. It already looks like scheduling is part of web-audio-assembler; if so, how would multiple independent tracks, each targeting its own timbre, play back in lockstep?
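As a sketch of the manager idea (every name here is hypothetical, including the assumed `assemble` entry point that builds an instrument from a descriptor):

```js
// Hypothetical manager sketch -- none of these names exist in
// web-audio-assembler; `assemble` is an assumed entry point.
class TimbreManager {
  constructor (context) {
    this.context = context
    this.timbres = new Map() // name -> activated instrument
  }
  activate (name, descriptor) {
    const timbre = assemble(this.context, descriptor)
    this.timbres.set(name, timbre)
    return timbre
  }
  deactivate (name) {
    const timbre = this.timbres.get(name)
    if (timbre) timbre.disconnect()
    this.timbres.delete(name)
  }
  // find the timbre a score track targets
  route (track) {
    return this.timbres.get(track.timbre)
  }
}
```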

danigb commented 8 years ago

Hi @zoomclub,

Thanks for your encouraging words. I hadn't seen the AudioWorklet examples before. They are very useful (I will steal, for sure, the parameter descriptor syntax).

I don't know yet what the scope of the project will be, but I want two things: simplicity and extensibility. What I have in my head (but not in the code yet) is that web-audio-assembler basically orchestrates the creation of (audio node) objects from a graph descriptor, a kind of blueprint. It simply iterates over a series of "plugins" (one for creating audio nodes, another for connecting them, ...) that each refine the output object. With this design it should be quite easy to support alternate engines.
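A minimal sketch of that plugin pipeline, with made-up plugin names rather than the actual API, could be a simple reduce:

```js
// Illustrative sketch of the plugin pipeline -- plugin names are made up.
// Each plugin receives the audio context, the descriptor, and the object
// built so far, and returns a refined version of it.
function assemble (context, descriptor, plugins) {
  return plugins.reduce(
    (instrument, plugin) => plugin(context, descriptor, instrument),
    {}
  )
}

// Supporting an alternate engine (AudioWorklet, Genish, ...) would then
// just mean adding another plugin to the list, e.g.:
// assemble(ctx, patch, [createNodes, connectNodes, genishCompiler])
```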

About the scheduler: I didn't plan to build any kind of scheduler (I've built a couple and I know they need work) but instead to use the one the Web Audio API already provides. The main purpose of a scheduler here is to mutate the node object in a descriptive way... I think the ideas you're suggesting (tracks and more advanced stuff) are out of scope, but they could be implemented on top of waasm if desired.
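As a minimal sketch of that approach (plain Web Audio API usage, not waasm-specific code): events scheduled against the same `AudioContext` clock are sample-accurate relative to each other, which also answers the lockstep question above.

```js
const ctx = new AudioContext()
const t0 = ctx.currentTime + 0.1 // small lead time before playback begins

function playNote (freq, start, duration) {
  const osc = ctx.createOscillator()
  const amp = ctx.createGain()
  osc.frequency.setValueAtTime(freq, start)
  amp.gain.setValueAtTime(0.3, start)
  amp.gain.linearRampToValueAtTime(0, start + duration)
  osc.connect(amp).connect(ctx.destination)
  osc.start(start)
  osc.stop(start + duration)
}

// Two independent "tracks", each with its own note onsets, stay in
// lockstep because both are scheduled relative to the same t0.
const track1 = [0, 0.5, 1.0]
const track2 = [0, 0.25, 0.5, 0.75, 1.0]
track1.forEach(t => playNote(440, t0 + t, 0.4))
track2.forEach(t => playNote(660, t0 + t, 0.2))
```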

The other leg of the project is the node descriptor. I think it's important (as in the AudioWorklet examples above) that you can obtain a detailed descriptor of the node you just created, so you can work with new and not-yet-existing nodes programmatically.
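For reference, the parameter descriptor syntax from the linked AudioWorklet examples looks roughly like this (a simplified sketch in the style of those examples, not code from this project):

```js
// An AudioWorklet processor declares its parameters statically, so a host
// library can introspect a node it has never seen before.
class CrusherProcessor extends AudioWorkletProcessor {
  static get parameterDescriptors () {
    return [
      { name: 'bitDepth', defaultValue: 12, minValue: 1, maxValue: 16 }
    ]
  }
  process (inputs, outputs, parameters) {
    // read parameters.bitDepth[0] and process samples here
    return true // keep the processor alive
  }
}
registerProcessor('crusher-processor', CrusherProcessor)
```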

Anyway, thanks a lot for the feedback!