jonlab opened 8 years ago
I think it is fine to only synchronise parameters. The other thing which seems important to me is a controllable envelope (but maybe this is another entry?).
Another question is the level of user access at runtime. Do we want users to be able to instantiate objects and change parameters at runtime (during "play mode"), or is this something that must be designed in Unity and exported (and not controllable at runtime)?
At the moment, I would advise not to add real-time audio control unless it has been preset in Unity. I suppose this feature will be expected at some point, but maybe later.
Roland
On 9 Dec 2015, at 19:17, Jonathan Tanant notifications@github.com wrote:
> Another question is the level of user access at runtime. Do we want users to be able to instantiate objects and change parameters at runtime (during "play mode"), or is this something that must be designed in Unity and exported (and not controllable at runtime)?
Maybe this will be clearer with an example. Let's say we have an oscillator. This is a component (NAAudioSynthOscillator) with the following parameters (this is what I am talking about):
- duration (s)
- frequency (Hz)
- waveform (sin, cos, square, triangle, sawtooth)
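To make the parameter set concrete, here is a minimal sketch (in Python, for illustration only) of the per-sample math such an oscillator could implement for each of the waveforms listed above. The function names and the rendering loop are hypothetical, not the actual component API:

```python
import math

def oscillator_sample(t, frequency, waveform):
    """Oscillator value at time t (seconds) for the given waveform."""
    phase = (t * frequency) % 1.0  # normalized phase in [0, 1)
    if waveform == "sin":
        return math.sin(2 * math.pi * phase)
    if waveform == "cos":
        return math.cos(2 * math.pi * phase)
    if waveform == "square":
        return 1.0 if phase < 0.5 else -1.0
    if waveform == "triangle":
        return 4.0 * abs(phase - 0.5) - 1.0
    if waveform == "sawtooth":
        return 2.0 * phase - 1.0
    raise ValueError("unknown waveform: " + waveform)

def render(duration, frequency, waveform, sample_rate=48000):
    """Render the whole buffer for the component's (duration, frequency, waveform)."""
    n = int(duration * sample_rate)
    return [oscillator_sample(i / sample_rate, frequency, waveform)
            for i in range(n)]
```

In Unity this per-sample loop would live in something like `OnAudioFilterRead`, but the math is the same.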
A user is able to create an object IN UNITY and fill in the parameters (let's say he wants a 440 Hz, 10 s sine). He can then turn this object into a New Atlantis object and put it in a space, possibly as several instances of the same object. But without access to the parameters at runtime, he will not be able to change it to anything other than a 440 Hz, 10 s sine. We would actually have 2 things to take care of:
- serialization of parameters to the database (the objects are persistent, and for now the only parameters we store in the database are the position and rotation of the object, and that's it);
- (graphical) interface: how do you change the parameters? (buttons, 3D, 2D, text commands, ...?)
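The serialization side could be as simple as storing the synth parameters next to the transform data we already persist. A hypothetical sketch (the field names and JSON encoding are assumptions; the real database schema is not defined here):

```python
import json

def serialize_object(position, rotation, synth_params):
    """Flatten one New Atlantis object to a JSON blob for the database."""
    record = {
        "position": position,   # already stored today
        "rotation": rotation,   # already stored today
        "synth": synth_params,  # new: per-component synthesis parameters
    }
    return json.dumps(record)

def deserialize_object(blob):
    """Restore the record saved by serialize_object."""
    return json.loads(blob)
```

A round trip would then restore the oscillator exactly as the user configured it, e.g. `serialize_object([0, 0, 0], [0, 0, 0, 1], {"frequency": 440, "duration": 10, "waveform": "sin"})`.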
Maybe we can start with something simple, close to what was implemented with the Audio Trunk: a small self-contained component with a simple GUI rendered in the scene. I did a small test with the NAAudioSynthOscillator:
This script can be used in 2 different ways:
Pushing things further, we could, for example, integrate Csound-like sound synthesis script parsing...
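As a toy illustration of what "Csound-like script parsing" could mean: one opcode per line with key=value arguments, and `;` starting a comment as in Csound. The syntax below is invented for this sketch, not a proposal:

```python
def parse_synth_script(text):
    """Parse a tiny synth script into (opcode, params) instructions.

    Numeric values become floats; everything else stays a string.
    """
    instructions = []
    for line in text.splitlines():
        line = line.split(";")[0].strip()  # strip ';' comments
        if not line:
            continue
        opcode, *args = line.split()
        params = {}
        for arg in args:
            key, _, value = arg.partition("=")
            try:
                params[key] = float(value)
            except ValueError:
                params[key] = value
        instructions.append((opcode, params))
    return instructions
```

So a script line like `osc freq=440 dur=10 wave=sin` would map directly onto the NAAudioSynthOscillator parameters above.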
And we could do GUI toggling with a special tool that would reveal the GUI (maybe we don't want all the GUIs to be active at the same time).
Envelope: ASR [ % % % ]
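The ASR (attack / sustain / release) envelope shown in the GUI mock-up is just a gain in [0, 1] as a function of time. A minimal sketch, with illustrative parameter names:

```python
def asr_gain(t, attack, sustain_time, release):
    """ASR envelope gain at time t (seconds): ramp up, hold, ramp down."""
    if t < 0:
        return 0.0
    if t < attack:
        return t / attack                                    # attack: linear ramp up
    if t < attack + sustain_time:
        return 1.0                                           # sustain: hold full level
    if t < attack + sustain_time + release:
        return 1.0 - (t - attack - sustain_time) / release   # release: ramp down
    return 0.0
```

Multiplying each oscillator sample by this gain would give the controllable envelope mentioned at the top of the thread.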
The audio synthesis components still need to be attached to an AudioSource, and any AudioSource-triggering component can be used. So you can make a 440 Hz sine composite object by attaching:
- an AudioSource, properly configured (rolloff, volume, ...);
- an NAAudioSynthOscillator, properly configured;
- an NAPlayOnCollide to make the AudioSource play on a collision.
What do we need? Noise, additive synthesis, FM, ...? Do we guarantee deterministic synthesis (we only synchronize parameters), or do we synchronize the audio once it is generated?
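The deterministic option matters most for noise: if the generator is seeded from a synchronized parameter, every client renders a bit-identical buffer, so only the seed needs to travel over the network. A sketch of the idea (names are illustrative):

```python
import random

def noise(duration, seed, sample_rate=48000):
    """Seeded white noise: same seed in -> same samples out, on every client."""
    rng = random.Random(seed)  # a private, reproducible RNG stream
    return [rng.uniform(-1.0, 1.0) for _ in range(int(duration * sample_rate))]
```

Without a shared seed, each client would hear different noise, and only synchronizing the generated audio itself would keep them consistent.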