Closed greggman closed 10 years ago
I've been planning to rewrite it to be compatible with AudioContext and to make it streaming, but haven't yet taken the time for it. AudioContext still doesn't work nicely on many platforms, so making it a dependency wouldn't be a good idea; but making it optional probably would. http://caniuse.com/audio-api
The gist of what I'm planning to do is this: https://gist.github.com/egonelbre/9990ae0520e4c09a9b0e . I'm not sure whether making a separate generator prototype will be faster; I would have to test that. Anyway, this would allow it to be used more nicely with the AudioContext without explicitly depending on it.
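To make the idea concrete, here's a minimal sketch of what a pull-style generator could look like (the names `makeSineGenerator` and `generate` are made up for illustration, not jsfx's actual API): instead of rendering the whole effect in one call, the generator fills whatever buffer it is handed, so the caller can either render everything upfront or feed small chunks to an AudioContext node.

```javascript
// Hypothetical pull-style generator: keeps its own phase state and
// fills any Float32Array it is given with the next samples.
function makeSineGenerator(frequency, sampleRate) {
  let phase = 0;
  const step = 2 * Math.PI * frequency / sampleRate;
  return {
    // Fills `out` with the next samples and returns how many were written.
    generate: function (out) {
      for (let i = 0; i < out.length; i++) {
        out[i] = Math.sin(phase);
        phase += step;
      }
      return out.length;
    }
  };
}

// Render upfront: one big buffer, same as the current behavior.
const gen = makeSineGenerator(440, 44100);
const all = new Float32Array(44100);
gen.generate(all);

// Or stream: the same generator filling small chunks on demand,
// e.g. from an AudioContext processing callback.
```

Because the generator carries its state between calls, two 512-sample chunks are bit-identical to the first 1024 samples of an upfront render.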
I have some ideas to make it modular as well, i.e. via runtime code generation: https://gist.github.com/egonelbre/9990ae0520e4c09a9b0e#file-gistfile2-js
Basically: define all the modules and, based on the definition, generate the "makeGenerator" function. It could be easily extended to make different modulators for different values.
Of course, in the process I would try to clean up the code.
But, yeah, I'm not quite sure when I'll get around to doing that... alternatively, if you're interested, I'm willing to pull those changes.
That's cool.
For my needs (games), streaming doesn't seem desirable. I need 100% of the CPU for myself, so I'd prefer to generate the sounds upfront.
Streaming would be cool as an option though; you could do bytebeat stuff:
https://github.com/greggman/html5bytebeat
:)
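For context, bytebeat is exactly the kind of thing a streaming generator enables: each output byte is just a formula of the sample index `t`, so the stream is endless and can't be rendered "upfront". A tiny sketch (the helper name and the particular formula are illustrative, not taken from html5bytebeat):

```javascript
// Renders `length` samples of a bytebeat formula into an 8-bit buffer.
// A streaming generator could instead hand these chunks to an
// AudioContext callback forever.
function renderBytebeat(formula, length) {
  const out = new Uint8Array(length);
  for (let t = 0; t < length; t++) {
    out[t] = formula(t) & 0xFF; // bytebeat output is the low 8 bits
  }
  return out;
}

// A classic minimal one-liner formula.
const samples = renderBytebeat(t => t * (t >> 10) & 42, 8000);
```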
It's always possible to generate the whole sound with streaming, but not the reverse. The main reason for streaming is that generation currently takes ~25ms; this means that if you want to randomly generate a sound while a game is running, you would basically blow your CPU budget, but with streaming you can do it in increments.
tl;dr: streaming allows random sound generation while the game is running.
I'm not sure whether you want this or not, given that it's pretty simple.
I did test it, though, by editing jsfxgui with the following patch: