zz85 opened this issue 12 years ago
Uh, interesting :) Having something like WebAudio API passing the volume to the shader could be fun too.
tada! I created a new branch if you'd like to experiment with it.
https://github.com/zz85/glsl-sandbox/tree/audio
For now, you need to click the "Play Music" button and put in the path to a music file hosted on your server. It also falls back on dsp.js for Firefox.
Firefox and Chrome levels may seem a little different, so for perfect consistency we could use audiolib + madlib for JavaScript-based mp3 decoding... I'll upload a version to my webhost so you can play with it quickly :)
okay, here we go!
http://jabtunes.com/labs/glslaudiosandbox/minecraft.html <-- minecraft + audio
http://jabtunes.com/labs/glslaudiosandbox/ <-- default example + audio
:)
also just thinking, potentially could use some annotation like
/*#music('path')*/
to make music play in glslsandbox gallery. okay, heading to sleep! (^^)/~~~
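A minimal sketch of how the gallery could pull that annotation out of the shader source — the function name and exact regex here are my own guess, nothing that exists in the sandbox yet:

```javascript
// Hypothetical helper: extract the music path from a /*#music('...')*/
// annotation in the shader source. Returns null when there is no annotation.
function parseMusicAnnotation(shaderSource) {
  var match = shaderSource.match(/\/\*#music\('([^']*)'\)\*\//);
  return match ? match[1] : null;
}
```

e.g. `parseMusicAnnotation("/*#music('tracks/loop.mp3')*/ void main() {}")` returns `"tracks/loop.mp3"`, and shaders without the annotation just return null so the gallery stays silent.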
Oh wow, that adds a completely new dimension to the sandbox ;).
Which reminds me - I was thinking maybe we could also have uniforms for mouse button events (I often find myself unconsciously clicking on these demos).
It should be relatively simple to implement and would add more interactivity options (especially for these crazy games that have started to pop up).
@zz85 awesome stuff! will think on how to integrate in a smooth way.
@alteredq yeah, I was thinking that too, especially a mousedown boolean, for people who try to do drawing stuff.
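The tracking side could look something like this (the state object and names are my own sketch; the sandbox would feed it to the shader next to the existing mouse uniform each frame):

```javascript
// Track mouse position and button state so they can be uploaded as uniforms,
// e.g. gl.uniform1i(mouseDownLoc, state.down ? 1 : 0) once per frame.
// width/height are passed in so the handler stays independent of the DOM.
function createMouseState(width, height) {
  var state = { down: false, x: 0.5, y: 0.5 };
  state.handle = function (event) {
    if (event.type === 'mousedown') state.down = true;
    else if (event.type === 'mouseup') state.down = false;
    else if (event.type === 'mousemove') {
      state.x = event.clientX / width;
      state.y = 1.0 - event.clientY / height; // GL convention: origin bottom-left
    }
  };
  return state;
}
```

Wiring it up would just be `canvas.addEventListener('mousedown', state.handle)` and friends.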
Very cool :)
Talking of new input options, http://learningthreejs.com/blog/2012/02/07/live-video-in-webgl/ shows how to use WebRTC to get live camera input. From messing around with Paragraf app on the iPhone I'm sure this would be really fun...
Since you guys are considering new mouse activity on this thread, take a look at my pan/zoom demo:
http://warm-journey-1887.heroku.com/e#21.4
Click "hide code" and use the left mouse button to pan, right mouse to zoom the fractal. The code for this is on my branch in GitHub. I'm pretty sure it won't break any existing shaders, but I need someone to test on AMD cards. It's been tested on nVidia and Intel hardware.
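Not having read the branch, I'd guess the bookkeeping behind that pan/zoom amounts to something like this (variable names and the zoom-about-cursor choice are my own sketch, not necessarily what the branch does): the shader samples at `center + (fragCoord - 0.5 * resolution) / zoom`, and the JS side keeps that view consistent while dragging.

```javascript
// Dragging by N screen pixels moves the view by N / zoom surface units,
// so panning feels constant-speed at any zoom level.
function pan(view, dxPixels, dyPixels) {
  view.centerX -= dxPixels / view.zoom;
  view.centerY -= dyPixels / view.zoom;
}

// Zoom by `factor`, keeping the surface point under the cursor (px, py)
// fixed on screen, which is what makes right-drag zoom feel natural.
function zoomAt(view, factor, px, py, resX, resY) {
  var sx = view.centerX + (px - 0.5 * resX) / view.zoom;
  var sy = view.centerY + (py - 0.5 * resY) / view.zoom;
  view.zoom *= factor;
  view.centerX = sx - (px - 0.5 * resX) / view.zoom;
  view.centerY = sy - (py - 0.5 * resY) / view.zoom;
}
```

Exposing `center` and `zoom` as two extra uniforms shouldn't break existing shaders, since shaders that never reference them compile unchanged.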
Audio uniforms would be nice. Also brings to mind gpu-based audio filters, but that's a whole different animal...
Yes, I also like the idea of audio filters. We would just need an output audio array of size resolution.x*resolution.y and a sample rate related to time; filling that every frame would give us a nice sample rate. We could then also generate sounds, e.g. a simple sine-wave-based chord:
float audio_time = samplerate * time; // sample index at the start of this frame
float audio_out_pos = gl_FragCoord.x * resolution.y + gl_FragCoord.y;
float audio_pos = (audio_out_pos + audio_time) / samplerate; // seconds
float freq = 440.0 * 2.0 * PI; // A4 in radians per second
int i = int(audio_out_pos); // GLSL arrays need an integer index
audio_out[i] = sin(audio_pos * freq); // chord root
audio_out[i] += sin(audio_pos * freq * pow(2.0, 3.0 / 12.0)); // chord third
audio_out[i] += sin(audio_pos * freq * pow(2.0, 7.0 / 12.0)); // chord fifth
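For sanity-checking the math, here is the same chord rendered on the CPU in JS (function name and normalization are my own; a real GPU version would read the shader's output array back with gl.readPixels instead of recomputing it, and the buffer would then feed a Web Audio processing node):

```javascript
// Fill `buffer` with an A-major chord starting at sample `startSample`.
// Dividing the sample index by sampleRate converts it to seconds, and the
// sum of three sines is scaled by 1/3 to stay inside [-1, 1].
function renderChord(buffer, sampleRate, startSample) {
  var freq = 440.0 * 2.0 * Math.PI; // A4 in radians per second
  for (var i = 0; i < buffer.length; i++) {
    var t = (startSample + i) / sampleRate; // seconds
    buffer[i] = (Math.sin(t * freq) +                           // root
                 Math.sin(t * freq * Math.pow(2, 3 / 12)) +     // third
                 Math.sin(t * freq * Math.pow(2, 7 / 12))) / 3; // fifth
  }
  return buffer;
}
```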
This has been on my (and perhaps others') mind, and it would probably take a day to integrate, but I'm adding it here so I might get to it sometime ;)
The idea is to run a spectrum analyzer on some background music and pass in the dynamics as an audio uniform with a value between 0 and 1. I initially thought of using the audio wrapper I wrote, https://github.com/zz85/audiokeys.js, but perhaps we could use the SoundCloud or EchoNest API instead (which is probably easier to host on Heroku, while also providing music in the sandbox :) )?
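The analysis half is standard Web Audio (an AnalyserNode between the source and destination, then `analyser.getByteFrequencyData(bins)` each frame); reducing that to the 0..1 uniform could be as simple as an average over the byte bins — the function and uniform names below are my own assumptions:

```javascript
// Collapse an analyser's byte frequency bins (each 0..255) into one
// overall level in [0, 1], suitable for gl.uniform1f(audioLoc, level).
function averageLevel(bins) {
  if (bins.length === 0) return 0;
  var sum = 0;
  for (var i = 0; i < bins.length; i++) sum += bins[i];
  return sum / (bins.length * 255);
}
```

Splitting the bins into a few bands (bass/mid/treble uniforms) instead of one average would probably make for more interesting shaders, at the cost of a slightly bigger uniform surface.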