nomelif / Audionodes

Audio generation in Blender nodes

Oscillator synchronization #21

Open ZackMercury opened 5 years ago

ZackMercury commented 5 years ago

I want these two oscillators to have the same phase offset so that they're synchronized to form a chord. Both phase offsets are set to 0, yet their playback is not synchronized at all.

[screenshot of the node setup]

Here's what I'm doing, just some fun experiments. https://www.youtube.com/watch?v=EKgEFXkxCBg

ollpu commented 5 years ago

Cool!

As you might've guessed, this happens because the clocks were started at different times and there's no way to reset them. A simple solution would be adding a global "Reset" button which resets everything, or do you have some other solution in mind?
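For anyone following along, here's a minimal sketch (plain Python, not the actual backend code) of why this happens: each oscillator integrates its own phase accumulator starting from whenever it was created, so two oscillators with identical settings still end up at different phases unless something resets the accumulators together.

```python
import math

SAMPLE_RATE = 44100

class Oscillator:
    """Toy phase-accumulator oscillator (illustration only)."""

    def __init__(self):
        self.phase = 0.0  # starts accumulating whenever the node is created

    def next_sample(self, freq):
        out = math.sin(2 * math.pi * self.phase)
        self.phase = (self.phase + freq / SAMPLE_RATE) % 1.0
        return out

    def reset(self):
        # A global "Reset" button would call this on every oscillator
        # at the same moment, realigning their phases.
        self.phase = 0.0
```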

Your demo also gives me some non-critical node ideas.

ZackMercury commented 5 years ago

Maybe each oscillator needs some time variable input, so that you can synchronize them and work with time more precisely. By default it could be taken from the first one added in the node setup, or, if nothing is plugged in, it could be generated internally like it works currently.
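A sketch of that idea (hypothetical, not the actual node code): with a shared time input t in seconds, the oscillator output becomes a pure function of t, so any two oscillators fed the same signal are in phase by construction.

```python
import math

def osc_sample(t, freq, phase_offset=0.0):
    # Stateless oscillator: the output depends only on the shared time
    # input, so nodes driven by the same t can never drift apart.
    return math.sin(2 * math.pi * (freq * t + phase_offset))
```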

ZackMercury commented 5 years ago

Also, time could be plugged into the noise node and everything else that creates waves.

ZackMercury commented 5 years ago

Also, this way you could use anything as time, including the "Value" node with a keyframed time value. That way, you can render the oscillators in exactly the order and at exactly the speed you need (beyond just the frequency control).

ZackMercury commented 5 years ago

And make repeating patterns with noise (by applying modulo to the time input).
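Something like this, for instance (a hypothetical hash-based noise, chosen so that the same time input always produces the same value and can therefore be looped):

```python
import math

def noise(t):
    # Deterministic "white noise": hash the time input so that equal t
    # always yields the same value in [-1, 1). (Classic sine-hash trick,
    # not Audionodes' actual noise node.)
    x = math.sin(t * 12.9898) * 43758.5453
    return (x - math.floor(x)) * 2.0 - 1.0

def looped_noise(t, loop_length=1.0):
    # Modulo on time makes the pattern repeat every loop_length seconds.
    return noise(t % loop_length)
```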

ZackMercury commented 5 years ago

The only problem I see with this is that keyframing time directly is a bad idea, because you usually render at 24 fps, which is extremely coarse compared to the sound waves themselves.
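To put numbers on that: at a 44.1 kHz sample rate, one 24 fps frame spans 44100 / 24 = 1837.5 audio samples, and a 440 Hz tone completes about 18 full cycles between consecutive keyframe updates.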

ZackMercury commented 5 years ago

Maybe you could keyframe when you start the clock? Like, make a playback speed slider which you can keyframe. Or hmm...
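One way to read that idea (a hypothetical sketch): keyframe a speed multiplier rather than time itself, and let the backend integrate it once per sample, which keeps the clock monotonic and sample-accurate.

```python
SAMPLE_RATE = 44100

def advance_clock(clock, speed):
    # speed is the keyframed playback-speed multiplier (1.0 = real time,
    # 0.0 = paused). Integrating it per audio sample avoids the 24 fps
    # staircase you'd get from keyframing the time value directly.
    return clock + speed / SAMPLE_RATE
```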

ollpu commented 5 years ago

I've also thought about some kind of time signal system, but tying that to Blender's timeline seems really intriguing. I hadn't really thought of that.

Ideally keyframed animations would be sampled in the backend (i.e. at full audio sampling frequency), such that the keyframe points and maybe even curve types are copied to the backend. This would also mean that the sound could be exactly the same each time, instead of having the inevitable jitter when the values are updated from Blender continually. Not to mention the ability to bounce the audio (render to a file faster than real-time). Maybe play a MIDI file according to the time as well? I dream of a piano roll editor inside Blender, but that's probably not going to happen without an external window.
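A sketch of the per-sample evaluation that implies (assuming keyframes are copied to the backend as sorted (time, value) pairs and interpolated linearly; handling Blender's Bezier curve types would additionally require copying the handle data):

```python
import bisect

def sample_keyframes(keys, t):
    """Evaluate a keyframed value at time t, called once per audio sample.

    keys: sorted list of (time, value) pairs copied from an F-Curve.
    """
    times = [time for time, _ in keys]
    i = bisect.bisect_right(times, t)
    if i == 0:
        return keys[0][1]   # before the first keyframe
    if i == len(keys):
        return keys[-1][1]  # after the last keyframe
    (t0, v0), (t1, v1) = keys[i - 1], keys[i]
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

Evaluating this at the full audio rate in the backend, instead of pushing values from Blender's UI thread, is what would make the output deterministic and bounceable.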

The Oscillator node is fairly cluttered already, so I was thinking this externally clocked version would be a separate node, though I'm not sure if that's a good idea. I feel like live MIDI-based stuff is still the main paradigm with Audionodes. But hey, this timeline paradigm could turn out really fun.

I guess having a node that exposes the current Blender playback position would be a good start then.
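The frontend half of that node would be tiny; something like this (the bpy properties are real, the surrounding node plumbing is not shown):

```python
import bpy

def blender_playback_time():
    # Current timeline position in seconds.
    scene = bpy.context.scene
    fps = scene.render.fps / scene.render.fps_base  # effective frame rate
    return scene.frame_current / fps
```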