nomelif / Audionodes

Audio generation in Blender nodes

Polyphony #7

Closed: zeffii closed this issue 5 years ago

zeffii commented 7 years ago

What's the plan for polyphony? Will the frequency socket take a list of frequencies and volumes, or something along those lines? I bet it would be useful to be able to decompose the socket content to get Hz and previously passed amplitudes. Do you have design docs about this?

zeffii commented 7 years ago

The callback does currently assume single signals. That said, time and chunk length would probably be safe to assume the same for all signals entering and exiting a node.

nomelif commented 6 years ago

@zeffii (Putting this up here for lack of a better contact)

Hello. We have hit a wall with the current implementation. We have been thinking about a redesign: move all the real-time workload into C++ and make it a modular (read: works without Blender) Python library. This is an occasion to rethink the node code: the current solution of carrying logic and state in the nodes causes Blender to crash when the nodes are mutated while we rebuild our internal graph. Could you give us any words of wisdom on how to design the nodes?
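
To make the "works without Blender" part concrete, here is a minimal sketch of the shape we have in mind (hypothetical names, nothing like final code): the library's node objects own their real-time state, and the Blender node tree would only mirror them.

```python
import numpy as np

# Hypothetical sketch: a node object that owns its own DSP state and is
# usable entirely without Blender. The Blender node tree would only
# mirror a graph of these, never carry real-time state itself.
class Sine:
    def __init__(self, rate=44100):
        self.rate = rate
        self.phase = 0.0  # real-time state lives here, not in a Blender node

    def process(self, freq):
        # freq: one frequency value per sample in the chunk
        phases = self.phase + np.cumsum(freq / self.rate)
        self.phase = phases[-1] % 1.0
        return np.sin(2 * np.pi * phases)

# Works from a plain Python script, no Blender required:
osc = Sine()
chunk = osc.process(np.full(512, 440.0))
```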

zeffii commented 6 years ago

I'm glad to hear this, but I cannot reply in full tonight.

nomelif commented 6 years ago

Sure, no pressure.

zeffii commented 6 years ago

Detaching Blender from the AudioNodes code is sane, especially with chunked real-time audio.

"the current solution of carrying logic and state in the nodes causes Blender to crash when the nodes are mutated while we rebuild our internal graph"

Mutated nodes meaning what? Adjusting a slider, or deleting and re-registering a node class?

nomelif commented 6 years ago

A node gets deleted when we attempt to read it (from another thread); somehow this does not raise an exception but blows Blender up.

nomelif commented 6 years ago

(By node I mean an individual node in a tree.)

nomelif commented 6 years ago

But that is not really the point: we know the problem stems from asynchronous access to the nodes, which Blender does not guarantee to be safe. I brought it up to highlight the flaws in the current design and to motivate the redesign. Given your experience with node add-ons and your knowledge of the project, we wanted to know if you had anything to say about how we should go about it.

zeffii commented 6 years ago

OK, I didn't realize you were using multiple threads. There's no clean multithreading with Python in Blender (there's some discussion about it on Blender StackExchange).

zeffii commented 6 years ago

I will need to look closer at the current AudioNodes code, but you may want to study the architecture of SuperCollider, which separates the language (the nodes, in this case) from the engine, talking to it through control messages (OSC, or direct sockets).
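
Something in the spirit of this rough sketch (hypothetical names, just to show the shape of the split): the UI side only posts control messages to a queue, and the engine drains the queue between chunks, so a slider tweak can never race against the audio thread.

```python
import queue

# Rough sketch of the language/engine split (hypothetical names):
# the UI thread never touches engine state, it only posts messages.
commands = queue.Queue()

def set_param(node_id, name, value):
    # Safe to call from Blender's UI thread.
    commands.put((node_id, name, value))

def render_loop(graph, render_chunk):
    while True:
        # Apply pending control messages between chunks, so node
        # mutation can never race against audio processing.
        while not commands.empty():
            node_id, name, value = commands.get()
            graph[node_id][name] = value
        render_chunk(graph)  # compute one chunk of audio
```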

nomelif commented 6 years ago

We are going to nuke almost everything; there is little to no point in reading our existing code. As for SuperCollider, we will make sure to take a look.

nomelif commented 6 years ago

So here we are. We have finished a working prototype of the C++ port for Mac and Linux. I wrote a longer post on the subject here: https://github.com/nomelif/Audionodes/issues/10

ollpu commented 6 years ago

Oh, and now, finally, we have a proper plan for polyphony and a working implementation of it.

You know, most sane synthesizers implement polyphony by isolating voices from each other and evaluating the whole synthesis stack on each of them separately. But that has downsides, such as the fact that you can only connect one MIDI input to the whole network. So we wanted to take a different approach. Aside from seeming easier to implement way back when with Numpy (simple parallelisation), we wanted to make the system modular enough that you could plug multiple MIDI inputs into the network and mix them as needed.
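
To picture it, a signal can be thought of as a 2-D array of shape (channels, chunk_length), with every node vectorized over the channel axis. A toy sketch (not our actual code):

```python
import numpy as np

# Toy illustration: polyphony as rows of a 2-D signal array.
def gain(signal, volume):
    # One evaluation handles every voice at once.
    return signal * volume

chunk_length = 512
mono = np.zeros((1, chunk_length))  # e.g. a lone oscillator
poly = np.zeros((3, chunk_length))  # e.g. three held MIDI notes

print(gain(mono, 0.5).shape)  # (1, 512)
print(gain(poly, 0.5).shape)  # (3, 512): same node, three voices
```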

So for each chunk, each node is evaluated exactly once (as long as it is directly or indirectly connected to a sink). It takes in every channel of polyphony and processes them all at once. The way we did this in the old Python/Numpy implementation had a lot of issues. Anything that wasn't a MIDI input would just output a signal with one channel. Then, when a node received a monophonic signal on one socket and a polyphonic signal on the other, it would broadcast the mono signal to the appropriate size. But this often only worked correctly when the input sockets were plugged in the right way around!

The elephant in the room, though, was state: some nodes (such as oscillators) need to store state that persists into the next chunk, separately for each channel of input. Okay, fine, just store the state. But what happens when the number of channels changes (a new note is pressed or released)? The system had to be able to correlate old state with the channels it is currently receiving, no matter how the channels changed. Our Python implementation did this horribly: for each voice we took a timestamp of when it started, and that served as a unique identifier. If a node needed to store state, it had to implement the Numpy array shuffling itself. Not to even mention trying to mix different MIDI inputs together.
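
To illustrate those two pain points, here is a rough sketch (made-up helper names, not the old code verbatim):

```python
import numpy as np

# Pain point 1: broadcasting a mono signal up to the polyphonic size.
def broadcast_mono(mono, voices):
    # mono: (1, chunk_length) -> (voices, chunk_length)
    return np.broadcast_to(mono, (voices, mono.shape[1]))

# Pain point 2: re-aligning per-voice state when the voices change.
# Each voice is identified by its note-on timestamp; every stateful
# node had to do this shuffling itself.
def remap_state(state, old_ids, new_ids):
    new_state = np.zeros(len(new_ids))
    for row, vid in enumerate(new_ids):
        if vid in old_ids:
            new_state[row] = state[old_ids.index(vid)]
    return new_state

# One voice released, a new one pressed: state for id 3.20 is kept,
# id 1.15 is dropped, and new id 4.05 starts from zero.
print(remap_state(np.array([0.7, 0.2]), [1.15, 3.20], [3.20, 4.05]))
# -> [0.2 0. ]
```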

We thought long and hard about what would make for a sensibly behaving, yet modular system. We settled on the following: a node describes what "universes" of polyphony it takes in and spits out. The term "universe" is a bit hard to explain, but let me try. Some nodes, such as a MIDI input, create a new universe. Anywhere in the network where the same universe arrives from different sockets, we know we can safely mix them, since the channels are guaranteed to line up. If, however, we receive different universes on different sockets, the node gets to choose which socket it prioritizes, and all incompatible channels are flattened (all channels summed into one) and then broadcast to the current size of the chosen universe. A universe also describes how the channels changed: which notes were removed, and how many new ones were added. This (along with some helper functions) gives the node a simple interface for handling its own state.
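
For a rough feel of the interface, here is a sketch in Python (hypothetical names; the real engine is C++):

```python
import numpy as np

# Hypothetical sketch of the "universe" bookkeeping described above.
class Universe:
    def __init__(self, uid, channels, removed=(), added=0):
        self.uid = uid          # same uid => channels line up, safe to mix
        self.channels = channels
        self.removed = removed  # channel indices dropped since last chunk
        self.added = added      # channels appended since last chunk

def adapt_state(state, universe):
    # Helper a stateful node might use: drop state for released notes,
    # zero-initialize state for new ones, keeping rows aligned.
    keep = [i for i in range(len(state)) if i not in universe.removed]
    return np.concatenate([state[keep], np.zeros(universe.added)])

def flatten_into(signal, universe):
    # Incompatible universe on a socket: sum every channel into one,
    # then broadcast to the size of the chosen universe.
    mixed = signal.sum(axis=0, keepdims=True)
    return np.broadcast_to(mixed, (universe.channels, signal.shape[1]))
```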

Phew. I'll write in more (excruciating) detail once we have the time to make proper documentation.