maximecb / noisecraft

Browser-based visual programming language and platform for sound synthesis.
https://noisecraft.app
GNU General Public License v2.0
1.06k stars · 61 forks

Code generation for modules/groups (inlining) #19

Open · maximecb opened this issue 3 years ago

maximecb commented 3 years ago

The code in compiler.js currently doesn't handle modules, so we can't really use them in projects. To implement this, we should implement inlining, basically transforming the incoming graph until it contains no modules anymore and everything is flattened. This has to work recursively (modules inside of modules).
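To sketch the idea (the node and graph shapes below are hypothetical, not the project's actual format, and rewiring of edges that cross module boundaries is left out):

```javascript
// Hypothetical sketch of recursive module inlining. The node/graph
// shapes are made up for illustration; rewiring of edges that cross
// module boundaries is omitted to keep the sketch short.
function flattenGraph(graph)
{
    let nodes = [];

    for (let node of graph.nodes)
    {
        if (node.type != 'Module')
        {
            nodes.push(node);
            continue;
        }

        // Recurse first, so that modules nested inside modules
        // also get flattened
        let inner = flattenGraph(node.subgraph);

        // Prefix inner node ids with the module's id so they
        // remain unique after inlining
        for (let n of inner.nodes)
            nodes.push({ ...n, id: node.id + '/' + n.id });
    }

    return { nodes: nodes };
}
```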

mindplay-dk commented 1 week ago

You should probably only inline smaller modules, and/or modules with only a few instances?

For larger modules, a function call is probably worthwhile - since, otherwise, you're paying for it with potentially long compile times.

Very cool project btw! 😄👍

maximecb commented 1 week ago

Thanks :)

I'm hoping to increase performance of the audio thread by inlining things. The issue is that JS JITs don't optimize multiple return values AFAIK, so some other trick would have to be used such as storing the outputs on an object field, which is awkward.
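For what it's worth, the object-field trick might look roughly like this (names are illustrative, not from compiler.js): a node writes its outputs into fields of a preallocated state object instead of returning a tuple, so nothing is allocated per sample.

```javascript
// Illustrative only, not actual compiler.js output: a node with two
// outputs writes into preallocated fields rather than returning
// multiple values, which JS JITs don't optimize.
class SplitterState
{
    constructor()
    {
        this.out0 = 0;
        this.out1 = 0;
    }
}

function runSplitter(state, input)
{
    // No array/tuple allocated here; the JIT sees plain field stores
    state.out0 = input * 0.5;
    state.out1 = input * 2;
}

let st = new SplitterState();
runSplitter(st, 4);
// st.out0 is now 2, st.out1 is now 8
```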

mindplay-dk commented 1 week ago

Have you thought about generating WebAssembly bytecode instead?

I don't have any experience doing that myself, but I would expect it to be a whole lot faster than JS - you would get native 64-bit floating point operations, and you can use "arena" style memory management, avoiding allocations/deallocations at run-time entirely.

You might even consider adding a module with WASM source code input as well - for things that modules are less helpful for, such as FFTs, which could then be created in userland.

It's been a dream of mine for years, learning enough WASM to be able to build basically what you're building, but fast enough that it becomes a replacement for desktop products like Reaktor or PureData - but online, enabling users to build and share modules via community features etc.

(I wish I had the time or energy to get invested in this. 😅 I built a shoddy prototype of a tool of this type in Pascal, probably 30 years ago while I was in school, and actually used it to build a reverb effect that still exists in a few products today. I worked in music software for a few years, but not since the 2000s. I just love that it's even possible to build something like this on the web now, and it's so cool that you're actually building it!)

maximecb commented 1 week ago

NoiseCraft already gets native float64 arithmetic. Modern JS JITs are good enough to unbox the values, especially in cases like this because the code that's generated is very flat and dumb. This isn't the bottleneck. I could also make better use of Float64Array and optimize the code more. That being said, one of the things that's awkward with JS is that we can't realistically have more than one audio thread. AFAIK we also can't have a MIDI thread yet, and it would have been nice to have some custom UI rendering, but I don't think I could make this fast in JS.
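As a rough illustration of the Float64Array idea (the slot layout here is entirely made up, not actual generated code): the generated per-sample code can read and write fixed indices into one flat typed buffer instead of scattered object properties.

```javascript
// Made-up slot layout for illustration; not actual generated code.
// All signal values live in one flat Float64Array, so the per-sample
// code is just loads/stores at constant indices.
let signals = new Float64Array(4);

function tick(phase)
{
    signals[0] = Math.sin(phase);         // oscillator output
    signals[1] = signals[0] * 0.5;        // gain stage
    signals[2] = signals[1] + signals[3]; // mix with a feedback slot
    return signals[2];
}
```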

I have been thinking about maybe building a more advanced version of NoiseCraft in rust. I think I can make it very portable, but I don't know how easy it would be to make it run in a browser. That would make it more of a desktop/laptop app by default.

mindplay-dk commented 6 days ago

> NoiseCraft already gets native float64 arithmetic. Modern JS JITs are good enough to unbox the values, especially in cases like this because the code that's generated is very flat and dumb. This isn't the bottleneck. I could also make better use of Float64Array and optimize the code more.

is it able to make use of SIMD optimizations? for some high-level ops (e.g. adding or multiplying n numbers) SIMD can still be up to 3-4 times faster, from what I've seen of JS vs WASM benchmarks.
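for reference, the kind of high-level op I mean is a flat loop like this (a hand-written sketch, not NoiseCraft code) - with WASM SIMD you could add two f64 lanes per iteration via f64x2 instructions, whereas in JS you're relying on the JIT to auto-vectorize:

```javascript
// hand-written sketch, not NoiseCraft code: mixing two sample buffers.
// with WASM SIMD this loop could process two f64 lanes per iteration
// (f64x2.add); in JS we depend on the JIT to (maybe) vectorize it.
function mixBuffers(dst, a, b, n)
{
    for (let i = 0; i < n; i++)
        dst[i] = a[i] + b[i];
}
```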

> That being said, one of the things that's awkward with JS is that we can't realistically have more than one audio thread.

and in WASM, you could, right? this would definitely be a step up, I think?

it would also unblock the main thread for UI work, wouldn't it?

> AFAIK we also can't have a MIDI thread yet, and it would have been nice to have some custom UI rendering, but I don't think I could make this fast in JS.

from what I know, Figma does UI rendering on canvas with WebGL - offloading to a GPU, if available, so even better than just unblocking the main thread... but yeah, that is definitely not simple or easy to do.

> I have been thinking about maybe building a more advanced version of NoiseCraft in rust. I think I can make it very portable, but I don't know how easy it would be to make it run in a browser. That would make it more of a desktop/laptop app by default.

while that would be cool, there are definitely pros to being JS - mainly, a much larger audience can read the code. 🙂

also, I don't think you have a module yet with custom source code input, but that could also be a benefit to using JS - e.g. Reaper ships with a bunch of JS effects, but of course has some sort of hyper optimized custom run-time to make this possible, and it has the freedom to launch multiple instances of these in threads, and so on.

another potentially cool thing about the run-time being native JS is you could allow exporting the internal generated source code, and someone could take the generated code and use it in their own webaudio projects.

hmm, there is the AssemblyScript compiler, which can run in the browser - I wonder if this could be used to bridge the run-time generated code to WASM? you'd only need to add types. might be less disruptive to the project than trying to port it to WASM.

another approach might be web workers? you need a certain level of complexity before offloading anything to workers is worthwhile, but for example, as I recall, SynC Modular (a very old modular synth for PC) used the approach of offloading individual voices (in polyphonic mode) to separate threads - that's a reasonably simple approach, but of course only really useful for polyphonic synthesizers. (I'm not even sure if NC has polyphony?)

just throwing out ideas here. ☺️

maximecb commented 5 days ago

With respect to SIMD, I think there are other optimizations I would prioritize first if the goal was to increase performance.

> it would also unblock the main thread for UI work, wouldn't it?

No, in JS the main thread is tied to the browser event loop, and NoiseCraft already renders audio in a background thread.

> from what I know, Figma does UI rendering on canvas with WebGL - offloading to a GPU, if available, so even better than just offloading the main thread... but yeah, that is definitely not simple or easy to do.

If you use WebGL, you have to trust that different browsers will support it correctly and hope that your shaders will render the same across different implementations.

A lot of the design choices behind NoiseCraft are about keeping things as simple as possible. That's why it's been working reliably for years whereas a lot of other browser-based music software doesn't.

mindplay-dk commented 5 days ago

> No, in JS the main thread is tied to the browser event loop, and NoiseCraft already renders audio in a background thread.

the more I try to read up on this, the more perplexed I get. 😐

if I understand correctly now, the "audio rendering thread" they talk about in the AudioWorkletProcessor spec really refers to a single OS thread? every processor instance runs on the same, single thread?

for crying out loud. 😅

so they designed an audio API that is restricted to everything running in a single thread?

I mean, from what I could read in GitHub issues by the working group, they actually knew this, but apparently they were constrained by the browser and language design itself, so it's not that I'm trying to place blame here.

as for Workers and WebAssembly, it sounds like these were not designed for the kind of real-time scheduling that the Web Audio API requires - I've seen examples, and it's possible to offload to workers, but you're layering your own buffer over the internal web audio buffer, which adds latency. it sounds like that's not really a good way to go either.
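to make the latency point concrete: the worker approach means inserting a queue like this between the threads (a toy single-producer/single-consumer sketch - a real one would synchronize the indices with Atomics and handle overflow), and whatever capacity you give it is buffering added on top of Web Audio's own:

```javascript
// toy SPSC ring buffer over a SharedArrayBuffer; a real implementation
// would synchronize readIdx/writeIdx with Atomics and handle overflow.
// the queue's capacity is exactly the extra latency you pay for.
class RingBuffer
{
    constructor(capacity)
    {
        this.buf = new Float64Array(new SharedArrayBuffer(capacity * 8));
        this.capacity = capacity;
        this.readIdx = 0;
        this.writeIdx = 0;
    }

    push(x)
    {
        this.buf[this.writeIdx % this.capacity] = x;
        this.writeIdx++;
    }

    pop()
    {
        let x = this.buf[this.readIdx % this.capacity];
        this.readIdx++;
        return x;
    }
}
```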

so we're essentially stuck with a single thread until the web audio working group figures something out? 🥲

as for your idea to rebuild in Rust, well, this would give you a desktop version that runs great - but since the web version would run on WASM, it would be limited by the same constraints. where the compiled WASM module intersects with WebAudio, it will need to use a shared array buffer, same as any worker, meaning high latency... it would probably be fine for step sequencers and audio sculptures etc. but not great if you want to play your MIDI keyboard.

> A lot of the design choices behind NoiseCraft are about keeping things as simple as possible.

this is definitely a big part of the attraction for me - I am all about simple code we can read and understand. For example, I love the fact that you don't have an entire architecture of abstraction around block types - the "if this then that" model is so straightforward. there is nothing overwhelming in that file at all, the handling for each block type is mostly simple. It's great. 😄👍

ugh, I really thought WebAudio was more mature than this. 🫠