Open andredublin opened 9 years ago
The strategy here is to build a library of DSP components in https://github.com/Buzztrax/buzztrax/tree/master/src/lib/gst and use them in the actual elements, e.g.: https://github.com/Buzztrax/buzztrax/blob/master/src/gst/audio/simsyn.c
More ideas and planning: https://github.com/Buzztrax/buzztrax/blob/master/src/gst/audio/TODO.md
I understand the flexibility we'd get from an interpreted language, especially that people could just write and run a plugin without a compile step. But getting there is more complicated than it sounds. Buzztrax queries GStreamer to get a list of plugins. People have managed to write such plugins in languages like Python, but I am not aware of a JavaScript one. For now I want to make writing them in C (like the existing ones) as simple as possible. Another option could be to keep the code in a script-like config file and build the plugins on the fly from those configs. Although I admit I am not keen on writing a parser - maybe something like JSON is enough.
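To illustrate the config-file route, a plugin description in JSON might look something like the sketch below. All field names and component names here are invented for illustration; the actual schema would have to be designed around what the element registration code needs:

```json
{
  "name": "mysynth",
  "description": "simple monophonic synth built from shared DSP components",
  "properties": [
    { "name": "cut-off", "type": "double", "min": 0.0, "max": 1.0, "default": 0.8 }
  ],
  "components": ["oscillator", "filter"]
}
```

A generic element could read such a file at plugin load time, register the listed properties with GObject, and wire the named DSP components into its processing chain, so adding a new synth would not require writing or compiling C.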
@andredublin @ensonic In my opinion, JS, and the V8 case in particular, isn't a good choice for RT audio computing; Python is much better for that. V8 uses only one thread, and if you set a timer or something similar, V8 can interrupt your RT code, which means dropouts and other bad things. I wrote this thing: https://github.com/unclechu/node-jack-connector and I had to use dirty hacks for RT. If you need to write high-level RT audio code, Python is a better choice than JavaScript.
This idea is way out there, but I think it would be awesome to bind to V8 so that users can create VSTs via JavaScript or a subset of JavaScript. It might also allow us to ship the application in a node.js/webkit wrapper.
Let me know your thoughts.