tim-janik / anklang

MIDI and Audio Synthesizer and Composer
https://anklang.testbit.eu/
Mozilla Public License 2.0

Can backend and frontend be running on separate machines? #8

Closed. falkTX closed this issue 1 year ago.

falkTX commented 1 year ago

Hi there. Very interesting project! This is more of a question than a bug report or feature request.

Since you are using web stuff for the frontend, and I see some websocket related code, would you say it is possible to run the Anklang audio engine on one machine while controlling it from another (using a browser window)? This obviously breaks the use of plugin GUIs, but the rest should still work fine...?

Is this a supported configuration? Can we build and run the web stuff separately, so one can use a regular browser instead of Electron?

Thanks again.

tim-janik commented 1 year ago

Great question. Let me dig a bit into the architecture for that.

Synthesis Core: The synthesis engine (lib/AnklangSynthEngine) is written in C++ and implements the audio drivers, the CLAP interface, the synthesis threads, etc. Some of my local changes already implement autoplay and WAV/OPUS export, so the engine can run on its own in headless mode for playback or unit tests.

UI: On top of that, the engine exports its API (ase/api.hh) via JSON RPC over a websocket interface. The UI itself is implemented as HTML/CSS/JS files that are also served by the websocket server. This allows operating the UI in a local browser; the latest Firefox or Chrome should work fine for that. The UI currently consists of a number of files the browser loads at startup; after that, all communication with the engine goes through JSON RPC calls that the JS code initiates. In order for Anklang to "feel" like a normal DAW for its users, we install the "anklang" executable, which is actually an Electron binary that starts the synthesis engine and runs the UI. That is just one way to run the UI, however; you can also start lib/AnklangSynthEngine directly and browse the UI at http://127.0.0.1:1777/.
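To make the RPC transport concrete, here is a minimal sketch of what such an exchange could look like from a browser. The endpoint path, the message framing and the method name are illustrative assumptions; the real method names come from ase/api.hh:

```js
// Sketch only: Anklang's actual JSON RPC framing is defined by the engine,
// so the shape of this request is an assumption for illustration.
const ws = new WebSocket('ws://127.0.0.1:1777/');
ws.onopen = () => {
  // Hypothetical method name; the real ones are exported from ase/api.hh.
  ws.send(JSON.stringify({ id: 1, method: 'some_method', params: [] }));
};
ws.onmessage = (event) => {
  console.log('engine reply:', JSON.parse(event.data));
};
```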

Remote UI: For now, the websocket server only listens on localhost, but the UI could be used remotely via tunneling or adjusting the code to listen on all network interfaces. But that would still render and play the audio on the host that runs AnklangSynthEngine. What we might implement in the future is to use an Opus WebRTC stream as output instead of ALSA. That'd allow running the UI remotely and listening to the output on the machine that runs the web browser. That would probably be nice for demoing Anklang (but would not really work well for low-latency audio rendering needs). This also needs serious security considerations: since the UI can tell the engine to do file IO, the engine probably needs sandboxing (e.g. Docker) and resource limits if the port were to be opened up to the internet.
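As a concrete example of the tunneling route, a tiny TCP proxy on the engine host could expose the localhost-only port to other machines without changing Anklang itself. A minimal Node.js sketch, assuming the default port 1777 from above and a trusted LAN (the file IO security caveats apply in full):

```js
// Tiny TCP proxy: forwards connections from all interfaces to the engine's
// localhost-only port. Port numbers are assumptions based on the default
// mentioned above; only run this on a trusted network.
const net = require('net');

const proxy = net.createServer((client) => {
  const engine = net.connect(1777, '127.0.0.1'); // engine listens on localhost
  client.pipe(engine).pipe(client);              // forward bytes both ways
  client.on('error', () => engine.destroy());
  engine.on('error', () => client.destroy());
});

proxy.listen(1778, '0.0.0.0', () => {
  console.log('proxying 0.0.0.0:1778 -> 127.0.0.1:1777');
});
```

With that running, another machine on the network could browse http://ENGINE_HOST:1778/ instead (ENGINE_HOST being a placeholder for the engine machine's address).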

Remote Plugin UIs: The above scenario would only work with headless plugins. One avenue that could be explored is a setup that runs the plugin UIs in a VNC server and provides remote access to those via a web-browser-based VNC client. That'd probably provide a mixed user experience though, since plugin UIs are usually not optimized for remote rendering.

Remote API: Another thing that'd be possible (but is not yet implemented) is to use the API through just nodejs (remote or not) without the UI; that means all the interfaces and methods the UI normally uses are readily available for arbitrary script execution or REPLs. In a way, I'm making use of that already through the web browser DevTools JS console. If you have the console open, you can just type Ase.server to get a handle on the global singleton through which all of api.hh is accessible (but you need to await the result of all method calls, since they are implemented as async RPC calls). Also, api.hh is far from stable atm; if serious external uses for this interface were to arise, we would effectively need to follow proper library-like semantic versioning for our API.
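For instance, a DevTools session could look like the following sketch; only the Ase.server handle and the need to await each call are taken from the description above, the method name is a hypothetical stand-in:

```js
// Typed into the browser DevTools JS console while the Anklang UI is loaded.
const server = Ase.server;                     // global singleton exposing api.hh
const result = await server.some_api_method(); // hypothetical method name;
console.log(result);                           // every call is an async RPC
```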

Hope that helps; I'm also on #lad on IRC, btw.

falkTX commented 1 year ago

Yeah, very informative, thanks! Please write this down somewhere in the documentation; it is very useful information to have.

For this, though:

For now, the websocket server only listens on localhost, but the UI could be used remotely via tunneling or adjusting the code to listen on all network interfaces. But that would still render and play the audio on the host that runs AnklangSynthEngine. What we might implement in the future is to use an Opus WebRTC stream as output instead of ALSA. That'd allow running the UI remotely and listening to the output on the machine that runs the web browser

I actually want the reverse: the audio stays server-side and only the UI runs on the client side. This allows, for example, having the audio run on a remote media station on the local network while any system with a browser controls the running audio stream.

An alternative would be to run JACK with ffmpeg serving the audio through WebRTC. I already have that working at https://try.mod.audio/ (press "enable streaming" after loading).

Yet another alternative would be to build the server code with Emscripten, so it would run directly in the browser, but the performance is not that great. I have an example of that at https://cardinal.kx.studio/

Personally I am quite interested in the server-side-audio + remote-UI approach, but anyhow, it seems we can talk more on IRC.

tim-janik commented 1 year ago

Yeah, very informative, thanks! Please write this down somewhere in the documentation; it is very useful information to have.

OK. To a good extent, this is about design considerations rather than documentation, so I have added it to our wiki (not the manual): Architecture