In these days of ludicrous CPU core counts (gaming PCs, consoles) and even multiprocessor systems returning to the desktop (in DAWs, VEP slaves etc), it's rather restrictive to run audio in a single thread only. However, low latency realtime processing is inherently problematic as it is, and much more so when multithreading is involved, especially on general purpose operating systems!
While one could take one's chances with lock-free synchronization across "realtime" threads, maintaining full low latency operation across multiple threads is very difficult on anything but a proper RTOS, and should not be relied upon. At best, we might do something on the "main" thread (the one driving the audio interface) to reduce the impact of slave threads missing their deadlines, or at least skip any late parts, so that the entire audio mix doesn't glitch.
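A minimal sketch of the "skip late parts" idea (hypothetical names, not actual engine code): the main thread mixes in a slave's block only if the slave has flagged it as finished, so a missed deadline mutes just that branch for one block instead of glitching the whole mix.

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    float        samples[256];  /* one rendered block              */
    atomic_bool  ready;         /* set by the slave, cleared by us */
} SlaveBlock;

/* Called per block from the main (audio interface) thread. */
static void mix_slave(float *mix, SlaveBlock *sb, int frames)
{
    if (!atomic_load_explicit(&sb->ready, memory_order_acquire))
        return;                 /* late: skip this branch, no glitch */
    for (int i = 0; i < frames; ++i)
        mix[i] += sb->samples[i];
    atomic_store_explicit(&sb->ready, false, memory_order_release);
}
```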
To make multithreading more reliable, we can allow for more latency in specific parts of the processing graph, so that those parts can run on slave threads with additional buffering. That way, we can run multiple threads with the same "glitch tolerance" as a single thread, much as if some audio was rendered in deeply buffered background worker threads (see #345), but still have a defined "lowish" event latency for all parts of the graph.
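To illustrate the buffering/tolerance tradeoff (again with hypothetical names, not engine code): a single-producer/single-consumer FIFO of rendered blocks between a slave thread and the main mixer. With DEPTH blocks of slack, the slave can run late by almost DEPTH blocks before its branch drops out, at the cost of DEPTH blocks of extra event latency on everything routed through that branch.

```c
#include <stdatomic.h>
#include <string.h>

#define BLOCK_FRAMES 256
#define DEPTH        4          /* added latency = DEPTH blocks */

typedef struct {
    float       blocks[DEPTH][BLOCK_FRAMES];
    atomic_uint writes;         /* total blocks written (slave thread) */
    atomic_uint reads;          /* total blocks consumed (main thread) */
} BlockFIFO;

/* Slave thread: grab a slot to render into, or NULL if far enough ahead. */
static float *fifo_write_slot(BlockFIFO *f)
{
    unsigned w = atomic_load_explicit(&f->writes, memory_order_relaxed);
    unsigned r = atomic_load_explicit(&f->reads, memory_order_acquire);
    if (w - r >= DEPTH)
        return NULL;
    return f->blocks[w % DEPTH];
}

static void fifo_write_done(BlockFIFO *f)
{
    atomic_fetch_add_explicit(&f->writes, 1, memory_order_release);
}

/* Main thread: copy out the next block; 0 means the slave missed its deadline. */
static int fifo_read(BlockFIFO *f, float *out)
{
    unsigned w = atomic_load_explicit(&f->writes, memory_order_acquire);
    unsigned r = atomic_load_explicit(&f->reads, memory_order_relaxed);
    if (w == r)
        return 0;
    memcpy(out, f->blocks[r % DEPTH], sizeof(f->blocks[0]));
    atomic_fetch_add_explicit(&f->reads, 1, memory_order_release);
    return 1;
}
```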
It might be helpful to allow A2S to specify desired latency for programs, so that the engine can automate the distribution of the graph across threads. That way, you can for example demand that weapon attack sounds run on the main thread, whereas music and ambience sounds are allowed to run on buffered threads whenever multithreading is enabled.
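Purely as a sketch of what the engine side could look like if each program carried a latency budget set from A2S (none of these names are real Audiality 2 API; they just illustrate the idea): pick the most buffered thread that still fits within the program's budget, falling back to the main thread.

```c
typedef struct {
    const char *name;
    float       max_latency_ms;   /* requested by the program (from A2S) */
} ProgramInfo;

typedef struct {
    int   index;
    float added_latency_ms;       /* extra buffering on this thread */
} ThreadInfo;

/* Thread 0 is the main thread with no added latency; prefer the most
 * buffered thread whose added latency still meets the program's budget.
 */
static int pick_thread(const ProgramInfo *p,
                       const ThreadInfo *threads, int nthreads)
{
    int best = 0;
    for (int i = 1; i < nthreads; ++i)
        if (threads[i].added_latency_ms <= p->max_latency_ms &&
            threads[i].added_latency_ms > threads[best].added_latency_ms)
            best = i;
    return best;
}
```

With something like this, a weapon attack program requesting ~0 ms would always land on the main thread, while music and ambience programs with generous budgets would be free to run on the buffered threads whenever multithreading is enabled.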