Various things have changed due to this task:

- `audio_channel_num` […]
- `audio_debug`: the `numProcessed` column has been removed, as it will only ever display 0 due to changes to when audio buffers are (un)queued. A `syncSource` column has been added which shows which source each source is synchronised to. This will display -1 for sources that are not syncing and for sources that are being synced to by others.
- […] (`audio_stop_all` etc.). Note that sync group sounds are currently played with the maximum possible priority.
- `audio_sync_group_debug` has been stubbed. Use `audio_debug` to debug sounds played together in sync groups.

Most of this task is not user-facing. It basically reworks the systems with which we handle compressed audio and sounds that are streamed from a buffer queue rather than a single static buffer. The user-facing side of this is detailed in the comment above.
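For illustration only, here is a minimal sketch of the buffer-queue streaming model described above — not the actual runner code; all names (`PcmBuffer`, `StreamedSource`, `DecodeNextChunk`, `ServiceStream`) are hypothetical. It also shows why, if buffers are unqueued as soon as they are processed, a "number processed" readout would always be 0:

```cpp
#include <array>
#include <deque>
#include <vector>

struct PcmBuffer {
    std::vector<short> samples; // interleaved PCM frames
};

struct StreamedSource {
    std::deque<PcmBuffer*> queued;    // buffers waiting to be played
    std::deque<PcmBuffer*> processed; // buffers the source has finished with
};

// Stand-in for the real decoder: refill a buffer from the compressed
// stream. Returning false would signal end-of-stream.
bool DecodeNextChunk(PcmBuffer& buf) {
    buf.samples.assign(4096, 0);
    return true;
}

// Unqueue every processed buffer, refill it, and queue it again. If
// processed buffers are always unqueued as soon as they are seen, a
// "number processed" column will only ever read 0.
void ServiceStream(StreamedSource& src) {
    while (!src.processed.empty()) {
        PcmBuffer* buf = src.processed.front();
        src.processed.pop_front();
        if (DecodeNextChunk(*buf)) {
            src.queued.push_back(buf);
        }
    }
}

int main() {
    std::array<PcmBuffer, 3> pool{};
    StreamedSource src;
    for (PcmBuffer& b : pool) {
        src.processed.push_back(&b); // prime the cycle
    }
    ServiceStream(src);
    return src.queued.size() == pool.size() ? 0 : 1;
}
```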
Used this project to attempt to test the changes: https://drive.google.com/file/d/1LR-OnaADffs5y6qA9Fu01W0IOkhF3wS_/view?usp=sharing
Many of these tests would have made the old decoding threads fall over in the past. A number of those issues were patched up for 2024.8, but some will still cause problems in that version too.
The tests don't target specific features and are not representative of anything like normal GM usage; rather, they stress the audio engine in the areas associated with this task in order to find crashes, data races and similar issues.
Documentation updated as per Toby's comment
On Beta IDE v2024.1100.0.626 Runtime v2024.1100.0.652, the attached project crashes on HTML5.
Reopening the issue. Thank you.
Apologies, I should have mentioned that this is a C++ runner-only change and isn't applicable to the HTML5 runner, as the two have entirely different audio systems. The project wasn't really written with HTML5 in mind.
That being said, the project did actually expose a few bugs on HTML5, for which I've raised separate issues (#7804, #7805, #7806).
I've also edited the project to be a bit more HTML5-friendly, which can be accessed from my comment above. With some quick-fixes in for #7804 and #7805 on my branch, the project runs the whole way through without crashing, though it does complain about #7806 which we should fix too.
Re-closing this one as the HTML5 issues are unrelated to the task and have now been logged separately. Thank you for raising that.
Closing issue - verified in IDE v2024.1100.0.631 Runtime v2024.1100.0.656
As per comments above, not implemented in HTML5, but crashes have been fixed
**Is your feature request related to a problem?**
The primary aims of the audio decoding threads are:
Our decoding threads do achieve these things. However, they also do some other things, such as:
The result of this is that many simple operations in the audio engine (e.g. stopping a sound) become async processes where we have to defer actions to the decoding threads. On top of this, we need to present these async processes as synchronous to users, so we end up building rather complex systems to achieve this, as sketched below.
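As a concrete illustration of the pattern described — a hedged sketch with hypothetical names (`DecodeThread`, `Defer`, `StopSoundBlocking`), not the engine's actual code — a conceptually simple stop is deferred to a decoding thread via a command queue and made to look synchronous by blocking on a future:

```cpp
#include <condition_variable>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

class DecodeThread {
    std::queue<std::function<void()>> commands_;
    std::mutex m_;
    std::condition_variable cv_;
    bool quit_ = false;
    std::thread worker_;

public:
    DecodeThread() : worker_([this] { Run(); }) {}

    ~DecodeThread() {
        {
            std::lock_guard<std::mutex> lock(m_);
            quit_ = true;
        }
        cv_.notify_one();
        worker_.join();
    }

    // Defer work to the decode thread. The returned future lets the
    // caller block until the work has actually run over there.
    std::future<void> Defer(std::function<void()> fn) {
        auto task = std::make_shared<std::packaged_task<void()>>(std::move(fn));
        std::future<void> done = task->get_future();
        {
            std::lock_guard<std::mutex> lock(m_);
            commands_.push([task] { (*task)(); });
        }
        cv_.notify_one();
        return done;
    }

private:
    void Run() {
        std::unique_lock<std::mutex> lock(m_);
        while (!quit_) {
            cv_.wait(lock, [this] { return quit_ || !commands_.empty(); });
            while (!commands_.empty()) {
                std::function<void()> fn = std::move(commands_.front());
                commands_.pop();
                lock.unlock();
                fn(); // e.g. tear down the voice's stream state
                lock.lock();
            }
        }
    }
};

// A "synchronous" stop that is really an async round-trip: the caller
// blocks until the decode thread has processed the deferred command.
void StopSoundBlocking(DecodeThread& decoder) {
    decoder.Defer([] { /* release stream/voice state here */ }).wait();
}

int main() {
    DecodeThread decoder;
    StopSoundBlocking(decoder); // appears synchronous to the caller
    return 0;
}
```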
Additionally, the decoding threads have a few problematic behaviours, such as 'detecting' buffer underruns (leading to potentially incorrect source state overrides) and partially deinitialising voice subsystems, which steps on the toes of the main thread. There is also a tight coupling between stream states and source states, which doesn't make much sense as these are two mostly unrelated systems.
Finally, we have at minimum six different threads that access the audio context (plus more depending on which other features users are utilising). This presents problems such as increased contention for the context (which is guarded by only a single mutex) and loss of error state (which is intended to be read from a single thread).
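A small sketch of the error-state problem, under the assumption of an API whose context keeps a single sticky error code that is cleared when read (`AudioContext` and its members are stand-ins, not the engine's real types): with several threads sharing one context, one thread's error read can consume an error another thread was about to check.

```cpp
#include <mutex>

// Stand-in for the real context type. Like many audio APIs, it keeps a
// single sticky error code that is cleared when read, which assumes
// errors are checked from one thread.
struct AudioContext {
    std::mutex mutex;  // the single lock every thread contends on
    int lastError = 0; // 0 means "no error"

    void Play() {
        std::lock_guard<std::mutex> lock(mutex);
        lastError = 40; // pretend this call failed
    }
    void SetGain(float) {
        std::lock_guard<std::mutex> lock(mutex); // succeeds, error untouched
    }
    int GetError() {
        std::lock_guard<std::mutex> lock(mutex);
        int e = lastError;
        lastError = 0; // reading clears the error
        return e;
    }
};

int main() {
    AudioContext ctx;
    // Thread A's failing call, with thread B's error check interleaved:
    ctx.Play();            // A: fails and records an error
    (void)ctx.GetError();  // B: checking its own call consumes A's error
    return ctx.GetError(); // A: sees 0 and wrongly assumes Play() worked
}
```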
**Describe the solution you'd like**
The decoding threads already do the main things they are supposed to, and generally in a reasonable way. However, moving responsibility for certain tasks to the main/audio threads will help to untangle the code somewhat.
We should decouple the decoding threads entirely from the audio context. Decoding threads should not really need to know about or interact with the audio context, or even the engine more generally, as they are essentially specialised file/stream readers.
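A minimal sketch of the decoupled shape this proposes, with hypothetical names (`ChunkQueue`, `DecodeLoop`): the decoding thread becomes a pure stream reader that hands decoded PCM over a plain thread-safe queue, and never touches the audio context, source state or voice lifetime.

```cpp
#include <cstddef>
#include <mutex>
#include <queue>
#include <vector>

using PcmChunk = std::vector<short>;

// Thread-safe handoff: the decoder pushes, the audio thread pops.
class ChunkQueue {
    std::queue<PcmChunk> chunks_;
    std::mutex m_;

public:
    void Push(PcmChunk c) {
        std::lock_guard<std::mutex> lock(m_);
        chunks_.push(std::move(c));
    }
    bool TryPop(PcmChunk& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (chunks_.empty()) return false;
        out = std::move(chunks_.front());
        chunks_.pop();
        return true;
    }
};

// The decoder loop: a specialised file/stream reader, nothing more. It
// never sees the audio context, source states or voice subsystems.
void DecodeLoop(ChunkQueue& out, std::size_t chunks) {
    for (std::size_t i = 0; i < chunks; ++i) {
        PcmChunk pcm(4096, 0); // stand-in for real decoded audio
        out.Push(std::move(pcm));
    }
}

int main() {
    ChunkQueue q;
    DecodeLoop(q, 3); // would run on the decode thread
    PcmChunk c;
    int popped = 0;
    while (q.TryPop(c)) ++popped; // the audio thread consumes and queues
    return popped == 3 ? 0 : 1;
}
```

With this split, an operation like stopping a sound no longer needs a round-trip to the decoder: the main/audio threads own all context interaction, and the decoder can simply stop being asked for more chunks.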
Specific changes we would make here would include:
**Additional context**
This isn't a huge change, but it should help us clean up a more awkward aspect of our voice handling. Improvements to voice handling more generally will pave the way for features that rely heavily on voice management, such as audio scheduling. This in turn will allow us to address issues in existing features such as audio queues and sync groups, which can currently only be used in limited ways and are mostly isolated from other parts of the audio engine.