museumsvictoria / spatial_audio_server

An audio backend for the multi-layered soundscape of Beyond Perception: Seeing the Unseen, a permanent exhibition at Scienceworks in Melbourne, Australia.

Crackling #171

Open freesig opened 6 years ago

freesig commented 6 years ago

Getting crackling on a large project.

freesig commented 6 years ago

Buffering Issue

It's taking too long to read from disk. Here:

- line 562: `let samples_yielded = sound.signal.samples().take(num_samples).count();`
- line 583: `for sample in sound.signal.samples().take(num_samples) {`

Ideas

Make another source (in `audio/source`) that buffers more audio into memory, ahead of what's needed. We still have plenty of spare memory.

mitchmindtree commented 6 years ago

I've just been looking through the code and I think threading this samples stream type (the `source::Signal` for the `Wav` type source) might be the easiest approach. Rather than requesting samples directly from the `hound::WavReader`, it would pull samples from a `std::sync::mpsc::Receiver` that is being fed by a different thread doing the actual IO. Will try to get back online tomorrow night to help out.
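A minimal sketch of that threaded approach. Assumptions: the sample source is simulated with a plain iterator standing in for `hound::WavReader`, and the function name `spawn_reader` is mine, not the project's:

```rust
use std::sync::mpsc;
use std::thread;

// A dedicated IO thread pulls samples from the decoder (here simulated by
// a simple iterator standing in for `hound::WavReader`) and feeds them
// through a bounded channel, so the audio thread never blocks on disk.
fn spawn_reader<I>(samples: I, capacity: usize) -> mpsc::Receiver<i16>
where
    I: Iterator<Item = i16> + Send + 'static,
{
    // A bounded channel applies back-pressure: the IO thread only
    // reads ahead up to `capacity` samples.
    let (tx, rx) = mpsc::sync_channel(capacity);
    thread::spawn(move || {
        for s in samples {
            // Stop if the audio thread dropped the receiver.
            if tx.send(s).is_err() {
                break;
            }
        }
    });
    rx
}

fn main() {
    // Stand-in for WAV samples read from disk.
    let fake_samples = (0..8i16).cycle().take(32);
    let rx = spawn_reader(fake_samples, 16);
    let collected: Vec<i16> = rx.iter().collect();
    assert_eq!(collected.len(), 32);
    assert_eq!(&collected[..8], &[0, 1, 2, 3, 4, 5, 6, 7]);
    println!("received {} samples", collected.len());
}
```

Using `sync_channel` rather than an unbounded `channel` keeps memory use predictable: the IO thread reads ahead by at most the channel capacity.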

mitchmindtree commented 6 years ago

One thing that will be affected by threading the WAV file reading is the WAV file seeking that is performed to synchronise the Continuous WAV sounds at the beginning of the render loop. I think any of the following should be OK:

  1. only seek the WAV file the first time the audio::output::render is called for the sound or
  2. only seek the WAV reader when the sound is first created.
  3. only seek the WAV when the audio output thread first receives the sound via the insert_sound method.

Option 1 might be a no-go: once seeking is required, the audio thread would have to either wait for the file-reader thread to seek to the new WAV position and send over the new samples, or write silence for the first buffer.

Option 2 might require changing the `frame_count` field on the `audio::output::Model` to an `Arc<AtomicUsize>` so that it can be shared between the audio output thread (which steps the count when `render` is called) and the threads creating the `Sound` (the soundscape and gui threads).

Option 3 (just came to me after thinking about option 2) is probably the best way to go. It should be the easiest and most accurate approach, as we have direct access to the current `frame_count` (via the `audio::output::Model`). We should just have to add a check to `insert_sound` for the `Wav` source kind and, if it matches, seek to the correct position based on the frame count (`frame_count % wav_length`).
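The wrap-around arithmetic for option 3 is simple enough to sketch in isolation. This is a hypothetical helper, not the project's API; `wav_len_frames` stands for the length of the WAV file in frames:

```rust
// Compute the frame to seek to when a continuous WAV sound is inserted.
// Continuous sounds loop, so the playback position wraps around the
// length of the file.
fn wav_seek_frame(frame_count: usize, wav_len_frames: usize) -> usize {
    frame_count % wav_len_frames
}

fn main() {
    // e.g. 1_000_000 frames rendered so far, a 44_100-frame (1 s @ 44.1 kHz) file:
    assert_eq!(wav_seek_frame(1_000_000, 44_100), 29_800);
    // If the file is longer than the frames rendered so far, no wrap occurs:
    assert_eq!(wav_seek_frame(3, 44_100), 3);
    println!("seek frame: {}", wav_seek_frame(1_000_000, 44_100));
}
```

Doing this once in `insert_sound`, before the sound reaches the render loop, avoids any seeking on the audio thread itself.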

freesig commented 6 years ago

Ok, this is good to know. I've been thinking as well: it could be useful to use a double or triple buffer, similar to ASIO. Internally it fills one buffer while the other is being drained over the channel; as soon as the drained buffer runs out, it swaps the buffers and starts filling the empty one. We could then experiment with buffer sizes and the number of buffers to find the best speed. I have class from now till about 3 or 4, then I'll spend the rest of the afternoon / night implementing this.
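A rough single-threaded sketch of the swap mechanic described above (all names and sizes are illustrative; a real version would fill the back buffer on the IO thread):

```rust
use std::mem;

// One buffer is drained by the consumer while the other is filled ahead
// of time; the two are swapped once the front buffer is exhausted.
struct DoubleBuffer {
    front: Vec<i16>, // drained by the audio side
    back: Vec<i16>,  // filled ahead of time by the IO side
    pos: usize,      // read position within `front`
}

impl DoubleBuffer {
    fn new(capacity: usize) -> Self {
        DoubleBuffer {
            front: Vec::with_capacity(capacity),
            back: Vec::with_capacity(capacity),
            pos: 0,
        }
    }

    // Fill the back buffer with up to `n` samples from some source.
    fn fill_back<I: Iterator<Item = i16>>(&mut self, src: &mut I, n: usize) {
        self.back.clear();
        self.back.extend(src.take(n));
    }

    // Pop the next sample, swapping buffers when the front runs dry.
    fn next_sample<I: Iterator<Item = i16>>(&mut self, src: &mut I, n: usize) -> Option<i16> {
        if self.pos >= self.front.len() {
            mem::swap(&mut self.front, &mut self.back);
            self.pos = 0;
            self.fill_back(src, n);
            if self.front.is_empty() {
                return None; // source exhausted
            }
        }
        let s = self.front[self.pos];
        self.pos += 1;
        Some(s)
    }
}

fn main() {
    let mut src = 0..100i16; // stand-in for samples read from disk
    let mut buf = DoubleBuffer::new(8);
    buf.fill_back(&mut src, 8); // prime the back buffer
    let out: Vec<i16> = std::iter::from_fn(|| buf.next_sample(&mut src, 8)).collect();
    assert_eq!(out.len(), 100);
    assert_eq!(out[99], 99);
    println!("drained {} samples", out.len());
}
```

The swap is just a pointer exchange (`mem::swap`), so no samples are copied when the buffers change roles.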

freesig commented 6 years ago

For reference later: I think buffers are currently only reused within individual sounds. It could be worth sharing them between all sounds so that we don't have to reallocate those buffers each time a sound is spawned.
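Sharing buffers across sounds is essentially a buffer pool. A minimal sketch under assumed names (this is not the project's API): spawned sounds check out a recycled `Vec` instead of allocating a new one, and return it when they finish.

```rust
use std::sync::{Arc, Mutex};

// A shared pool of reusable sample buffers.
#[derive(Clone)]
struct BufferPool {
    free: Arc<Mutex<Vec<Vec<i16>>>>,
}

impl BufferPool {
    fn new() -> Self {
        BufferPool { free: Arc::new(Mutex::new(Vec::new())) }
    }

    // Reuse a returned buffer if one is available, otherwise allocate.
    fn checkout(&self, capacity: usize) -> Vec<i16> {
        self.free
            .lock()
            .unwrap()
            .pop()
            .unwrap_or_else(|| Vec::with_capacity(capacity))
    }

    // Clear and return the buffer so the next sound can reuse its allocation.
    fn give_back(&self, mut buf: Vec<i16>) {
        buf.clear();
        self.free.lock().unwrap().push(buf);
    }
}

fn main() {
    let pool = BufferPool::new();
    let mut a = pool.checkout(1024);
    let ptr_a = a.as_ptr();
    a.extend(std::iter::repeat(0i16).take(512));
    pool.give_back(a);
    // The next checkout reuses the same allocation: same backing pointer.
    let b = pool.checkout(1024);
    assert_eq!(b.as_ptr(), ptr_a);
    assert!(b.is_empty());
    println!("buffer reused, capacity {}", b.capacity());
}
```

`clear()` keeps the allocation while resetting the length, which is what makes the reuse free of reallocations.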