Amsterdam-Music-Lab / MUSCLE

An application to easily set up and run online listening experiments for music research.
https://www.amsterdammusiclab.nl/
MIT License

Investigate timing precision #100

Open BeritJanssen opened 2 years ago

BeritJanssen commented 2 years ago

The audio is now "sort of" precise, but will have to be very precise and reliable by the end of the project.

Evert-R commented 2 years ago

https://wavesurfer-js.org/ (audio player to investigate)

BeritJanssen commented 1 year ago

Perhaps run lab experiments in collaboration with Dirk Vet?

jaburgoyne commented 1 year ago

This could and should ultimately be a real-life experiment with equipment from the speech lab.

jaburgoyne commented 1 year ago

(Maybe also look at video timing if it would be very similar to implement)

Evert-R commented 1 year ago

There are two ways we can play sound with Web Audio: streaming and buffered.

Buffers are used for samples of up to 45 seconds and provide a way to schedule sounds at precise points in the future using the Web Audio timing model (for instance, to sequence audio into a rhythm), as sketched below.
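
A minimal sketch of the buffered approach; the sample URL and the 500 ms grid are placeholders, not taken from the project:

```ts
const ctx = new AudioContext();

// Fetch and decode a short sample into an AudioBuffer.
async function loadBuffer(url: string): Promise<AudioBuffer> {
  const response = await fetch(url);
  return ctx.decodeAudioData(await response.arrayBuffer());
}

// Schedule four onsets 500 ms apart on the audio clock.
async function playRhythm(): Promise<void> {
  const buffer = await loadBuffer('/samples/click.wav'); // placeholder URL
  const start = ctx.currentTime + 0.1; // leave headroom for the first onset
  for (let i = 0; i < 4; i++) {
    const source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start(start + i * 0.5); // start() takes AudioContext seconds
  }
}
```

Because start() is timestamped against the audio hardware clock, the onsets do not inherit JavaScript event-loop jitter.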

Streaming is the method we use now: playback starts once enough data has loaded, so the file can play to its end without stopping for further buffering.

The streaming method uses the HTML `<audio>` tag we already use now.
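
As a sketch, with a placeholder stimulus URL, the streaming pattern looks roughly like this:

```ts
const audio = new Audio('/stimuli/long-excerpt.mp3'); // placeholder URL
audio.preload = 'auto';

// 'canplaythrough' fires when the browser estimates the file can play to
// the end without stalling for more buffering.
audio.addEventListener('canplaythrough', () => {
  void audio.play(); // play() returns a promise and may require a user gesture
});
```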

Web Audio has its own timing model, which differs from the JavaScript timing model.
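
For illustration, the two clocks can be read side by side; they live in separate clock domains and are not interchangeable:

```ts
// performance.now() is the JavaScript clock (milliseconds since page load);
// AudioContext.currentTime is the audio clock (seconds since the context
// started). Scheduling calls such as source.start() take the latter.
const ctx = new AudioContext();
console.log('JS clock (ms):', performance.now());
console.log('audio clock (s):', ctx.currentTime);
```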

What I think we can change to get more precise timing, especially for Bluetooth audio devices, is to use the timing model and the outputLatency property of the AudioContext (see the properties listed below).

The audio of a video can also be routed through Web Audio for better synchronization.
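
A sketch of that routing, assuming a hypothetical `<video>` element with id `stimulus`:

```ts
const ctx = new AudioContext();
const video = document.querySelector<HTMLVideoElement>('#stimulus');
if (video) {
  // After this call the element's audio flows through the Web Audio graph,
  // so it shares the audio clock and latency reporting described below.
  const source = ctx.createMediaElementSource(video);
  source.connect(ctx.destination);
}
```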

AudioContext properties:

baseLatency is a floating-point number that represents the number of seconds the AudioContext uses for its internal processing once the audio reaches the AudioDestinationNode. In Firefox, this is always 0 because the audio is processed directly in the audio callback.

outputLatency is another floating-point number that represents the number of seconds of latency between the audio reaching the AudioDestinationNode (conceptually the audio sink of the audio processing graph) and the audio output device. This is the number that varies widely depending on the setup.

getOutputTimestamp is a method that returns a JavaScript dictionary with two members: contextTime, in the same unit and clock domain as AudioContext.currentTime, is the sample frame currently being output by the audio output device, converted to seconds; performanceTime is the same moment, but in the clock domain and unit of performance.now() (milliseconds).
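
Putting these together, a possible sketch follows; the function name and fallback logic are illustrative assumptions, and browser support for outputLatency and getOutputTimestamp varies:

```ts
const ctx = new AudioContext();

// Estimate the performance.now() moment at which a frame scheduled at
// `audioTime` (in AudioContext seconds) actually leaves the output device.
function estimatedOutputTime(audioTime: number): number {
  const ts = ctx.getOutputTimestamp();
  if (ts.contextTime !== undefined && ts.performanceTime !== undefined) {
    // contextTime is the frame currently leaving the device, so device
    // latency is already folded into this pair.
    return ts.performanceTime + (audioTime - ts.contextTime) * 1000;
  }
  // Fallback when the timestamp is empty: outputLatency is not implemented
  // in every browser, hence the ?? 0.
  const latency = ctx.outputLatency ?? 0;
  return performance.now() + (audioTime - ctx.currentTime + latency) * 1000;
}
```

An estimate like this could then be checked against the external lab equipment mentioned above to see how much residual error remains.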

BeritJanssen commented 10 months ago

The programmatic side is done; manual testing may still be needed.