AlphaTab currently does not respect `AudioContext.outputLatency`, which describes the time difference between the moment we pass audio samples to the context and the moment the audio is actually heard by the user.
This leads to a shift between the cursor position and the audio that is heard.
AlphaTab should correctly expose the time and tick position according to the audio the user actually hears in the synthesizer area. Internally it might still need to keep its own "synth" timeline. Things to consider:
- The audio latency does not seem to be static; it can change at any time.
- There are two time axes: one within the MIDI sequencer and one on the synth level.
- The sequencer position decides which MIDI events to play next for audio generation.
- The synth-level position tracks the actual audio position according to the samples played by the output. This is likely where the adjustment is needed.
- The synth-level position is also used to determine whether playback has finished (and, if looping is active, to seek back to the start of the desired playback range).
- When looping and seeking back to an earlier position, we need to be careful not to apply the latency wrongly: we might stop or seek too early, resulting in a strange cursor jump or positioning.
- Testing might be tricky, as I'm not aware of a way to modify the latency.
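As a rough sketch of the compensation described above (the function names are hypothetical, not alphaTab's actual API): the "heard" position is simply the synth position shifted back by the current output latency, and the finished/loop check should be made against that heard position rather than the raw synth position.

```typescript
/**
 * Maps the synth-timeline position (time of the last samples handed to the
 * output) to the position the listener is currently hearing. The latency
 * can change at any time, so callers should pass a freshly read value.
 */
function heardPositionMs(synthPositionMs: number, outputLatencySec: number): number {
  // The samples currently audible were handed to the output
  // outputLatencySec seconds ago, so the heard timeline lags behind.
  return Math.max(0, synthPositionMs - outputLatencySec * 1000);
}

/**
 * Playback should only be reported as finished (and a loop restart be
 * triggered) once the *heard* position reaches the end of the playback
 * range, not when the synth has merely generated the last samples.
 */
function isPlaybackHeardFinished(
  synthPositionMs: number,
  rangeEndMs: number,
  outputLatencySec: number
): boolean {
  return heardPositionMs(synthPositionMs, outputLatencySec) >= rangeEndMs;
}
```

Since `outputLatency` is not static, a UI loop would re-read `audioContext.outputLatency` on every frame rather than caching it once; where the property is unavailable (browser support varies), falling back to `audioContext.baseLatency` or `0` is a reasonable assumption.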
See https://github.com/CoderLine/alphaTab/discussions/1454