During the decoupling of this polyfill from our player, we removed a feedback loop that retrieved the key-frame time from the JS dash player and then:
1) performed the actual seek on a key frame;
2) made sure everything went well when audio and video segments were not aligned (basically, that we never appended an audio segment starting before the chosen key frame).
Failing to ensure either point would cause NetStream to crash in the weirdest ways.
A better way to work around these limitations is to give every frame located before the seek target the timestamp of the seek target itself.
This can be done in the transmuxer, in the loop that goes over every frame to write it into a ByteArray.
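The clamping step above can be sketched as a small loop over (timestamp, payload) pairs. This is a minimal sketch, assuming frames arrive in decode order with presentation timestamps; `clamp_timestamps` and the frame representation are hypothetical and do not reflect the transmuxer's actual code:

```python
def clamp_timestamps(frames, seek_target):
    """Give every frame located before the seek target the timestamp of
    the seek target itself, so NetStream never sees a sample that starts
    before the position being sought to.

    frames: list of (pts, payload) pairs in decode order (hypothetical
    representation); seek_target: presentation time of the seek, in the
    same unit as pts.
    """
    clamped = []
    for pts, payload in frames:
        # Frames before the target keep their payload (they are still
        # needed to decode up to the target) but are re-stamped at the
        # seek target itself.
        clamped.append((max(pts, seek_target), payload))
    return clamped
```

For example, seeking to t=80 over frames stamped 0, 40, 80, 120 would leave the payloads untouched but re-stamp the first two frames at 80, so playback appears to start exactly at the seek target.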
I don't think we broke anything else during the decoupling phase, but a good way to start would be to seek programmatically to the exact beginning of a DASH segment (expecting that it starts with a key frame), using a video-only stream.