I'm merging videos using the WebCodecs API. The video track works fine, but the audio track of the second file starts playing at second 0 (the first file has no audio track).
data flow: AudioData -> AudioEncoder -> EncodedAudioChunk -> mp4box.js addSample
I checked that the timestamp property of each EncodedAudioChunk is correct, but in the generated MP4 file the audio track is not offset.
const encoder = new AudioEncoder({
  error: console.error,
  output: (chunk) => {
    const buf = new ArrayBuffer(chunk.byteLength)
    chunk.copyTo(buf)
    // The first `chunk.timestamp` is equal to the duration of the first video
    const dts = chunk.timestamp
    mp4File.addSample(trackId, buf, {
      duration: chunk.duration ?? 0,
      dts,
      cts: dts,
      is_sync: chunk.type === 'key'
    })
  }
})
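One detail worth noting about the snippet above: `EncodedAudioChunk.timestamp` is in microseconds, while mp4box.js `addSample` interprets `dts`/`cts`/`duration` in the track's timescale. A conversion helper might look like this (a sketch; the `TIMESCALE` value of 48000 is an assumption and should match the timescale the audio track was created with):

```javascript
// EncodedAudioChunk timestamps/durations are in microseconds (per the
// WebCodecs spec); mp4box.js samples are counted in track-timescale ticks.
// TIMESCALE is an assumed value -- use the one passed to addTrack().
const TIMESCALE = 48000;
const usToTimescale = (us) => Math.round((us / 1e6) * TIMESCALE);
```

In the `output` callback this would be used as `dts: usToTimescale(chunk.timestamp)` and `duration: usToTimescale(chunk.duration ?? 0)`.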
In the end I had to create an empty (silent) AudioData placeholder covering the first video to work around the problem.
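For reference, the silent-placeholder workaround can be sketched roughly like this (the sample rate, channel count, and per-frame chunk size are assumptions, not values from my real pipeline):

```javascript
// Sketch: build descriptors for silent audio frames covering the first
// video's duration, so the encoder emits chunks starting at timestamp 0.
// sampleRate / numberOfChannels / frameChunk are assumed values.
function silentFrames(durationUs, sampleRate = 48000, numberOfChannels = 2, frameChunk = 1024) {
  const totalFrames = Math.round((durationUs / 1e6) * sampleRate);
  const frames = [];
  for (let offset = 0; offset < totalFrames; offset += frameChunk) {
    const numberOfFrames = Math.min(frameChunk, totalFrames - offset);
    frames.push({
      timestamp: Math.round((offset / sampleRate) * 1e6), // microseconds
      numberOfFrames,
      data: new Float32Array(numberOfFrames * numberOfChannels), // zeros = silence
    });
  }
  return frames;
}

// In a browser, each descriptor can be wrapped in a real AudioData and encoded:
// for (const f of silentFrames(firstVideoDurationUs)) {
//   encoder.encode(new AudioData({
//     format: 'f32',            // interleaved float32 samples
//     sampleRate: 48000,
//     numberOfChannels: 2,
//     numberOfFrames: f.numberOfFrames,
//     timestamp: f.timestamp,
//     data: f.data,
//   }));
// }
```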
Is there any better solution?