Closed: frishu closed this issue 3 weeks ago
Hello!
No idea what you are talking about, your code works fine for me! The source video of the Schlossbergbahn is 12 seconds long, and you're encoding the audio twice, so the resulting file is 24 seconds long, which it is. And it's the same audio playing twice in a row.
The only mismatch is that the second audio starts playing after 12 seconds and not after 10 (which you specified in your code), but that is because you have already encoded 12 seconds of audio beforehand using the encoder. There's no way for the encoder to "go back in time", so the best it can do is to start the next audio at 12 seconds.
You said your intention was to have "delayed audio tracks" - by that, do you mean that you want your two audio tracks to overlap, i.e. play at the same time? If this is the case, you'll have to get more advanced than just working with `AudioData`. Playing multiple things at the same time means you'll need to get into audio mixing. The Web Audio API is perfect for that with its audio context: create an `OfflineAudioContext`, create one buffer source node for each input buffer, and then schedule the audio to play. So, the first node would be scheduled with `.start(0)`, the second with `.start(10)`. Then, render the audio out into a final `AudioBuffer`, which you can turn back into `AudioData` to pipe into the encoder, like you're already doing.
This may help:
```js
// Mix two audio buffers into one using an OfflineAudioContext
async function mixAudioBuffers(audioBuffer1, audioBuffer2, delayInSeconds) {
  const sampleRate = audioBuffer1.sampleRate;
  const numberOfChannels = audioBuffer1.numberOfChannels;

  // Determine the total length (in frames) needed to fit both buffers.
  // The context length must be an integer, so round up.
  const totalLength = Math.ceil(
    Math.max(audioBuffer1.length, audioBuffer2.length + delayInSeconds * sampleRate)
  );

  // Create an OfflineAudioContext to mix the buffers
  const offlineContext = new OfflineAudioContext(numberOfChannels, totalLength, sampleRate);

  // Create buffer source nodes for both audio buffers
  const source1 = offlineContext.createBufferSource();
  source1.buffer = audioBuffer1;
  source1.start(0); // Start immediately

  const source2 = offlineContext.createBufferSource();
  source2.buffer = audioBuffer2;
  source2.start(delayInSeconds); // Start after the delay

  // Connect the sources to the context's destination
  source1.connect(offlineContext.destination);
  source2.connect(offlineContext.destination);

  // Render the mixed output
  return await offlineContext.startRendering();
}
```
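To close the loop on the last step (turning the rendered `AudioBuffer` back into `AudioData` for the encoder), here's a minimal sketch. The helper name and the choice of `f32-planar` output are my own; note that `AudioData` timestamps are in microseconds per the WebCodecs spec:

```js
// Sketch: convert an AudioBuffer (e.g. the result of startRendering())
// into a WebCodecs AudioData, using the 'f32-planar' sample format.
function audioBufferToAudioData(audioBuffer, timestampMicros = 0) {
  const { numberOfChannels, length, sampleRate } = audioBuffer;

  // Copy each channel's samples into one planar Float32Array:
  // channel 0's frames first, then channel 1's, and so on.
  const data = new Float32Array(numberOfChannels * length);
  for (let channel = 0; channel < numberOfChannels; channel++) {
    data.set(audioBuffer.getChannelData(channel), channel * length);
  }

  return new AudioData({
    format: 'f32-planar',
    sampleRate,
    numberOfFrames: length,
    numberOfChannels,
    timestamp: timestampMicros, // microseconds, not seconds
    data,
  });
}
```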
Ah, damn!
My bad, I didn't check that the video I took as an example was longer than 10 seconds. I actually didn't mean mixing, but that was the next step I wanted to figure out, so thank you already!
What I meant by "delay" is this:
After the first audio ends (12s), I add a delay of a few seconds (let's say 2) where no audio is playing, and then add another track (or the same one, it doesn't really matter). This would result in 26 seconds of audio with no sound in between.
Adjusting the code above, it should look like this to make it clearer:

```js
const audioData1 = createAudioData(audioBuffer1, 0);  // 0s-12s
const audioData2 = createAudioData(audioBuffer2, 14); // 14s-26s
```
Any conceivable scheduling of audio can be implemented using the `OfflineAudioContext` approach I showed you above, including the one with the two seconds of silence. The silence needs to come from somewhere, and if you do this using the audio context, then you'll get 2 seconds of silence rendered after the first audio. This is better than leaving a "gap" in the audio chunks in the final encoded media, since that's just kinda... weird. And I wouldn't count on all media players dealing with that in the same way; perhaps some simply contract the silence. So, better to encode silence explicitly!
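To make the frame arithmetic concrete, here's a small illustrative helper (the name and shape are mine, not part of the library) that computes where the second clip starts and how long the `OfflineAudioContext` needs to be for "clip A, N seconds of silence, clip B":

```js
// Illustrative frame bookkeeping for "clip A, then a gap of silence, then clip B".
// The gap is covered by the offline context, which renders zeros where nothing plays.
function gapScheduling(framesA, framesB, gapSeconds, sampleRate) {
  // First frame of clip B: right after clip A plus the (rounded-up) gap.
  const startB = framesA + Math.ceil(gapSeconds * sampleRate);
  // Total context length needed to hold everything.
  const totalFrames = startB + framesB;
  return { startB, totalFrames };
}

// At 48 kHz with two 12-second clips and a 2-second gap:
// gapScheduling(12 * 48000, 12 * 48000, 2, 48000)
//   → startB = 672000 (the 14s mark), totalFrames = 1248000 (26s total)
```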
I'll close this issue as I think you have enough info to solve your problem. If there's anything more, feel free to ask!
Hi, hello,
I'm currently experimenting with web codecs. I came across this library through a blog post and decided to give it a try to simplify the whole learning process. Everything related to video works as I want it to, but I'm having difficulties when it comes to audio.
I'd like to have delayed audio tracks, but they're played right after each other. The `AudioData` timestamp looks like it's being ignored, and I'm unsure whether the issue is on my side or something in the library, so I'm hoping for some hints / solutions.
In short: audio tracks aren't delayed despite a large timestamp difference.
Reproduction:
Thanks for support and the great library!