Closed avhasib closed 7 months ago
Yes, of course there is. This has more to do with the AudioEncoder than with this library, but here's how:
We already have an AudioEncoder, so our goal now is to create an AudioData object (which .encode expects) containing our audio file, so we can encode it.
// Fetch the audio file as an array buffer
const response = await fetch(url);
const arrayBuffer = await response.arrayBuffer();

// Decode the audio data into an AudioBuffer
const audioContext = new AudioContext();
const decodedBuffer = await audioContext.decodeAudioData(arrayBuffer);

// Create a new Float32Array to hold the planar audio data
const numChannels = decodedBuffer.numberOfChannels;
const lengthPerChannel = decodedBuffer.length;
const planarData = new Float32Array(numChannels * lengthPerChannel);

// Fill the Float32Array with planar audio data (one channel after another)
for (let channel = 0; channel < numChannels; channel++) {
    const channelData = decodedBuffer.getChannelData(channel);
    planarData.set(channelData, channel * lengthPerChannel);
}

// Construct an AudioData object
const audioData = new AudioData({
    format: 'f32-planar',
    sampleRate: decodedBuffer.sampleRate,
    numberOfFrames: lengthPerChannel,
    numberOfChannels: numChannels,
    timestamp: 0,
    data: planarData
});

// With the encoder set up:
encoder.encode(audioData);
And you're done!
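The channel-copy loop above is the part that's easiest to get wrong, so it may help to see it factored out as a standalone function. This is a minimal sketch; `toPlanar` is a hypothetical helper name, and it only reproduces the interleaving logic from the snippet above (each channel's samples laid out back to back, as 'f32-planar' expects):

```javascript
// Lay out per-channel sample arrays one after another in a single
// Float32Array, matching the 'f32-planar' layout expected by AudioData.
// (Hypothetical helper; mirrors the loop in the snippet above.)
function toPlanar(channels) {
    const lengthPerChannel = channels[0].length;
    const planar = new Float32Array(channels.length * lengthPerChannel);
    for (let i = 0; i < channels.length; i++) {
        planar.set(channels[i], i * lengthPerChannel);
    }
    return planar;
}
```

With a decoded AudioBuffer `buf`, you'd call it as `toPlanar(Array.from({ length: buf.numberOfChannels }, (_, i) => buf.getChannelData(i)))`.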
The above code worked like a charm, thank you 👍. Since AudioEncoder is not supported on iOS, can we use a polyfill to encode audio on iOS, like Audio Recorder Polyfill or something similar? Currently, files encoded on iOS come out without audio. Alternatively, is there any way to use the audio file directly as the video's audio track (or some converted form of it) without using an audio encoder?
Yeah, there should be: you can use the .addAudioChunkRaw method on the Muxer to add raw bytes directly. Note, though, that you likely won't be able to add a straight-up audio file; you'll need raw encoded audio data instead. I think there was already a similar issue, hold on...
Here. This is about adding an AAC file directly without encoding it, which we ended up getting to work. It required a bit of manual slicing of the audio file. If your audio file is not AAC, then I don't know. But since you're recording the audio, I guess you should be able to write it as AAC?
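For reference, here's roughly what the "manual slicing" amounts to. An ADTS AAC file is a sequence of frames, each starting with a 12-bit 0xFFF sync word and carrying its own total length in the header, so you can walk the buffer frame by frame. This is a sketch under the assumption of a plain ADTS stream (no ID3 tag or other wrapping); `sliceAdtsFrames` is a hypothetical helper name:

```javascript
// Split an ADTS AAC byte stream (Uint8Array) into individual frames.
// Each ADTS header begins with the sync word 0xFFF and stores the total
// frame length (header included) as a 13-bit field spread across bytes 3-5.
function sliceAdtsFrames(bytes) {
    const frames = [];
    let pos = 0;
    while (pos + 7 <= bytes.length) {
        // Verify the sync word before trusting the header
        if (bytes[pos] !== 0xff || (bytes[pos + 1] & 0xf0) !== 0xf0) break;
        const frameLength =
            ((bytes[pos + 3] & 0x03) << 11) |
            (bytes[pos + 4] << 3) |
            ((bytes[pos + 5] & 0xe0) >> 5);
        if (frameLength < 7 || pos + frameLength > bytes.length) break;
        frames.push(bytes.subarray(pos, pos + frameLength));
        pos += frameLength;
    }
    return frames;
}
```

Each frame could then be handed to the muxer along the lines of `muxer.addAudioChunkRaw(frame, 'key', i * 1024 / sampleRate * 1e6, 1024 / sampleRate * 1e6)` (microsecond timestamps, assuming the usual 1024 samples per AAC frame), though you should double-check the exact call against the mp4-muxer docs and the linked issue.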
The audio file is AAC. The reference from the previous issue helped me encode the video and add the raw audio from the AAC file, but the resulting video file doesn't play in all players. It plays in the Chrome browser, VLC and PotPlayer, but not in Windows Media Player or the iOS default player. When sending the file through WhatsApp Web in Chrome, it isn't recognized as a video and is sent as a document; when opening it in the WhatsApp Web Windows app, the video preview shows but without audio.
File sample attached: https://github.com/Vanilagy/mp4-muxer/assets/26264087/94974da1-8aa3-4dde-a877-728e58f42e16
I'm sorry, I've been busy! I'll get to this issue when I have the time.
@avhasib Sorry for not getting back to you, I kinda forgot about this issue. Do you still require help, or has this issue been resolved? :)
What audio file formats are supported? mp3, ogg and wav? We are using mp3 files... Thanks a lot.
@Vanilagy, unfortunately the issue is still there.
What precisely is the issue?
You'll need to be a bit more precise!
@Vanilagy I can't hear any audio in the generated MP4 file with this code. Am I missing something? I receive chunks only twice in the muxer's audio chunk callback.
Not sure. What issue was this from again, my response? Looking at my code now, I don't see any obvious issues with it.
The AudioEncoder typically spits out a lot of chunks per second, so only two chunks would mean either that your audio is very short, or that there's some bug in this code.
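As a sanity check on what "a lot of chunks" means: an AAC frame carries 1024 PCM samples per channel, so the encoder should emit roughly sampleRate × seconds / 1024 chunks. A back-of-the-envelope helper (assuming the standard 1024-sample AAC frame size; the function name is just for illustration):

```javascript
// Rough number of AAC chunks an encoder should produce for a clip,
// assuming the standard 1024 samples per AAC frame.
function expectedAacChunks(sampleRate, seconds) {
    return Math.ceil((sampleRate * seconds) / 1024);
}
```

For a 3-second clip at 48 kHz that's about 141 chunks, so two chunks would correspond to well under 50 ms of audio.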
Where do I need to add this in the code?
Depends on your code! Just wherever you have the encoder (and therefore muxer) already set up.
It's the same one as in the demo of your repository.
Just put it after the initialization of the AudioEncoder.
const startRecording = async () => {
    if (typeof VideoEncoder === 'undefined') {
        alert("Looks like your user agent doesn't support VideoEncoder / WebCodecs API yet.");
        return;
    }

    startRecordingButton.style.display = 'none';

    if (typeof AudioEncoder !== 'undefined') {
        try {
            let userMedia = await navigator.mediaDevices.getUserMedia({
                video: false,
                audio: true
            });
            audioTrack = userMedia.getAudioTracks()[0];
        } catch (e) {}
        if (!audioTrack) console.warn("Couldn't acquire a user media audio track.");
    } else {
        console.warn('AudioEncoder not available; no need to acquire a user media audio track.');
    }

    endRecordingButton.style.display = 'block';

    let audioSampleRate = audioTrack?.getCapabilities().sampleRate.max;

    muxer = new Mp4Muxer.Muxer({
        target: new Mp4Muxer.ArrayBufferTarget(),
        video: {
            codec: 'avc',
            width: canvas.width,
            height: canvas.height
        },
        audio: audioTrack ? {
            codec: 'aac',
            sampleRate: 48000,
            numberOfChannels: 2
        } : undefined,
        fastStart: 'in-memory',
        firstTimestampBehavior: 'offset'
    });

    videoEncoder = new VideoEncoder({
        output: (chunk, meta) => muxer.addVideoChunk(chunk, meta),
        error: e => console.error(e)
    });
    videoEncoder.configure({
        codec: 'avc1.4D4028',
        width: canvas.width,
        height: canvas.height,
        bitrate: 1e6
    });

    if (audioTrack) {
        audioEncoder = new AudioEncoder({
            output: (chunk, meta) => muxer.addAudioChunk(chunk, meta),
            error: e => console.error(e)
        });
        audioEncoder.configure({
            codec: 'mp4a.40.2',
            numberOfChannels: 1,
            sampleRate: audioSampleRate,
            bitrate: 128000
        });

        let trackProcessor = new MediaStreamTrackProcessor({
            track: audioTrack
        });
        let consumer = new WritableStream({
            write(audioData) {
                if (!recording) return;
                audioEncoder.encode(audioData);
                audioData.close();
            }
        });
        trackProcessor.readable.pipeTo(consumer);
    }

    audioEncoder = new AudioEncoder({
        output: (chunk, meta) => muxer.addAudioChunk(chunk, meta),
        error: e => console.error(e)
    });
    audioEncoder.configure({
        codec: 'mp4a.40.2',
        numberOfChannels: 2,
        sampleRate: 48000,
        bitrate: 128000
    });

    // Fetch the audio file as an array buffer
    const response = await fetch('sample.aac');
    const audioBuffer = await response.arrayBuffer();

    // Decode the audio data
    const audioContext = new AudioContext();
    const audioData = await audioContext.decodeAudioData(audioBuffer);
    console.log(audioData);

    // Create a new Float32Array to hold the planar audio data
    const numChannels = audioData.numberOfChannels;
    const lengthPerChannel = audioData.length;
    const planarData = new Float32Array(numChannels * lengthPerChannel);

    // Fill the Float32Array with planar audio data
    for (let channel = 0; channel < numChannels; channel++) {
        const channelData = audioData.getChannelData(channel);
        planarData.set(channelData, channel * lengthPerChannel);
    }

    // Construct an AudioData object
    const audiosData = new AudioData({
        format: 'f32-planar',
        sampleRate: audioData.sampleRate,
        numberOfFrames: lengthPerChannel,
        numberOfChannels: audioData.numberOfChannels,
        timestamp: 0,
        data: planarData
    });

    // With the encoder set up:
    audioEncoder.encode(audiosData);
    audiosData.close();

    startTime = document.timeline.currentTime;
    recording = true;
    lastKeyFrame = -Infinity;
    framesGenerated = 0;

    canvasInterval = setInterval(updateCanvas, duration);
    encodeVideoFrame();
    intervalId = setInterval(encodeVideoFrame, 1000 / 30);

    const totalDuration = images.length * duration;
    endRecordingTimeoutId = setTimeout(endRecording, totalDuration);
};
I added it in here like this, but it gives this error:
mp4-muxer.js:1226 Uncaught Error: No audio track declared.
    at Muxer.addAudioChunkRaw (mp4-muxer.js:1226:15)
    at Muxer.addAudioChunk (mp4-muxer.js:1221:12)
    at output (scripts.js:203:44)
The error message is a pointer to what's wrong.
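Concretely, "No audio track declared" means the Muxer was constructed without an audio option; in the code above that happens whenever audioTrack was never acquired, since the muxer's audio field is `audioTrack ? {...} : undefined`. If audio is going to come from a file regardless of microphone access, the audio track needs to be declared unconditionally. A sketch of the relevant options object (the concrete values mirror the demo's and are assumptions about the setup):

```javascript
// Muxer options with the audio track always declared, so addAudioChunk
// has a track to write into. This object would be passed to
// new Mp4Muxer.Muxer(muxerOptions) alongside a target.
const muxerOptions = {
    // target: new Mp4Muxer.ArrayBufferTarget(), // as in the demo
    video: {
        codec: 'avc',
        width: 1280,   // assumption: your canvas dimensions
        height: 720
    },
    audio: {           // declared unconditionally, not behind a ternary
        codec: 'aac',
        sampleRate: 48000,
        numberOfChannels: 2
    },
    fastStart: 'in-memory'
};
```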
I'm sorry, but it is not my job to debug your application code. I would suggest you take this issue to ChatGPT, which should help you adequately with the types of issues you're facing! And it's infinitely patient :)
After searching Google and reading documentation for such solutions for a day, I found nothing.