Closed by excing 1 year ago
Hey!
So what you'll want to do is combine (mix) the two audio signals before encoding and muxing them, not muxing them separately and trying to combine them later. We can do this easily using the WebAudio API, by just hooking up some MediaStreamSourceNodes to a MediaStreamDestinationNode:
let audioContext = new AudioContext()

// Create MediaStreamSource nodes
let micStream = await navigator.mediaDevices.getUserMedia({ audio: true })
let micSource = audioContext.createMediaStreamSource(micStream)
let micGain = audioContext.createGain()

let displayStream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true })
let displaySource = audioContext.createMediaStreamSource(displayStream)
let displayGain = audioContext.createGain()

// Create the MediaStreamDestination
let destination = audioContext.createMediaStreamDestination()

// Connect the microphone source to its gain node, then to the destination
micSource.connect(micGain)
micGain.connect(destination)

// Connect the display source to its gain node, then to the destination
displaySource.connect(displayGain)
displayGain.connect(destination)

// Set whatever volumes you want
micGain.gain.value = 1
displayGain.gain.value = 0.7

// Create the MediaStreamTrackProcessor
// (note: the constructor takes an options object with a `track` field)
let trackProcessor = new MediaStreamTrackProcessor({ track: destination.stream.getAudioTracks()[0] })
let consumer = new WritableStream({
    write(audioData) {
        // Assuming the AudioEncoder and the rest are set up already
        audioEncoder.encode(audioData)
        audioData.close()
    }
})
trackProcessor.readable.pipeTo(consumer)
I added some GainNodes so you can also control the volume of each source, which you might want to do.
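If you want the volumes to change over time instead of staying fixed, a GainNode's gain AudioParam supports scheduled ramps. A small sketch, assuming the micGain and audioContext variables from the snippet above (the duck helper name is just something I made up here):

```javascript
// Hypothetical helper: smoothly lower a GainNode to `target` over `seconds`.
function duck(gainNode, ctx, target, seconds) {
    const now = ctx.currentTime
    // Pin the current value so the ramp starts from it, not from an old event
    gainNode.gain.setValueAtTime(gainNode.gain.value, now)
    gainNode.gain.linearRampToValueAtTime(target, now + seconds)
}

// e.g. duck the microphone to 20% volume over half a second:
// duck(micGain, audioContext, 0.2, 0.5)
```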
For your second question, of course it is. You can use a second MediaStreamTrackProcessor to get VideoFrames from a video track, which is likely what you want to do for capturing the recorded screen. You can also create VideoFrames manually; you're not limited to a canvas, that's just something I did in my demo. To my knowledge, you can also route a MediaStream through a <video> element with its srcObject set, draw that to a canvas, and use either the element or the canvas to construct a new VideoFrame.
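To sketch the video side, mirroring the audio snippet above (this assumes an already-configured VideoEncoder; the encodeScreenVideo name is just for illustration):

```javascript
// Hypothetical sketch: pull VideoFrames from a screen-capture track and feed
// them to a VideoEncoder, analogous to the audio pipeline above.
async function encodeScreenVideo(videoEncoder) {
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true })
    const [track] = stream.getVideoTracks()
    const processor = new MediaStreamTrackProcessor({ track })
    await processor.readable.pipeTo(new WritableStream({
        write(frame) {
            videoEncoder.encode(frame)
            frame.close() // release the frame's memory as soon as it's encoded
        }
    }))
}
```

Closing each frame after encoding matters: VideoFrames hold on to GPU/driver memory, and the capture pipeline can stall if you keep too many alive.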
Do you still need help or can I close this issue?
Hi there! I'm wondering how to use this library for screen recording since I'm not using Canvas. Also, I'll be speaking into a microphone while recording and I'd like to merge the audio from the microphone with the video. Can you guide me on how to do that? Thanks!
I have a couple of questions. Can this library merge two audio segments into one media file? And is it possible to process videos without using Canvas?