Open OmarSastec opened 4 years ago
Description

After containerizing Jibri, we succeeded in running multiple recording/streaming sessions at the same time in the same VM. We are aiming to run tens of recording/streaming sessions in a single VM, but we can hardly reach 6 simultaneous recording sessions on a machine with an
Intel(R) Xeon(R) CPU E3-1245 v5 @ 3.50GHz
due to CPU throttling (100% usage). We're looking for a solution to reduce the consumption.

Current behavior

ffmpeg consumes a considerable amount of resources when re-encoding a video for streaming/recording.

Expected Behavior

Lower consumption would make it feasible to handle tens of recording sessions at the same time.

Possible Solution

Use the WebRTC MediaRecorder API to generate canvas blobs and send them to a microservice responsible for appending the output into a final video. https://webrtc.github.io/samples/src/content/getusermedia/record/

I'm wondering why this solution wasn't adopted, or what the drawbacks are.
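For illustration, a minimal client-side sketch of the idea (the canvas element id, endpoint URL, session id, and timeslice are all hypothetical, and a real setup would also need to mix in the conference audio):

```js
// Capture the canvas that renders the composed meeting view
// (hypothetical element id, for illustration only).
const canvas = document.getElementById('meeting-canvas');
const stream = canvas.captureStream(30); // 30 fps

// Encoding happens in the browser, so no server-side re-encoding is needed.
const recorder = new MediaRecorder(stream, { mimeType: 'video/webm;codecs=vp8' });

recorder.ondataavailable = async (event) => {
  if (event.data.size === 0) return;
  // Ship each encoded chunk to the (hypothetical) appender microservice.
  await fetch('https://recorder.example.com/append?session=abc123', {
    method: 'POST',
    headers: { 'Content-Type': 'video/webm' },
    body: event.data,
  });
};

recorder.start(5000); // emit a chunk every 5 seconds
```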
What do you think the difference is? Just offloading the Jibri VM by moving the ffmpeg process to another service?
I was thinking about offloading the Jibri VM by eliminating the re-encoding stage. The resulting video would be a concatenation of the recorded blobs, a WebM file in this case.
The other service would take care of concatenating the recording blobs into a single file capable of playback.
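A rough sketch of what that appender could look like (Node.js; the port, output path, and lack of validation are simplifications): chunks produced by a single MediaRecorder session can be appended byte-for-byte, since only the first chunk carries the WebM header and the following ones are continuation clusters.

```js
const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // Session id taken from the query string; a real service would validate it.
  const session = new URL(req.url, 'http://localhost').searchParams.get('session');

  // Appending each chunk in arrival order yields a playable WebM,
  // because only the first chunk of a MediaRecorder session has the header.
  const out = fs.createWriteStream(`/recordings/${session}.webm`, { flags: 'a' });
  req.pipe(out);
  out.on('finish', () => {
    res.writeHead(204);
    res.end();
  });
}).listen(8080);
```

Note this only holds for in-order delivery from a single recorder; merging blobs from different sessions or codecs would still require remuxing.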
If you skip the re-encoding stage, you lose the UI of the meeting. Even if you could always get the encoded video of the on-stage participant and save it, you would lose all the rest of the UI view. I don't see MediaRecorder doing anything different from Jibri: it takes the canvas and encodes it, just as ffmpeg grabs what is shown on the X server and encodes it.
@OmarSastec would you be so kind as to share your setup and configuration for running Jibri in containers? I have not been able to find many resources on the subject. Also, is there any chance of offloading ffmpeg to the integrated GPU? The spec for your chip says it has Intel® HD Graphics P530.
@OmarSastec you can disregard my last question; I found the information I was looking for on the docker-jitsi-meet page. Not sure how I missed that.