Closed: Arzar closed this issue 2 years ago.
@starkwiz how do I disable what you mention in point number 2, the blurred video background?
In your jitsi-meet website folder, edit interface_config.js and set DISABLE_VIDEO_BACKGROUND to true as below, then reload/restart the nginx service.
/**
* Whether or not the blurred video background for large video should be
* displayed on browsers that can support it.
*/
DISABLE_VIDEO_BACKGROUND: true,
Nice work @starkwiz... I think that probably also explains why we haven't seen this issue but others have (since it comes down to the hardware being used to run Jibri). I'm sure there's also a parameter we could pass to limit that queue size, though that will likely trade off against other problems; still, it could be interesting to experiment with.
Yes, that would be interesting, but I still doubt it would help, because in a way this happens because the CPU is just not able to process the captured video in real time, so the queue will just keep on increasing; and if it doesn't, we may lose frames, because eventually ffmpeg has to keep those frames somewhere while it processes them. I am not sure if it's possible to create a temporary file buffer of sorts so that ffmpeg keeps processing the captured audio/video even after the meeting ends, while not putting too much pressure on the CPU. For the past 2 days I had no ALSA buffer xrun issues, but today there were two such failures, and I am not yet sure what factor other than that video background thing could cause a CPU spike. Maybe I should also go a little easier on frame rates, like 25 fps. I've also been researching ffmpeg parameters, and it appears that using libx264rgb avoids colour conversion when capturing from Xorg. Has anyone tried this? And is there any issue with the output mp4 being incompatible with some players, or with the colours, etc.?
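For anyone wanting to try that, a minimal sketch of the libx264rgb idea (the capture geometry, frame rate and Matroska container here are assumptions, not Jibri's actual command; note that RGB-encoded H.264 is poorly supported by many players, which is exactly the compatibility concern above):
ffmpeg -f x11grab -framerate 25 -video_size 1280x720 -i :0.0+0,0 -c:v libx264rgb -preset veryfast -crf 25 output.mkv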
I honestly don't know what else to do...
As soon as I start recording, the CPU and memory consumption of my machine go up!!
[Screenshots: system resource usage BEFORE and AFTER starting a recording]
Could someone help me? I can pay to fix this.
I am guessing it's mostly the Google Chrome version, but I can help fix the issue. DM me on Twitter through my profile here if you want me to take a look.
Note: the ffmpeg version in Debian 10 has been updated in a recent point release and is now based on upstream version 4.1.6. It would be interesting to know whether those who had to rebuild ffmpeg from source to work around this problem still need to do so or whether, on the contrary, version 4.1.6 in Debian now works well enough for them.
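A quick way to compare what you are actually running against what the distro now ships (apt-based systems):
ffmpeg -version | head -n1   # version of the binary in your PATH
apt policy ffmpeg            # candidate version from the Debian repos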
We are controlling the RAM allocation and the amount of CPU consumed by ffmpeg at videoencrypt.com using "nice", but only to some extent: the server does not go down, but it leads to longer video processing times. Still looking for a better solution.
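For context, a minimal sketch of that nice-based approach (the pgrep lookup assumes the process is literally named ffmpeg):
nice -n 10 ffmpeg -i input.mkv output.mp4        # start at reduced scheduling priority
sudo renice -n 10 -p "$(pgrep -x ffmpeg)"        # or demote an already-running ffmpeg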
Same here. VirtualServer with Ubuntu 18.04, 4 CPUs, 8 GB RAM. Very interestingly, if I set "disableThirdPartyRequests: true," (Gravatar) in
/etc/jitsi/meet/meet.mydomain.com-config.js
my memory usage is stable. Can anybody confirm this?
Oh my god, well done! Disabled ThirdPartyRequests and VideoBackground = recorded 12 minutes of video with 2 devices connected at 1022x1108, and Jibri used something like 800 MB of RAM. Love you guys🚀
A year later, I arrived here looking for answers, with hundreds of questions about why Jibri is designed with so many hacks.
I am gonna try what you suggested @starkwiz.
And I have some queries:
- Is there any relation between the number of participants and the amount of CPU/memory the Jibri components use?
- Tangent question: is anyone working on alternative ways of recording?
The CPU and RAM load depend on the number of participants in a meeting. It's quite straightforward, as the same happens on your desktop: Chrome needs more CPU for a large meeting with 30 people than for a meeting with just 3.
The main rationale for why we use Chrome as a compositor for recording with Jibri is that it's the best method we have to go from multiple WebRTC streams of audio and video to a single video with one audio and one video stream. Any recorder will need to composite the videos, choose the active speaker, mix the audio, etc. Chrome happens to already do this, and the jitsi-meet client is custom-built for this job, so re-using it for recording has been the best method without needing to support a whole separate client. Would it be possible to do this in a separate client? Absolutely, but then said client would need to be regularly updated whenever new features of Jitsi or Chrome dropped. So it's a reasonable question to ask why it's designed this way, but the short answer is that with a small team, this was our best answer.
Thanks for the detailed answer @aaronkvanmeerten :bow:. I beefed up the machine to 4 cores / 8 GB RAM, and now the recorder/ffmpeg seems OK, but I see a lot of CPU usage from the Google Chrome process when someone starts sharing their screen.
Is this the expected behavior?
Might not be the right thread, but still, here is a quick summary of it all.
/opt/google/chrome/chrome --type=gpu-process --field-trial-handle=8281191735735278371,1311878189472908780,131072 --enable-logging --log-level=0 --user-data-dir=/tmp/.com.google.Chrome.q3HCh0 --gpu-preferences=QAAAAAAAAAAgAAAQAAAAAAAAAAAAAAAAAABgAAAAAAAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAgAAAAAAAAA --enable-logging --log-level=0 --shared-files
I am using the latest versions of Chrome, ChromeDriver, and Jibri:
apt info jibri
Package: jibri
Version: 8.0-61-g99288dc-1
Priority: optional
Section: net
Maintainer: dev@jitsi.org
Installed-Size: 49.2 MB
Depends: default-jre-headless | java8-runtime-headless | java8-runtime, ffmpeg, curl, alsa-utils, icewm, xdotool, xserver-xorg-input-void, xserver-xorg-video-dummy
Download-Size: 44.2 MB
APT-Manual-Installed: yes
APT-Sources: https://download.jitsi.org stable/ Packages
Description: Jibri
Jibri can be used to capture data from a Jitsi Meet conference and record it to a file or stream it to a url
N: There are 4 additional records. Please use the '-a' switch to see them.
$
$ chromedriver --version
ChromeDriver 88.0.4324.96 (68dba2d8a0b149a1d3afac56fa74648032bcf46b-refs/branch-heads/4324@{#1784})
$
$ google-chrome --version
Google Chrome 88.0.4324.96
I had the same issue with the RAM being all used up quickly. Upgrading RAM and CPU on the vServer just postpones the issue to a later point in time.
The issue could be resolved by compiling ffmpeg from source. It works beautifully now, using only around 1 GB of RAM. Here are the instructions: http://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
Altogether the system runs best with 8 GB of RAM or more.
Is there any particular flag you passed when building to help with memory use?
From the flags in the compilation guide I just omitted a few codecs I didn't need and didn't want to compile; otherwise the flags were used according to the article linked above. I should add that I have not investigated further yet; I am sure ffmpeg could be compiled even slimmer, omitting more flags and libs, for dedicated use with Jibri. We may want to document this somewhere some day.
Btw, RAM usage while Jibri was recording was around 1 to 1.1 GB, and it has dropped to 0.9 GB now that the server is idling with its other tasks. So the RAM usage for Jibri was in line with what I would consider proportionate.
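For reference, a minimal sketch of such a slimmed-down source build (which codecs to drop is an assumption about your deployment; keep libx264 since Jibri encodes H.264, and note that libx264 requires --enable-gpl):
cd ffmpeg
./configure --enable-gpl --enable-libx264 --disable-doc
make -j"$(nproc)"
sudo make install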
Our only real solution was to increase the number of vCPUs. Increasing from 4 to 8 CPUs was the fix we used last time. It seems as if ffmpeg begins eating memory when the CPUs are not giving it enough power...
Same here. VirtualServer with Ubuntu 18.04, 4 CPUs, 8 GB RAM. Very interestingly, if I set "disableThirdPartyRequests: true," (Gravatar) in /etc/jitsi/meet/meet.mydomain.com-config.js, my memory usage is stable. Can anybody confirm this?
Oh my god, well done! Disabled ThirdPartyRequests and VideoBackground = recorded 12 minutes of video with 2 devices connected at 1022x1108, and Jibri used something like 800 MB of RAM. Love you guys🚀
I think this worked because of the 4-CPU setting; one session with 2 devices won't cause you any issues. In my experiments, 2 vCPUs without scaling down the resolution is roughly the minimum requirement. Once you see CPU usage around 100% and processes waiting in the run queue, you know ffmpeg is piling up frames, and then memory is quickly exhausted.
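A quick way to watch for that condition (vmstat ships in the procps package on Debian/Ubuntu):
vmstat 1
# If the first column, r (runnable processes), stays above your vCPU count,
# encoding is falling behind and frames will start queuing in RAM.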
A year later, I arrived here looking for answers, with hundreds of questions about why Jibri is designed with so many hacks.
I am gonna try what you suggested @starkwiz.
And I have some queries:
- Is there any relation between the number of participants and the amount of CPU/memory the Jibri components use?
- Tangent question: is anyone working on alternative ways of recording?
Unlike other conferencing providers, Jitsi Meet is the only true SFU allowing multiple screens to be shared at the same time. Therefore Jibri is designed as a silent participant that records "the whole" conference, including both A and B when they share their screens, instead of a very high quality video recorded on A's side (maybe the host) that would miss what B shared during the conference.
I'm also curious about the resource consumption rate (CPU/memory/network), but the number of participants makes a large difference. For example, when the user count doubles, network consumption goes up rapidly, but CPU/memory won't double.
The best suggestion is to never let CPU usage exceed 80%, no matter how customers are using the platform. Once you trigger CPU wait, everything degrades for all ongoing meetings.
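As a rough spot-check against that 80% threshold (assuming the sysstat package is installed):
mpstat 1 1 | awk '/Average.*all/ {print 100 - $NF}'   # total CPU utilisation in percent (100 minus %idle)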
A year later, I arrived here looking for answers, with hundreds of questions about why Jibri is designed with so many hacks. I am gonna try what you suggested @starkwiz
Yes, please let me know how you go.
And I have some queries:
- Is there any relation between the number of participants and the amount of CPU/memory the Jibri components use?
An increase in the number of participants mainly means load on network bandwidth, as the number of video/audio streams increases, but this is mostly offloaded to the Jitsi Videobridge. So there is not much extra load on the recorder/Jibri server, maybe unless there are more than 5+ participants. This is based on my understanding of Jitsi and I could be wrong, but I think Jitsi is quite optimised to handle multiple video/audio streams if you have properly balanced all the components, especially the Jitsi Videobridge.
- Tangent question: is anyone working on alternative ways of recording?
No, I don't think the Jitsi team is working on alternate ways of recording, because of the way Jitsi works.
@Arzar The solution is simple: set hard and soft limits for a specific user like jibri. The file can be found at /etc/security/limits.conf;
then set the correct value. In my case I set the jibri user to only consume 1 CPU core:
jibri hard core 1
jibri soft core 1
After saving the file, the processing done by the jibri user should not exceed 1 CPU core.
then set the correct value. In my case I set the jibri user to only consume 1 CPU core.
Have you tried it? The reason ffmpeg eats all the memory is that it doesn't have enough CPU power to process all the frames. A more suitable solution would be to lower the quality of the stream/recording so that the CPU is able to keep up and doesn't have to queue jobs, which eventually takes up all the RAM.
Yes, I already tested it today, but after the limit is reached, ffmpeg is terminated. If that's the case, the only way to avoid ffmpeg being terminated is to increase the capacity of the server. The minimum spec for Jibri per instance is 16 GB RAM and 4 vCPU cores.
Or you can just leave Jibri and integrate RecordRTC into your jitsi-meet frontend. Then, during recording, the high CPU/memory consumption happens on the client side, not the server side. The problem is incompatibilities between different platforms like Mac, iOS, Linux, and Windows, but we've successfully implemented a recording session by integrating RecordRTC into the jitsi-meet frontend.
I have one Jitsi server - up to date but with an older version of the website (circa 6 months old) - on ESXi 6.7 with Jibri 8.0-61-g99288dc-1 / ffmpeg 3.4.8 / Chrome 86.0.4240.193 + matching ChromeDriver (4 CPUs & 8 GB RAM, Ubuntu 18.04), and it can sit and record & stream literally for 24++ hours (repeated this process multiple times with 50+ participants - a mix of Linux/Windows VMs set up with OBS/virtual mic, plus iOS/Android devices). Memory usage stays at around 3 GB for the duration.
If you upgrade everything to the latest versions (latest Jibri / ffmpeg 4.3.2 / Chrome 90 etc.) - if you're lucky, it stays alive for 30-45 minutes.
Copy the Jibri VM (no upgrades) to another ESXi 6.7 server & change the hosts file and jibri.conf to point to a brand-new default-install jitsi-meet with all updates - it lasts 20 minutes to 2 hours maximum.
If you then upgrade everything to the latest versions (latest Jibri / ffmpeg 4.3.2 / Chrome 90 etc.) - if you're lucky, it stays alive for 5 minutes.
Of note: in the log files (ffmpeg.0.txt), memory creep occurs whenever speed is less than 1 -
INFO: [119] ffmpeg.call() frame=45192 fps= 29 q=23.0 size= 273759kB time=00:25:06.37 bitrate=1488.8kbits/s speed=0.975x
INFO: [119] ffmpeg.call() frame=45207 fps= 29 q=22.0 size= 273830kB time=00:25:06.86 bitrate=1488.7kbits/s speed=0.975x
The more regularly you get "ALSA buffer xrun" lines like
INFO: [267] ffmpeg.call() [alsa @ 0x56324464b2e0] ALSA buffer xrun.
the more memory is used and the closer it is to crashing.
I built the VMs as a test bed to reproduce live reported issues & can recreate crashes at will, dependent on resources allocated vs number of participants. Using LastN makes no difference (in fact I'd go as far as to say Jibri is more stable if it is absent - but that's an observation rather than something "scientifically tested").
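(For anyone wanting to reproduce the LastN part: it corresponds to the channelLastN option in the deployment's /etc/jitsi/meet/<domain>-config.js; the key name is per jitsi-meet's config.js, so verify against your version.)
channelLastN: 5,   // forward video for only the 5 most recent active speakers; -1 disables the limit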
If anyone needs some VM clients pointed at a server for a test, let me know. Our goal was to stream to multiple destinations for some 24-hour events, hence the testing.
Same problem when we have >40 users: the Jibri host ate 80% of an 8-core 2.6 GHz CPU and 100% of 20 GB RAM (Ubuntu 20.04.2 LTS). Then jibri1 fails, the conference switches to the jibri2 host, and the same happens there. This repeated 3 times for each host (6 failures in half an hour). journalctl shows: jibri Out of memory: Killed process ffmpeg
I have a solution for this (I am actually just writing it up from all our testing, to publish). I can point a Jibri at your server to prove it if you want me to. The bottom line is that it is a timing issue between ALSA & ffmpeg caused by VMs (on ESXi/AWS/Xen etc.). I have just run 2 x Jibri live streams for 30 hours using less than 2 GB RAM with no memory leaks.
Try this ffmpeg piping trick to override the default thread queue size without changing the original code; it works on any version of Jibri.
ubuntu@jibri-xenial:~$ mv -v /usr/bin/ffmpeg /usr/bin/ffmpeg-original
ubuntu@jibri-xenial:~$ vim /usr/bin/ffmpeg
Then add this script as /usr/bin/ffmpeg:
#!/bin/bash
# Wrapper that rewrites Jibri's ffmpeg arguments before calling the real binary.
ARGS=$@
# Halve the input thread queue size to cap how much ffmpeg can buffer.
ARGS=$(echo $ARGS | sed -e "s/-thread_queue_size 4096/-thread_queue_size 2048/g")
# Keep the rewritten command line around for debugging.
echo -n $ARGS >> /tmp/ffmpeg.log
exec /usr/bin/ffmpeg-original $ARGS
Now save /usr/bin/ffmpeg and make it executable:
ubuntu@jibri-xenial:~$ chmod +x /usr/bin/ffmpeg
Note: Make sure nobody is using the Jibri instance or ffmpeg; there is no need to restart the jibri service when applying this change.
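To confirm the wrapper is actually being picked up, check the log it writes (path as in the script above):
tail -c 300 /tmp/ffmpeg.log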
Try this ffmpeg piping trick to override the default thread queue size without changing the original code; it works on any version of Jibri.
That did not help. We created a test conference with 18 users and got 80% CPU and 80% RAM. As far as I could see, about 6 cores were used by Chrome 90, and ffmpeg had 150% CPU and more than 16 GB RAM.
Was this recording or streaming? The offer stands - if you are behind a firewall, I can set up a static IP & you only need to open 5222/tcp for it to work.
Recording, on a KVM hypervisor. Never had these troubles with a Hyper-V Ubuntu 16 Jibri, by the way.
I added cores to jibri2, up to 16. Two users, and ffmpeg was killed after 3 minutes (without the ffmpeg piping trick).
I have experienced this on ESXi 6.5 & 6.7, Xen, Docker and Hyper-V - albeit with Ubuntu 18 or 20 on most; I tried Ubuntu 16 on ESXi with the same problem. Is this also the latest Jibri, with ffmpeg at 1280x720 or 1920x1080? It's worse with the latter in my experience. It is not the number of CPUs or the RAM - it is purely timing between the ALSA sound and ffmpeg.
Thanks a lot @sblotus, you helped me nail it down.
So I just finished my tests.
Basically, last time it worked because I fiddled with the ffmpeg scripts, thanks to @emrahcom, and I put in something like this:
...
ARGS=`echo $ARGS | \
sed "s#2976k#1900k#g"`
...
And the previous version was also sending 720p instead of 1080p.
But basically the "memory leak" comes from our ffmpeg not being fast enough. So the only thing you have to watch is:
tail -f ./config/jibri/logs/ffmpeg.0.txt
If it looks like this:
2021-05-27 15:49:42.844 INFO: [58] ffmpeg.log() frame= 867 fps= 30 q=23.0 size= 1732kB time=00:00:28.86 bitrate= 491.4kbits/s speed=1.01x
2021-05-27 15:49:43.844 INFO: [58] ffmpeg.log() frame= 884 fps= 30 q=24.0 size= 1911kB time=00:00:29.44 bitrate= 531.6kbits/s speed=1.01x
2021-05-27 15:49:43.845 INFO: [58] ffmpeg.log() frame= 900 fps= 30 q=24.0 size= 2113kB time=00:00:29.97 bitrate= 577.4kbits/s speed=1.01x
2021-05-27 15:49:44.845 INFO: [58] ffmpeg.log() frame= 913 fps= 30 q=21.0 size= 2252kB time=00:00:30.40 bitrate= 606.9kbits/s speed= 1x
2021-05-27 15:49:44.845 INFO: [58] ffmpeg.log() frame= 927 fps= 30 q=24.0 size= 2380kB time=00:00:30.86 bitrate= 631.7kbits/s speed= 1x
2021-05-27 15:49:45.846 INFO: [58] ffmpeg.log() frame= 943 fps= 30 q=23.0 size= 2556kB time=00:00:31.40 bitrate= 666.9kbits/s speed= 1x
2021-05-27 15:49:45.846 INFO: [58] ffmpeg.log() frame= 957 fps= 30 q=25.0 size= 2741kB time=00:00:31.86 bitrate= 704.6kbits/s speed= 1x
2021-05-27 15:49:46.846 INFO: [58] ffmpeg.log() frame= 972 fps= 30 q=25.0 size= 2912kB time=00:00:32.36 bitrate= 737.1kbits/s speed= 1x
2021-05-27 15:49:46.847 INFO: [58] ffmpeg.log() frame= 986 fps= 30 q=21.0 size= 3066kB time=00:00:32.83 bitrate= 765.0kbits/s speed=0.998x
2021-05-27 15:49:47.847 INFO: [58] ffmpeg.log() frame= 999 fps= 30 q=25.0 size= 3209kB time=00:00:33.27 bitrate= 790.1kbits/s speed=0.996x
2021-05-27 15:49:47.848 INFO: [58] ffmpeg.log() frame= 1014 fps= 30 q=24.0 size= 3437kB time=00:00:33.76 bitrate= 833.7kbits/s speed=0.995x
2021-05-27 15:49:48.848 INFO: [58] ffmpeg.log() frame= 1028 fps= 30 q=25.0 size= 3651kB time=00:00:34.23 bitrate= 873.7kbits/s speed=0.993x
2021-05-27 15:49:48.849 INFO: [58] ffmpeg.log() frame= 1041 fps= 30 q=25.0 size= 3824kB time=00:00:34.66 bitrate= 903.5kbits/s speed=0.991x
2021-05-27 15:49:49.849 INFO: [58] ffmpeg.log() frame= 1055 fps= 30 q=25.0 size= 4001kB time=00:00:35.13 bitrate= 932.9kbits/s speed=0.989x
2021-05-27 15:49:49.849 INFO: [58] ffmpeg.log() frame= 1069 fps= 30 q=25.0 size= 4168kB time=00:00:35.60 bitrate= 959.0kbits/s speed=0.988x
It means it is not fast enough, and the queue is accumulating in memory. Then it crashes.
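As a rough filter, here is a sketch that surfaces just the sub-realtime lines in that log (same path as above; the regex is an assumption based on the log format shown):
grep -oE 'speed=0\.[0-9]+x' ./config/jibri/logs/ffmpeg.0.txt | sort | uniq -c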
So either you fiddle with the ffmpeg args (resolution, preset...) or you need better hardware. I also checked that with a Hetzner VM without a dedicated CPU it doesn't work, whereas with a dedicated modern CPU it does.
I think we can close this issue, as we just need a faster CPU to be able to encode in real time, or lower quality so that ffmpeg can keep up.
Thanks again @sblotus and @emrahcom for your help :)
Not sure; my last crash log:
2021-05-27 13:32:29.683 INFO: [74] ffmpeg.log() [alsa @ 0x55f9b57a94c0] ALSA buffer xrun.
2021-05-27 13:32:29.733 INFO: [74] ffmpeg.log() frame= 1935 fps=8.2 q=26.0 size= 16896kB time=00:01:04.46 bitrate=2147.0kbits/s speed=0.272x
2021-05-27 13:32:29.734 INFO: [74] ffmpeg.log() frame= 1935 fps=8.1 q=26.0 size= 16896kB time=00:01:04.46 bitrate=2147.0kbits/s speed=0.271x EOF
Try this ffmpeg piping trick to override the default thread queue size without changing the original code; it works on any version of Jibri.
This does not work for me - it lasts 3 minutes and 38 seconds (ran it three times) & consistently fails.
2021-05-27 18:34:14.556 INFO: [59] ffmpeg.log() ffmpeg version 4.3.2-0york0~18.04 Copyright (c) 2000-2021 the FFmpeg developers
2021-05-27 18:34:14.557 INFO: [59] ffmpeg.log() built with gcc 7 (Ubuntu 7.5.0-3ubuntu1~18.04)
2021-05-27 18:34:14.671 INFO: [59] ffmpeg.log() configuration: --prefix=/usr --extra-version='0york0~18.04' --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librabbitmq --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libsrt --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libzimg --enable-pocketsphinx --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared
2021-05-27 18:34:14.671 INFO: [59] ffmpeg.log() libavutil 56. 51.100 / 56. 51.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavcodec 58. 91.100 / 58. 91.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavformat 58. 45.100 / 58. 45.100
2021-05-27 18:34:14.672 INFO: [59] ffmpeg.log() libavdevice 58. 10.100 / 58. 10.100
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libavfilter 7. 85.100 / 7. 85.100
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libavresample 4. 0. 0 / 4. 0. 0
2021-05-27 18:34:14.673 INFO: [59] ffmpeg.log() libswscale 5. 7.100 / 5. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() libswresample 3. 7.100 / 3. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() libpostproc 55. 7.100 / 55. 7.100
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() [x11grab @ 0x55b49c46e980] Stream #0: not enough frames to estimate rate; consider increasing probesize
2021-05-27 18:34:14.674 INFO: [59] ffmpeg.log() Input #0, x11grab, from ':0.0+0,0':
2021-05-27 18:34:14.675 INFO: [59] ffmpeg.log() Duration: N/A, start: 1622136854.649970, bitrate: 1990656 kb/s
2021-05-27 18:34:15.042 INFO: [59] ffmpeg.log() Stream #0:0: Video: rawvideo (BGR[0] / 0x524742), bgr0, 1920x1080, 1990656 kb/s, 30 fps, 1000k tbr, 1000k tbn, 1000k tbc
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Guessed Channel Layout for Input Stream #1.0 : stereo
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Input #1, alsa, from 'plug:bsnoop':
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Duration: N/A, start: 1622136854.291646, bitrate: 1536 kb/s
2021-05-27 18:34:15.043 INFO: [59] ffmpeg.log() Stream #1:0: Audio: pcm_s16le, 48000 Hz, stereo, s16, 1536 kb/s
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream mapping:
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream #0:0 -> #0:0 (rawvideo (native) -> h264 (libx264))
2021-05-27 18:34:15.044 INFO: [59] ffmpeg.log() Stream #1:0 -> #0:1 (pcm_s16le (native) -> aac (native))
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() Press [q] to stop, [?] for help
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] using cpu capabilities: MMX2 SSE2Fast SSSE3 SSE4.2 AVX
2021-05-27 18:34:15.045 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] profile High, level 4.0
2021-05-27 18:34:16.046 INFO: [59] ffmpeg.log() [libx264 @ 0x55b49c4a3ec0] 264 - core 155 r2917 0a84d98 - H.264/MPEG-4 AVC codec - Copyleft 2003-2018 - http://www.videolan.org/x264.html - options: cabac=1 ref=1 deblock=1:0:0 analyse=0x3:0x113 me=hex subme=2 psy=1 psy_rd=1.00:0.00 mixed_ref=0 me_range=16 chroma_me=1 trellis=0 8x8dct=1 cqm=0 deadzone=21,11 fast_pskip=1 chroma_qp_offset=0 threads=4 lookahead_threads=4 sliced_threads=1 slices=4 nr=0 decimate=1 interlaced=0 bluray_compat=0 constrained_intra=0 bframes=0 weightp=1 keyint=60 keyint_min=6 scenecut=40 intra_refresh=0 rc_lookahead=0 rc=crf mbtree=0 crf=25.0 qcomp=0.60 qpmin=0 qpmax=69 qpstep=4 vbv_maxrate=2976 vbv_bufsize=5952 crf_max=0.0 nal_hrd=none filler=0 ip_ratio=1.40 aq=1:1.00
2021-05-27 18:34:16.046 INFO: [59] ffmpeg.log() Output #0, flv, to 'rtmp://a.rtmp.youtube.com/live2/gcqg-drrc-mkc1-wh01-29yd':
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() encoder : Lavf58.45.100
2021-05-27 18:34:16.047 INFO: [59] ffmpeg.log() Stream #0:0: Video: h264 (libx264) ([7][0][0][0] / 0x0007), yuv420p(progressive), 1920x1080, q=-1--1, 30 fps, 1k tbn, 30 tbc
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() encoder : Lavc58.91.100 libx264
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() Side data:
2021-05-27 18:34:16.048 INFO: [59] ffmpeg.log() cpb: bitrate max/min/avg: 2976000/0/0 buffer size: 5952000 vbv_delay: N/A
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() Stream #0:1: Audio: aac (LC) ([10][0][0][0] / 0x000A), 44100 Hz, stereo, fltp, 128 kb/s
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() Metadata:
2021-05-27 18:34:16.049 INFO: [59] ffmpeg.log() encoder : Lavc58.91.100 aac
2021-05-27 18:34:16.050 INFO: [59] ffmpeg.log() frame= 11 fps=0.0 q=21.0 size= 7kB time=00:00:00.34 bitrate= 165.4kbits/s speed=0.658x
2021-05-27 18:34:16.050 INFO: [59] ffmpeg.log() frame= 23 fps= 22 q=21.0 size= 9kB time=00:00:00.74 bitrate= 103.7kbits/s speed=0.699x
2021-05-27 18:34:17.051 INFO: [59] ffmpeg.log() frame= 36 fps= 23 q=21.0 size= 12kB time=00:00:01.16 bitrate= 83.8kbits/s speed=0.732x
2021-05-27 18:34:18.052 INFO: [59] ffmpeg.log() frame= 51 fps= 24 q=21.0 size= 15kB time=00:00:01.67 bitrate= 73.0kbits/s speed=0.794x
2021-05-27 18:34:18.052 INFO: [59] ffmpeg.log() frame= 68 fps= 26 q=21.0 size= 23kB time=00:00:02.23 bitrate= 83.8kbits/s speed=0.85x
2021-05-27 18:34:19.053 INFO: [59] ffmpeg.log() frame= 82 fps= 26 q=21.0 size= 25kB time=00:00:02.70 bitrate= 76.9kbits/s speed=0.858x
2021-05-27 18:34:19.054 INFO: [59] ffmpeg.log() frame= 96 fps= 26 q=21.0 size= 28kB time=00:00:03.16 bitrate= 72.1kbits/s speed=0.859x
2021-05-27 18:34:20.057 INFO: [59] ffmpeg.log() frame= 113 fps= 27 q=21.0 size= 31kB time=00:00:03.73 bitrate= 67.8kbits/s speed=0.889x
2021-05-27 18:34:20.058 INFO: [59] ffmpeg.log() frame= 128 fps= 27 q=21.0 size= 38kB time=00:00:04.24 bitrate= 74.0kbits/s speed=0.903x
2021-05-27 18:34:21.058 INFO: [59] ffmpeg.log() frame= 143 fps= 27 q=21.0 size= 41kB time=00:00:04.73 bitrate= 71.0kbits/s speed=0.91x
2021-05-27 18:34:21.059 INFO: [59] ffmpeg.log() frame= 157 fps= 28 q=21.0 size= 44kB time=00:00:05.20 bitrate= 68.7kbits/s speed=0.911x
2021-05-27 18:34:22.059 INFO: [59] ffmpeg.log() frame= 177 fps= 28 q=21.0 size= 47kB time=00:00:05.87 bitrate= 65.8kbits/s speed=0.941x
2021-05-27 18:34:22.059 INFO: [59] ffmpeg.log() frame= 191 fps= 28 q=21.0 size= 54kB time=00:00:06.33 bitrate= 70.4kbits/s speed=0.934x
2021-05-27 18:34:23.060 INFO: [59] ffmpeg.log() frame= 207 fps= 28 q=21.0 size= 57kB time=00:00:06.87 bitrate= 68.3kbits/s speed=0.942x
2021-05-27 18:34:23.060 INFO: [59] ffmpeg.log() frame= 222 fps= 28 q=26.0 size= 164kB time=00:00:07.36 bitrate= 182.7kbits/s speed=0.939x
2021-05-27 18:34:24.060 INFO: [59] ffmpeg.log() frame= 230 fps= 27 q=21.0 size= 302kB time=00:00:07.64 bitrate= 324.1kbits/s speed=0.912x
2021-05-27 18:34:24.061 INFO: [59] ffmpeg.log() frame= 240 fps= 27 q=23.0 size= 465kB time=00:00:07.96 bitrate= 478.3kbits/s speed=0.894x
2021-05-27 18:34:25.061 INFO: [59] ffmpeg.log() frame= 250 fps= 26 q=23.0 size= 613kB time=00:00:08.31 bitrate= 604.1kbits/s speed=0.879x
2021-05-27 18:34:25.062 INFO: [59] ffmpeg.log() frame= 260 fps= 26 q=24.0 size= 749kB time=00:00:08.63 bitrate= 710.5kbits/s speed=0.867x
2021-05-27 18:34:26.066 INFO: [59] ffmpeg.log() frame= 271 fps= 26 q=23.0 size= 901kB time=00:00:09.01 bitrate= 818.8kbits/s speed=0.858x
2021-05-27 18:34:26.066 INFO: [59] ffmpeg.log() frame= 282 fps= 26 q=21.0 size= 1070kB time=00:00:09.36 bitrate= 936.0kbits/s speed=0.851x
2021-05-27 18:34:27.067 INFO: [59] ffmpeg.log() frame= 291 fps= 25 q=26.0 size= 1206kB time=00:00:09.66 bitrate=1021.7kbits/s speed=0.839x
I have installed Ubuntu 18.04, the latest Jibri/ffmpeg, Chrome 90, etc. on an old 4th-gen Intel i3 with 4 GB RAM & it just works, 24x7 - sample log lines below - 4 CPUs at 50 to 60% constantly, less than 1 GB RAM used:
2021-05-27 18:41:35.161 INFO: [55] ffmpeg.log() frame=732090 fps= 30 q=24.0 size= 7414826kB time=06:46:42.98 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:35.161 INFO: [55] ffmpeg.log() frame=732106 fps= 30 q=21.0 size= 7414907kB time=06:46:43.50 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:36.161 INFO: [55] ffmpeg.log() frame=732121 fps= 30 q=20.0 size= 7415088kB time=06:46:44.01 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:36.161 INFO: [55] ffmpeg.log() frame=732136 fps= 30 q=24.0 size= 7415224kB time=06:46:44.52 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:37.161 INFO: [55] ffmpeg.log() frame=732151 fps= 30 q=25.0 size= 7415569kB time=06:46:45.00 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:37.161 INFO: [55] ffmpeg.log() frame=732166 fps= 30 q=24.0 size= 7415762kB time=06:46:45.54 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:38.162 INFO: [55] ffmpeg.log() frame=732182 fps= 30 q=23.0 size= 7415890kB time=06:46:46.03 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:38.162 INFO: [55] ffmpeg.log() frame=732197 fps= 30 q=23.0 size= 7416013kB time=06:46:46.54 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:39.162 INFO: [55] ffmpeg.log() frame=732212 fps= 30 q=23.0 size= 7416253kB time=06:46:47.07 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:39.162 INFO: [55] ffmpeg.log() frame=732228 fps= 30 q=22.0 size= 7416332kB time=06:46:47.56 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:40.162 INFO: [55] ffmpeg.log() frame=732243 fps= 30 q=23.0 size= 7416429kB time=06:46:48.07 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:40.162 INFO: [55] ffmpeg.log() frame=732258 fps= 30 q=19.0 size= 7416519kB time=06:46:48.56 bitrate=2489.1kbits/s speed= 1x
2021-05-27 18:41:41.162 INFO: [55] ffmpeg.log() frame=732274 fps= 30 q=21.0 size= 7416780kB time=06:46:49.10 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:41.162 INFO: [55] ffmpeg.log() frame=732288 fps= 30 q=25.0 size= 7417066kB time=06:46:49.60 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:42.163 INFO: [55] ffmpeg.log() frame=732304 fps= 30 q=23.0 size= 7417328kB time=06:46:50.10 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:42.163 INFO: [55] ffmpeg.log() frame=732319 fps= 30 q=23.0 size= 7417444kB time=06:46:50.60 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:43.163 INFO: [55] ffmpeg.log() frame=732335 fps= 30 q=23.0 size= 7417555kB time=06:46:51.13 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:43.163 INFO: [55] ffmpeg.log() frame=732350 fps= 30 q=24.0 size= 7417786kB time=06:46:51.67 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:44.163 INFO: [55] ffmpeg.log() frame=732366 fps= 30 q=21.0 size= 7417865kB time=06:46:52.16 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:44.163 INFO: [55] ffmpeg.log() frame=732381 fps= 30 q=23.0 size= 7417936kB time=06:46:52.67 bitrate=2489.2kbits/s speed= 1x
2021-05-27 18:41:45.163 INFO: [55] ffmpeg.log() frame=732396 fps= 30 q=22.0 size= 7418057kB time=06:46:53.18 bitrate=2489.2kbits/s speed= 1x
In both cases your ffmpeg speed is < 1x, so frames are queuing in memory. It looks like a leak, but it is just normal behaviour: you need to tune your VM and ffmpeg to reach a speed of 1x :)
@pierreozoux any particular settings for the VM? We have tried many; some work for an hour or two, but none so far provide 24x7 reliability.
@pierreozoux in Hetzner, using a VM with a dedicated CPU worked, whereas a non-dedicated one didn't. (I suspect Hetzner does throttle, and ffmpeg is probably hammering the CPU; unfortunately you can't monitor your credits as you can on AWS, so it is hard to say.) But then, play with bitrate and resolution; I'm sure any VM can stream at 1x, but yeah, it depends on the size of the pipe it has to ingest :)
On GCP/GKE, we've had much better luck with AMD Epyc machines (N2D) than standard ones (N1 - Intel up to Skylake). We haven't done extensive testing, but with 2 cores and 4 GB RAM, N2D nodes could run Jibri for over half an hour, while N1 nodes with the same or even better specs overloaded and crashed within minutes. If your CPU is not fast enough, frames will start buffering in RAM - it's as simple as that, as far as I understood it.
Shared vCPUs are a no-go for any serious workload on any provider; this should be obvious. Their performance is extremely inconsistent.
@saghul I think we can close this discussion and continue in the forum if needed.
@pierreozoux @saghul further testing reveals -
1) Bare-metal i3 or i5 4th gen - Ubuntu 18.04.05 + updates + latest ffmpeg & Jibri & Chrome 91 - stable as a rock for 36 hours of streaming or recording (1.2-1.4 GB RAM in use), 20 users in the conference.
2) ESXi 6.7 VM - same build as bare metal but 4 vCPU / 8 GB RAM - dies in under 5 minutes.
3) Windows 10 Hyper-V - same build as bare metal but 4 vCPU / 8 GB RAM - lasts about 4 hours (had 3 Jibri VMs running on the same machine & streaming simultaneously).
4) Windows 2019 Hyper-V "headless" - same build as bare metal but 4 vCPU / 8 GB RAM - lasts about 5-10 minutes.
5) FreeNAS/TrueNAS - same build as bare metal but 4 vCPU / 8 GB RAM - lasts about 4 hours.
6) AWS - same build as bare metal but 4 vCPU / 8 GB RAM - lasts about 40 minutes.
All tests repeated / repeatable.
So, other than bare-metal 4th-gen i3, this is quite an issue.
Same.
You certainly cannot follow the principle of firing up another Jibri VM when the others are busy - unless it's a Wake-on-LAN command to fire up another old i3 desktop pulled in for use..... Bare metal is the only 100% stable option.
No, it was a simple test with 1 conference and 1 VM - a new Jitsi instance just for me, and I'm the only participant. The VMs are on hypervisors with all 24-48 cores idle.
Sorry, I was not referring to you directly but metaphorically: "no one" can follow that principle, as it's just too unreliable.
Do you know that the default resolution changed? https://community.jitsi.org/t/jibri-resolution-now-defaults-to-1080p/95478
So if your box can't stream 1080p, try 720p. I just did the test with a box that can't stream 1080p; it can do 720p :)
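For reference, the capture resolution is configurable in /etc/jitsi/jibri/jibri.conf; a sketch assuming the stock HOCON layout (key names per Jibri's reference.conf - verify against your version):
jibri {
  ffmpeg {
    resolution = "1280x720"
    framerate = 30
  }
}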
Yeah, fully aware. It's not about "a box"; it's about VMs in general not being reliable, full stop, on ESXi/AWS/Xen/Hyper-V etc.
My base piece of bare metal for testing is a "lowly" Intel(R) Core(TM) i3-4130 CPU @ 3.40GHz with 4 GB RAM. This can handle Jibri all day long, 24x7 - the longest stream is 48 hours (max 1.4 GB RAM used) - including to an nginx RTMP server running on the same box. Full resolution 1920x1080.
So if you have a dual-CPU Xeon (32 cores, 192 GB RAM), you'd expect at a minimum 3 or 4 Jibris to run quite happily (in theory a lot more). The "recommended" spec appears to be 4 cores / 8 GB RAM per VM on hypervisors. What @trashographer and I are saying is that even with just one Jibri under pretty much zero load, it dies very quickly. Even if you bump to 8 cores & guarantee 50% CPU or more, it still dies fairly quickly.
If bare metal works on such lowly specifications, then why should we "mess around" with lowering the resolution? I have done that and can still get failures within one hour.
In my case I installed Jibri Docker instances and had the same issue. I solved it by installing an older version of the Docker image - one from about 8 months ago.
Wow, thanks, I will try it tomorrow.
@DonxDaniyar Could you tell me what tag to use for Jibri in Docker?
@DonxDaniyar Does it still work for you? Could you share your Docker configuration in a repository, please? It would be helpful. Thanks.
I use this image:
jitsi/jibri:stable-5076
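For anyone following along, pinning the image is just a matter of pulling that tag (in docker-jitsi-meet the tag is normally set in the compose file or .env; the exact variable name depends on your version):
docker pull jitsi/jibri:stable-5076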
The latest version now defaults to 720p, I'd encourage you to try that. Soon enough the Chrome version on that old image will be too old to run.
@McL1V3 @carellano @DonxDaniyar
Can you kindly confirm whether this worked for you? Thanks.
Description
On Jitsi, when I start a recording or streaming session, in less than a minute the recording/stream stops and my whole server becomes slow and unresponsive.
With top I could pinpoint the culprit: ffmpeg. It eats away all the memory very quickly; in less than a minute my 8 GB are filled.
Attached you can find the Jibri log from when I tried a streaming session. Nothing stands out to me. I stopped the streaming after 15 seconds and ffmpeg was already at 40% memory.
Also, if I completely stop prosody, jicofo, jvb and jibri, log in as the jibri user, and start ffmpeg myself using the command I found in log.0.txt, I get the same issue: the CPU shoots to 150% and the memory keeps growing. I have to kill ffmpeg before it saturates the memory.
ffmpeg -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 -f alsa -thread_queue_size 4096 -i plug:bsnoop -acodec aac -strict -2 -ar 44100 -c:v libx264 -preset veryfast -maxrate 2976k -bufsize 5952k -pix_fmt yuv420p -r 30 -crf 25 -g 60 -tune zerolatency -f flv rtmp://a.rtmp.youtube.com/live2/aaa
If I remove every parameter related to sound from this ffmpeg command line (i.e. removing -f alsa -thread_queue_size 4096 -i plug:cloop -acodec aac), then the memory saturation issue goes away and memory usage is stable. So it clearly seems to be related to the sound. How can I debug this kind of issue?
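For clarity, the video-only variant described above would look like this (derived from the command earlier by dropping the ALSA input and audio options; untested as written):
ffmpeg -y -v info -f x11grab -draw_mouse 0 -r 30 -s 1280x720 -thread_queue_size 4096 -i :0.0+0,0 -c:v libx264 -preset veryfast -maxrate 2976k -bufsize 5952k -pix_fmt yuv420p -r 30 -crf 25 -g 60 -tune zerolatency -f flv rtmp://a.rtmp.youtube.com/live2/aaa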
Possible Solution
Steps to reproduce
Environment details
Ubuntu 16, followed the instructions on GitHub.
browser.0.txt log.0.txt ffmpeg.0.txt asoundrc.txt