Open · botiapa opened this issue 2 years ago
Sorry, I don't have access to my PC for another week or so. But the first thing I would try is a different version of wlroots. So if you have 0.14.1, try the git version.
If that doesn't solve it, then I think there might be something wrong with the Wayland protocols I provided. I can't say for sure right now. I'll let you know if I need more info later.
I can confirm this issue, as it happens every time I use mpvpaper in the background for a while. Steps:
1. Start the PC; I use mpvpaper to play a .mp4 file in the background
2. After the PC starts, the video is put into the paused state (checked with echo '{ "command": ["get_property", "pause"] }' | socat - /tmp/mpv-socket)
3. Use the PC for a while
4. Observe input lag at some points and constantly high RAM usage
5. Checking ps/top for high memory usage does not show mpvpaper (sometimes it does)
6. Multiple forced kills show up in dmesg as the PC crashes, and mpvpaper is the last dmesg entry (log line and a journalctl note below):
[25542.067107] Out of memory: Killed process 1417 (mpvpaper) total-vm:1919808kB, anon-rss:19220kB, file-rss:0kB, shmem-rss:9108kB, UID:1000 pgtables:984kB oom_score_adj:0
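For reference, after a reboot the same OOM entries can also be pulled from the previous boot's kernel log (this assumes persistent journald logging), for example:
journalctl -k -b -1 | grep -i 'killed process'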
System:
os: archlinux up-to-date
wm: sway version 1.7-c1725c8 (Apr 9 2022, branch 'community/packages/sway') (from arch official repo)
wlroots: 0.15.1-3 (from arch official repo)
Your issue is a separate issue. I could not find the memory leak you mentioned using valgrind. Besides, "total-vm:1919808kB" (1.9 GB) and "anon-rss:19220kB" (19 MB) are not out-of-the-ordinary memory usage for mpv.
I'm not saying mpvpaper is not at fault, but I have some questions:
Thanks for the report.
I'm not at my PC right now, but here's some more info:
Programs start getting killed gradually, and the last one in dmesg is mpvpaper. I'm not assuming the issue is in mpvpaper, but it seems likely since it's the last one killed.
The .mp4 file is a 4K video, around 4 GB.
How can I debug this issue?
16 GB of RAM, I guess that makes 2 of us, and it should be more than plenty. "always top 3 high ram usage": as I said before, mpv / mpvpaper, especially with a 4K video, will use quite a bit of RAM, so it's not too surprising it's in your top 3. I'm also sure you are using software (CPU) instead of hardware (GPU) rendering, as that will also use more RAM.
As for debugging, if you have the time, I'm curious whether plain mpv has this issue. It might be an upstream issue. If you don't have the time, at least tell me approximately how much RAM mpvpaper uses at the start and after "a while" (a rough logging sketch follows below).
The only time I see a major increase in RAM personally is when initially playing a video (for around a minute as the buffers fill) or when the next video plays. Although it can also go back down with a new video playing afterward, depending on the video.
Lastly, what are your terminal options? Are you utilizing either a "pauselist" or "stoplist" text file located in ~/.config/mpvpaper/?
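If it helps, a rough loop like this (just an untested sketch; the interval and log path are arbitrary) would record those numbers over time:
# sample mpvpaper's virtual size and resident set every minute while it runs
while pgrep -x mpvpaper > /dev/null; do
    { date '+%T'; ps -o pid=,vsz=,rss= -C mpvpaper; } >> /tmp/mpvpaper_mem.log
    sleep 60
done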
RAM is at 93%, and below is the info I checked.
This is the output of free -h:
total used free shared buff/cache available
Mem: 15Gi 3.1Gi 585Mi 10Gi 11Gi 1.1Gi
Swap: 31Gi 4.6Gi 27Gi
htop shows mpvpaper with VIRT at 1854M and RES at 67892 (67M).
Note that I start mpvpaper through sway exec, like: swaymsg exec 'mpvpaper -v -o <..>'
I installed the latest mpv from the arch official repo:
mpv 0.34.1-dirty Copyright © 2000-2021 mpv/MPlayer/mplayer2 projects
built on UNKNOWN
FFmpeg library versions:
libavutil 57.17.100
libavcodec 59.18.100
libavformat 59.16.100
libswscale 6.4.100
libavfilter 8.24.100
libswresample 4.3.100
FFmpeg version: n5.0
In mpv.conf, I've set these options in the hope that mpv will use the GPU when playing video and will be aware of the Wayland context:
hwdec=vaapi
vo=gpu
gpu-context=wayland
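As a side check (vainfo comes from libva-utils, not from mpv, so this assumes that package is installed), the VA-API driver and the profiles it exposes can be listed with:
vainfo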
As I stated, I pause mpvpaper at start as below (when the mpv-socket is available):
echo 'cycle pause' | socat - /tmp/mpv-socket
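Side note: the same socket also accepts an explicit set via standard mpv JSON IPC, which would avoid a toggle accidentally unpausing it:
echo '{ "command": ["set_property", "pause", true] }' | socat - /tmp/mpv-socket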
I often have RAM at >90% with firefox-beta, tmux on alacritty, and some other alacritty terminal windows. But as you can see from the free output above, most of that is cached data -- probably firefox just uses it to speed up pages.
I can't reliably reproduce this issue, but under high workload it happens sometimes.
Wow, your mpvpaper uses less RAM than me! I guess I was wrong that you were using software rendering.
If you want you can "free up" your memory from cache with this:
sync && sudo sh -c "echo 3 > /proc/sys/vm/drop_caches"
Although it's kind of unnecessary as Linux will free up cache as it needs.
"in high workload it happens sometimes", then maybe it's the program/s producing the high RAM usage?
For example, if I compile, let's say, firefox and use /tmp as my build directory, my RAM will quickly be eaten up to the point of crashing the system.
If you encounter this problem again and see mpvpaper using a lot of RAM, run something like:
pmap -X 871 > mpvpaper_pmap.txt
where 871 is the output of pidof mpvpaper.
Then upload the text file; that would at least narrow down a few possibilities easily.
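If catching the right moment by hand is awkward, a crude watcher along these lines could take the snapshot automatically (sketch only; the 30-second poll and the 1 GiB threshold are arbitrary, and it assumes a single mpvpaper process):
# dump a pmap snapshot whenever MemAvailable drops below ~1 GiB
while sleep 30; do
    avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
    if [ "$avail_kb" -lt 1048576 ]; then
        pmap -X $(pidof mpvpaper) > ~/mpvpaper_pmap_$(date +%s).txt
    fi
done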
Thanks, I'll try to get that log when I see symptoms that the system is about to crash.
I've just got a crash; this time sway crashed as well:
[ 3565.798390] Out of memory: Killed process 95199 (dbus-daemon) total-vm:8544kB, anon-rss:444kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:56kB oom_score_adj:200
[ 3565.838660] Out of memory: Killed process 779 ((sd-pam)) total-vm:104780kB, anon-rss:52kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:92kB oom_score_adj:100
[ 3568.695566] Out of memory: Killed process 778 (systemd) total-vm:18080kB, anon-rss:1288kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:72kB oom_score_adj:100
[ 3601.516532] Out of memory: Killed process 95445 (rclone) total-vm:60816kB, anon-rss:4280kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:88kB oom_score_adj:200
[ 3602.473405] Out of memory: Killed process 95441 (hstdb) total-vm:13604kB, anon-rss:5464kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:64kB oom_score_adj:200
[ 3607.472980] Out of memory: Killed process 95444 (rclone) total-vm:761828kB, anon-rss:5980kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:144kB oom_score_adj:200
[ 3614.480054] Out of memory: Killed process 95471 ((lipmenud)) total-vm:17792kB, anon-rss:1900kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:76kB oom_score_adj:200
[ 3620.882254] Out of memory: Killed process 95422 ((sd-pam)) total-vm:171628kB, anon-rss:2564kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:104kB oom_score_adj:100
[ 3635.799255] Out of memory: Killed process 95421 (systemd) total-vm:17792kB, anon-rss:1204kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:76kB oom_score_adj:100
[ 3689.324248] Out of memory: Killed process 73395 (firefox-bin) total-vm:3449748kB, anon-rss:89792kB, file-rss:0kB, shmem-rss:5836kB, UID:1000 pgtables:2592kB oom_score_adj:0
Interestingly, that crash happened while rclone was doing a backup. And firefox-beta is the last one killed; there is no mpvpaper in the dmesg.
Before the crash, input becomes difficult, with high CPU usage and high CPU temperature.
I have the mpvpaper pmap here, taken about a minute before the crash: https://0x0.st/oAhf.txt
Yeah... I believe you got the wrong guy here.
I looked at the pmap and nothing looked out of place.
As I said before, you're using less RAM than even me.
If you use htop, it should be fairly obvious what is using all of your RAM.
Or it could be the fact that you're using /tmp with rclone (you know that's your RAM, right?).
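A quick way to see how full that tmpfs (and therefore your RAM) actually is, assuming /tmp is the default Arch tmpfs mount:
df -h /tmp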
If not, I wish you luck in finding the source of your memory leak.
No, it's something more.
When RAM is at >90% and I kill mpvpaper, it drops immediately to less than 10%. This is something.
The memory reported by the tools doesn't seem correct.
It could be that sway buffers something that fills up RAM.
I tried playing a raw 1K video that needs even more CPU/RAM to decode its video/audio streams, playing some YouTube videos in multiple firefox-beta tabs, and running some ssh sessions as usual. RAM barely reached over 22% and was not willing to scale up beyond the usual.
I think this could be due to something in mpvpaper itself, or to how sway handles drawing the hidden mpvpaper window and buffers something in RAM.
I think I probably blamed mpvpaper wrongly, due to this issue logged here.
That's ok, tracking down memory leaks can be hard. I'm glad you found yours.
mpvpaper works for about 30 seconds before crashing with the following error:
Let me know if you need more information regarding the error.