Closed: whorfin closed this issue 2 months ago
Have you read the manual regarding the demuxer cache? How big is the "leak"? It's likely to be just that.
Or also the libass glyph cache.
Saying "memory usage go up" is not how to diagnose memory leaks, use actual tooling like valgrind.
I have read the manual. Running with --list-options indicates the default for demuxer-max-bytes to be 150MB; in this particular case it uses closer to 156KB (note --cache-secs=4).
The leak seems to be unbounded, with mpv increasing memory usage by tens of KB per second; over the course of hours I can see total memory usage of mpv rise from 1.3% to 5% of total memory on a 6G machine. When left running for weeks, I have seen it be killed, which is why I started digging into this.
I believe the "Res Data" of nmon is equivalent to M_DRS in htop, just reported with more digits when the columns are wide enough, so one can easily spot leaks.
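If it helps, the same growth can also be logged without nmon or htop. Here is a minimal sketch, assuming a single running mpv process (the interval and log file name are arbitrary):
while sleep 60; do
    # timestamp plus resident and virtual size (KiB) of the running mpv
    date +%T
    ps -o rss=,vsz= -p "$(pidof mpv)"
done | tee mpv_rss.log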
If I run mpv with normal terminal output enabled but with the cache settings noted, I see what I expect:
Cache: 4.0s/155KB
(with a slight twitch to 156KB and back)
And Resident Data is consistent with that upon first launch, until eventually it begins growing. While the used memory continues to grow, the reported cache usage stays at 155KB.
Running with --demuxer-max-bytes=160KiB does not change this: Res Data goes well beyond that and continues to climb. So in that regard at least, mpv doesn't seem aware of the leak, nor is it (apparently) the result of a secretly filling cache, unless limits are being ignored and reporting is wrong.
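For clarity, the tightened-cache invocation is essentially the following (the stream URL here is a placeholder, not the actual one I play):
# pin the demuxer cache low so any further RSS growth clearly is not cache fill
mpv --no-config --cache-secs=4 --demuxer-max-bytes=160KiB https://example.com/stream.pls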
Here you can see mpv just after it has been launched: https://www.dropbox.com/scl/fi/mhb1t66tqvbbdb5g71xi1/mpv_juststarted.mov?rlkey=355vlra06s477qgefheik4q9s&st=bfu126bb&dl=0
And here you can observe it a few minutes later, with memory climbing by tens of KB per second: https://www.dropbox.com/scl/fi/e14wjrf8c28mn205f0u5f/mpv_leaking.mov?rlkey=d5ppy652qhaf1nh5n3heprbfv&st=g7ns3cw4&dl=0
To reiterate: I'm not a newb surprised that memory usage "go up" a bit when mpv is started. I am reporting what appears to be a slow and continuous memory leak while playing a particular internet radio stream. There is full repro information.
...and in case it were not also clear: I 🖤 mpv. It is a wonderful media player, and IPC control is a delight which I heavily utilize and greatly appreciate. I want to be very respectful of the maintainers' time.
With my latest set of cmdline arguments, which turn off as much configuration as possible (I still need mpv-mpris), I just observed mpv drop M_DRS after continuing to grow. It has dropped back to a reasonable number, less than 256MiB. The ever-growing behavior has stopped.
I will keep an eye on things, but will happily close this for now.
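For reference, the kind of stripped-down launch I mean looks roughly like this; the mpv-mpris plugin path is an assumption on my part (mpv can load the C plugin explicitly via --script), and the URL is a placeholder:
# assumed plugin path and placeholder URL; everything else disabled via --no-config
mpv --no-config --script=/usr/lib/mpv-mpris/mpris.so https://example.com/stream.pls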
I believe the "Res Data" of nmon is equivalent to M_DRS in htop, just reported with more digits when the columns are wide enough, so one can easily spot leaks.
No you can't. A leak is memory that is allocated but then insufficiently tracked, leading to it never being freed. You are just looking at memory usage going up, which may either be a leak or just normal program behavior. To distinguish the two and file actually useful bug reports, you need to use a profiler like valgrind or even just a heap analyzer like heaptrack.
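For example, a run along these lines produces an actual list of unfreed allocations when mpv exits (the log file name and stream URL are just placeholders):
valgrind --leak-check=full --log-file=valgrind-mpv.log mpv --no-config https://example.com/stream.pls
# after quitting mpv, valgrind-mpv.log shows which allocations were never freed and their backtraces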
mpv slowly leaking memory when listening to network stream
I've run it for a few hours and I don't see anything that could be considered a leak.
./mpv --no-config https://api.somafm.com/u80s130.pls
As we can see, memory usage goes up as the cache is filled; after that we no longer increase memory usage.
If we zoom in on the "flat part", we can see that there is a slight upward trend; look at the scale, though.
If we filter out short-lived allocations, like the constant packet reallocs (unfortunately #12556 is still not merged), we find 4 potential groups of allocations. They are all connected to msg.c
and are in our message log ring buffer, which is limited in capacity. Since there are not many log lines added, only song name changes, we get 58 * 4 allocations. We are still not at the log buffer limit. Everything works as expected.
Example of one allocation backtrace:
#00 [mpv] _start
#01 [libc.so.6] __libc_start_main
#02 [libc.so.6] 758d218d3e07
#03 [mpv] mpv_main [main.c:443]
#04 [mpv] mp_play_files [loadfile.c:2029]
#05 [mpv] play_current_file [loadfile.c:1847]
#06 [mpv] run_playloop [playloop.c:1224]
#07 [mpv] update_demuxer_properties [loadfile.c:390]
#08 [mpv] mp_msg [msg.c:1104]
#09 [mpv] mp_msg_va [msg.c:546]
#10 [mpv] mp_msg_va.part.0 [msg.c:576]
#11 [mpv] write_term_msg [msg.c:521]
#12 [mpv] write_msg_to_buffers [msg.c:469]
#13 [mpv] bstrdup0 [bstr.h:41]
#14 [mpv] ta_xstrndup [ta_utils.c:292]
#15 [mpv] ta_strndup [ta_utils.c:127]
#16 [mpv] strndup_append_at [ta_utils.c:93]
#17 [mpv] ta_alloc_size [ta.c:139]
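(For anyone wanting to repeat this kind of analysis: the following heaptrack invocation is only a sketch, not necessarily the exact commands used here; the recorded file name depends on the heaptrack version and the PID.)
heaptrack ./mpv --no-config https://api.somafm.com/u80s130.pls
heaptrack_gui heaptrack.mpv.*.zst   # or heaptrack_print for a terminal summary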
That being said, I don't see anything leaking on my machine using the latest mpv. Of course, depending on your system configuration you may see a leak in other parts of the code; I used the PipeWire audio output. You may be seeing a leak in an external library, too. For that we would need more information on when it happens and how to reproduce it, and you seem to no longer be able to reproduce the issue currently.
(similar behavior observed with the current-on-Ubuntu package, 0.34.1)
IMO it's highly likely you're experiencing a leak in an external library, if you have the same "leak" on an ancient version as well as master. You could build mpv with LeakSanitizer to see if it reports anything on exit.
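A rough sketch of such a build, assuming a source checkout and using meson's standard b_sanitize option (LeakSanitizer comes with ASan on Linux; the URL is a placeholder):
meson setup build -Db_sanitize=address
meson compile -C build
./build/mpv --no-config https://example.com/stream.pls
# a leak report, if any, is printed when the process exits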
mpv Information
Other Information
Reproduction Steps
The below is exactly how I launch mpv in my application (except I use IPC server mode, but the bug shows nonetheless). I've added --no-config to confirm the problem still manifests. mpv is either launched directly or via flatpak run io.mpv.Mpv for the flatpak version.
Then in another terminal one can use ps aux, while I prefer nmon and hitting t on an otherwise quiescent system, and then keep an eye on Res Data (RSS likewise in plain ps).
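A plain-ps sketch of the same monitoring (column selection syntax may vary slightly between ps versions):
watch -n 5 'ps -C mpv -o pid=,rss=,vsz=,cmd='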
Expected Behavior
mpv runs, plays my audio stream, and does not continue to slowly consume more memory
Actual Behavior
Keeping on eye on RSS/Resident Data, we see it slowly climbing. This does not seem to happen until the first "track change" in the stream; as an IceCast/ShoutCast stream, the stream metadata is updated at track changes with
icy-title:
changing per track. Once the leak starts, though,Res Data
just slowly keeps creeping up from that point forwards....looking at the log file, it appears things really starting leaking after the "resize index to 512" message
Log File
output.txt
Sample Files
No response
I carefully read all instructions and confirm that I did the following:
produced the log file with --log-file=output.txt.