Open sledgehammer999 opened 2 years ago
Maybe it is the same problem which I see on Windows 10 64-bit. There seems to be a problem with memory-mapped files and Microsoft's own antivirus software. Not using memory-mapped files, or using a third-party antivirus, fixed it for me (basically, not using Microsoft's built-in antivirus). I did test building libtorrent with POSIX file I/O (IIRC) and it fixed the problem, but then there's no sparse file support, so I use a third-party antivirus.
> Working Set of qbittorrent and the system RAM usage go constantly up. I assume this is due to the OS caching
Nope, OS caching does not increase the RAM usage of programs.
Post the missing steps to reproduce.
A huge `lt::settings::cache_size` is for fast seeding.
> Microsoft's own antivirus software.

I use Avast.
> Post the missing steps to reproduce.
> A huge `lt::settings::cache_size` is for fast seeding.

In any case, this option doesn't exist in RC_2_0.
Stopping the torrent will cause libtorrent to close the files (and file maps), which it sounds like will also trigger Windows to flush the dirty pages. This seems to be a recurring problem on Windows. In the past, forcefully closing files periodically has, I think, proven the most reliable solution. It would be really nice if there were something like madvise() that could be invoked periodically instead.
It sounds like a lame limitation of the OS cache manager to not flush an open file, no matter the size/age of its dirty pages.
> In the past, forcefully closing files periodically

According to this, you can also call FlushFileBuffers() to force a flush. It is probably less cumbersome to incorporate into your code than the logic for reopening files.
And if libtorrent is doing memory maps, then maybe FlushViewOfFile() will help. The remarks indicate that this is an async call, too.
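For reference, a minimal sketch of the equivalent sequence on POSIX, assuming the analogy FlushViewOfFile() ~ msync() and FlushFileBuffers() ~ fsync() (the function name `flush_demo` is made up for illustration, it is not a libtorrent API):

```cpp
#include <sys/mman.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Hypothetical sketch: dirty a page of a memory-mapped file, then flush
// the mapping (msync, roughly FlushViewOfFile) and the file itself
// (fsync, roughly FlushFileBuffers). Returns true if every step succeeded.
bool flush_demo(const char* path) {
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0) return false;
    const size_t len = 4096;
    if (ftruncate(fd, (off_t)len) != 0) { close(fd); return false; }

    void* map = mmap(nullptr, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { close(fd); return false; }

    std::memcpy(map, "hello", 5);            // dirty a page in the mapping
    bool ok = msync(map, len, MS_SYNC) == 0; // write the dirty pages to the file
    ok = ok && fsync(fd) == 0;               // flush file data + metadata to disk

    munmap(map, len);
    close(fd);
    unlink(path);
    return ok;
}
```

As with FlushViewOfFile(), msync() with MS_ASYNC only initiates the writeback; MS_SYNC (used here) actually waits for it.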
> - Data corruption. A power loss and poof go the cached data. Especially when it is gigabytes worth

Did you try to emulate it (e.g. by force terminating the qBittorrent process)?
> - Data corruption. A power loss and poof go the cached data. Especially when it is gigabytes worth
>
> Did you try to emulate it (e.g. by force terminating qBittorrent process)?

The data resides in the OS cache (not under the qbt process); you'd need to terminate the OS instead.
> Did you try to emulate it (e.g. by force terminating qBittorrent process)?

qbt was killed immediately. But doing a right-click → Properties on the file didn't show the dialog for a few seconds (at least 20 secs). I also opened Resource Monitor during that time. The top disk activity for writing was that file, so I suppose the OS was flushing it after the kill.
I'll try to simulate a power loss with a forced VM poweroff.
It seems like the most appropriate fix is to schedule a call to FlushViewOfFile() periodically
https://github.com/arvidn/libtorrent/blob/362366481ee567c23e29bf0495ee640d3d66af4a/src/mmap.cpp#L232
Don't know if it's related, but is the opened file supposed to be shared with other processes? Or does libtorrent open multiple handles to the same file? I would expect it to be just 0 or FILE_SHARE_READ at most.
https://docs.microsoft.com/en-us/windows/win32/api/fileapi/nf-fileapi-createfilew#parameters
> It seems like the most appropriate fix is to schedule a call to FlushViewOfFile() periodically

I think it would be nice to perform this action also when completing the download of some parts of the file.
actually, every time a file completes, its file handle is closed. I believe this will flush it to disk. This is primarily to make sure the next time it's accessed the file is opened in read-only mode.
> actually, every time a file completes, its file handle is closed. I believe this will flush it to disk.

Therefore, the problem mostly affects really large files, which require more time to complete and more memory to map.
> every time a file completes, its file handle is closed. I believe this will flush it to disk

Nope:

```cpp
auto fsfile = std::fstream(...);
fsfile.write(&buf, buf.size()).flush(); // flush
fsfile.close();                         // no flush() happens
```
> I'll try to simulate a power loss with a forced VM poweroff.

I tried it. Total data loss.
Another related problem may be that the resume data does not correspond to the actual data on the disk. I.e., some parts may be marked as completed in the resume data, but they may be lost due to an incorrect system shutdown. This looks more serious than the opposite problem, where some part of the data is written to the file but is not marked as completed in the resume data.
But in reality, the problem of incorrect system shutdown cannot be reliably solved, and besides, we should not consider it a regular scenario. I think we should focus on the performance issue. Data should be flushed to disk periodically to prevent extreme I/O when the system needs to use the occupied memory for other needs. But time-based periodicity looks inefficient to me, because different download speeds will give different results. Wouldn't it be better to have a periodicity based on the amount of data downloaded since the last flush, so that the size of unflushed data does not exceed some reasonable amount?
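A rough sketch of that watermark idea (all names hypothetical, not libtorrent API): count bytes written since the last flush, and trigger a flush callback once the unflushed amount crosses a threshold.

```cpp
#include <cstddef>
#include <functional>
#include <utility>

// Hypothetical helper: tracks bytes written since the last flush and
// invokes the supplied callback once a watermark is exceeded.
class flush_watermark {
public:
    flush_watermark(std::size_t watermark, std::function<void()> flush)
        : watermark_(watermark), flush_(std::move(flush)) {}

    // Call after each write; flushes when unflushed bytes reach the watermark.
    void on_write(std::size_t bytes) {
        unflushed_ += bytes;
        if (unflushed_ >= watermark_) {
            flush_();          // e.g. FlushViewOfFile()/msync() on the mapping
            unflushed_ = 0;
        }
    }

    std::size_t unflushed() const { return unflushed_; }

private:
    std::size_t watermark_;
    std::size_t unflushed_ = 0;
    std::function<void()> flush_;
};
```

Unlike a timer, this bounds the amount of dirty data regardless of download speed, which is exactly the property asked for above.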
From the FlushViewOfFile() documentation:

> Flushing a range of a mapped view initiates writing of dirty pages within that range to the disk. [...] The FlushViewOfFile function does not flush the file metadata, and it does not wait to return until the changes are flushed from the underlying hardware disk cache and physically written to disk.

My reading is that this is similar to madvise(): it's just suggesting to the OS that now might be a good time to flush, but the flushing is still done by the kernel, and prioritized against all other disk I/O the kernel is doing.
i.e. there will be no back-pressure from this function if the disk is at capacity. The back-pressure will happen in the page faults allocating new dirty pages on the I/O threads.
> actually, every time a file completes, its file handle is closed. I believe this will flush it to disk.

From my experience, even after the handle is closed the data might still be floating around in the OS cache, waiting to be written. To really ensure the data is on disk, better to flush it explicitly (before closing the handle): https://docs.microsoft.com/en-us/windows/win32/api/memoryapi/nf-memoryapi-flushviewoffile

> To flush all the dirty pages plus the metadata for the file and ensure that they are physically written to disk, call FlushViewOfFile and then call the FlushFileBuffers function.
> Wouldn't it be better to have a periodicity based on the amount of currently downloaded data (since last flushing) so that the size of unflushed data does not exceed some reasonable amount?

It would be even better if the watermark for flushing were user-tunable, and maybe had a default auto value (like the previous disk cache size).
Here's a start of a fix: https://github.com/arvidn/libtorrent/pull/6529
Is this bug not just an instance of the known cache performance problems on older Windows versions?
In any case I strongly object to libtorrent having to work around bad filesystem cache behavior. This is what OSes are for.
Cache management is a terribly complex area, and evolves as storage evolves. libtorrent's codebase is already huge and shouldn't be burdened with workarounds like this.
(IMO, exposing FlushViewOfFile / fsync as a function to applications seems like a nice solution to the problem of inconsistent resume data on power loss, but I think that's a separate issue)
I don't think this is just an issue in old versions of Windows. As far as I know, all versions of Windows have issues balancing disk cache against, for example, executable TEXT segments that may currently be executing, or a cache of zeroed pages to hand to processes that need more RAM.
Basically, Windows seems very eager to prioritize disk cache (at least dirty pages) over other uses. To some extent this used to be an issue on Linux too, where it could underestimate the time it would take to flush a dirty page to its backing store, and not start flushing early enough.
Another issue with Windows and memory-mapped files specifically is that they seem to be a bit of an afterthought. IIUC, the disk cache on Windows is not the same as the page cache, but actually caches at the file level (not the block-device level), which means memory-mapped files are also distinct from pages in the page cache, which I imagine could make them interact poorly when prioritizing page-cache pages against disk-cache pages.
Anyway, libtorrent has to take a pragmatic approach. It doesn't matter whose responsibility a problem is, it needs to be solved either way.
@arvidn What about exposing FlushViewOfFile and friends to the application, instead of libtorrent doing periodic flushing?
I think those functions would also be useful to help consistency of resume data in the power loss case. I'm writing a separate issue about that right now.
FWIW, in researching that topic I found you need FlushViewOfFile followed by FlushFileBuffers to force a write to disk, which indicates that you're right that file mappings are their own memory, separate from the page cache.
There is this call already, does that do what you need? http://libtorrent.org/reference-Torrent_Handle.html#flush-cache
> There is this call already, does that do what you need? http://libtorrent.org/reference-Torrent_Handle.html#flush-cache

It looks like flush_cache and save_resume_data(flush_disk_cache) just close open file handles. IIUC this isn't guaranteed to flush any caches on many OS/filesystem combinations. I think these functions are just leftovers from the explicit disk cache system in 1.2.
I'm considering whether I would want a torrent_handle function that calls fsync()/FlushFileBuffers() on its files. There's much discussion that these don't really flush data, but sqlite relies on them, so they're probably pretty good in practice.
I can see a lot of upsides to fsync()ing torrent files. It would be nice to synchronize torrent data with resume data to make power failures less catastrophic. It would be nice to limit potential loss of downloaded data, especially on private trackers. But I can also imagine it being a performance nightmare even if used judiciously. I think I'll experiment on my own before filing a feature request about this.
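A sketch of what such a per-torrent sync could do under the hood, assuming POSIX (`sync_torrent_files` is a hypothetical name; on Windows the per-file call would be FlushFileBuffers()):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <string>
#include <vector>

// Hypothetical sketch: fsync() every file of a torrent, e.g. before saving
// resume data, so the resume data never claims pieces that aren't on disk.
bool sync_torrent_files(const std::vector<std::string>& paths) {
    for (const auto& p : paths) {
        int fd = open(p.c_str(), O_RDONLY);
        if (fd < 0) return false;
        bool ok = fsync(fd) == 0;  // block until file data + metadata hit disk
        close(fd);
        if (!ok) return false;
    }
    return true;  // only now is it safe to persist resume data
}
```

The blocking nature of fsync() is exactly the performance concern mentioned above: on a large torrent this can stall for a long time, so it would need to run off the main thread and be used sparingly.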
yes, I agree that libtorrent probably doesn't implement the documented behavior currently.
What about the continuous increase in RAM usage while seeding torrents in RC_2_0? What if I don't like the kernel allowing one app to use almost all the RAM?
@an0n666 should be fixed by https://github.com/arvidn/libtorrent/pull/6529
What if I don’t like the kernal allowing one app to use almost all the RAM.
That's what ram is used for. You can try to fine-tune the kernel, but you will do a bad job. It knows best what's up.
What if I don’t like the kernal allowing one app to use almost all the RAM.
Technically, it's not the program using the RAM, it's the kernel using it as cache. There doesn't sound like there's compelling evidence (yet) that windows is bad at quickly evicting these pages as soon as it needs more memory. As long as the disk cache using most of RAM isn't causing any problems, like system performance degradation, I think it's hard to argue against it.
I don't really have any first hand experience with this (I run windows in a VM, so it's always slow for me), so I rely on reports. @an0n666 please share if you experience issues with the disk cache using too much memory.
Also, the main mechanism to address this issue as encountered in libtorrent 1.2.x was to periodically close the least recently used file, triggering eviction of its cache. This mechanism was also just recently re-enabled in libtorrent-2.x on windows.
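The LRU-close mechanism described here could be sketched like this (hypothetical class, not libtorrent's actual file pool; "closing" is modeled as removing the entry, which in the real pool closes the handle and lets the OS flush its dirty pages):

```cpp
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>

// Hypothetical sketch of an LRU file pool: remembers a monotonically
// increasing "last used" tick per open file, and closes the least
// recently used one when asked.
class lru_file_pool {
public:
    // Record that a file was just read from or written to.
    void touch(const std::string& file) { last_use_[file] = ++tick_; }

    // Close the least recently used file; returns its name, or "" if empty.
    std::string close_least_recently_used() {
        if (last_use_.empty()) return "";
        auto lru = last_use_.begin();
        for (auto it = last_use_.begin(); it != last_use_.end(); ++it)
            if (it->second < lru->second) lru = it;
        std::string name = lru->first;
        last_use_.erase(lru);  // in a real pool: close the OS file handle here
        return name;
    }

    std::size_t open_count() const { return last_use_.size(); }

private:
    std::map<std::string, std::uint64_t> last_use_;
    std::uint64_t tick_ = 0;
};
```

Running `close_least_recently_used()` on a timer is essentially what the `close_file_interval` setting discussed later in this thread does.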
You might also want to consider lowering the pagefile/cache priority for the application in Windows to 1, so that the OS tries to free unused parts of the cache more often.
This also seems to be an issue on FreeBSD 13 using ZFS, qBittorrent 4.4 and libtorrent 2.0.5. I also see a huge spike if I trigger a force recheck.
> This also seems to be an issue on FreeBSD 13 using ZFS, qBittorrent 4.4 and libtorrent 2.0.5

I'm under the impression that the BSD page cache is similar to Linux's, in that there isn't a separate cache for files (like there is on Windows, where the memory map needs to be flushed to the disk cache, and then flushed to disk). Is this what you experience?
I would expect this problem to be especially pronounced when downloading to a slow drive, like a USB-attached drive or a spinning disk.
I believe old Linux kernels used to have this problem, before a lot of effort was put into the page cache flushing strategy.
> I also see a huge spike if I trigger a force recheck.

A "spike" in... CPU usage, disk I/O, or RAM usage?
If the file size is, let's say, 4 GB, the complete file (in this case) gets stored in memory (RES using top), and the memory gets released once it's in seeding mode.
If I trigger a forced recheck, I can see RES jump up to 4 GB for a very short while before going down to expected levels.
Tested with: https://cdimage.debian.org/debian-cd/current/amd64/bt-dvd/debian-11.2.0-amd64-DVD-1.iso.torrent
I believe that's intended behavior, as long as the system as a whole stays responsive.
There's no way to limit memory usage without hacking the code? I'm not sure this is a great strategy, as I tried the same file on a 3 GB VM and that resulted in swap usage. I know it's not the end of the world, but I wonder what would happen on a system without swap, and this will most likely also trigger any kind of system monitoring daemon. Force recheck seems to adapt to a lower amount of RAM, but as above, it seems to trigger swap usage.
> I tried the same file on a 3 GB VM and that resulted in swap usage.

A memory-mapped file should be swapped against its source file and not the regular swap file/partition, shouldn't it?

Yeah, it's not obvious why memory would be evicted to the swap file. Surely the memory-mapped files opened by libtorrent would be evicted directly to the destination file. I suppose if the system is running low on RAM, and it identifies libtorrent's files as high priority, it may evict some pages it deems lower priority to swap. It seems a bit odd though, since a dirty page backed by a (non-swap) file seems like a much better page to evict, as there's a chance you won't need to pull it back in again; evicting anonymous memory to swap is virtually guaranteed to require swapping it back in again.
There's madvise(MADV_DONTNEED) to lower the priority of memory pages, but according to the man page, it seems to be a bit too aggressive (especially the second sentence):

> Allows the VM system to decrease the in-memory priority of pages in the specified range. Additionally future references to this address range will incur a page fault.
A bit off-topic: @arvidn, I know that you have a lot on your plate now, but please consider "finalizing" the Windows portion soon. Aka, torrents with many files downloading in parallel won't regularly flush to disk yet.
> A bit off-topic: @arvidn, I know that you have a lot on your plate now, but please consider "finalizing" the Windows portion soon. Aka, torrents with many files downloading in parallel won't regularly flush to disk yet.

I was under the impression that this had been fixed already, in 843074e87461e5a9a46851e123ebe4950ea90435 and 450ded9fac1dd6babb0786037c23bceccab78487
@diizzyy you could also try setting close_file_interval to something non-zero, to force close + flush files periodically.
> I was under the impression that this had been fixed already, in 843074e and 450ded9

The first one just flushes the dirtiest file, which might leave other active files in memory for longer. The second one is interesting; we might try setting this in qbt to something smaller. Apparently it still causes problems for users. See https://github.com/qbittorrent/qBittorrent/issues/15961 and https://qbforums.shiki.hu/viewtopic.php?t=9770
Also, another issue with the Windows cache: if you start rechecking a big torrent, it will fill up all the RAM and the system becomes laggy. Windows starts auto-flushing the files when RAM usage is at its limit, but you still experience system lag.
This solution is not good! The error still exists! See qBittorrent 4.4.0.
> This solution is not good! The error still exists! See qBittorrent 4.4.0.

Does the "error" refer to the cache not being flushed, or to the pauses caused by flushing whole files at a time?
My impression is that the first problem has been traded for the second.
@arvidn Setting close_file_interval does indeed free memory; however, the main issue will still remain?
@diizzyy The title of this ticket is "Write cache doesn't flush to disk"; if that's not the main issue, what is?
If you have a fast enough connection you'll still see memory exhaustion no?
Please provide the following information
- libtorrent version (or branch): RC_2_0 f4d4528b89bdbcef80efa8f5b99cc0e0f92226cb
- platform/architecture: Windows 7 x64 SP1
- compiler and compiler version: msvc2017
please describe what symptom you see, what you would expect to see instead and how to reproduce it.
To better observe this problem you need a torrent with one big file (e.g. 10-16 GB) and a fairly fast connection (e.g. 100 Mbps). My system has 16 GB RAM. It doesn't matter if I enable OS cache or not; the downloaded data seems to reside in RAM for far too long. While the file downloads I observe that both the Working Set of qbittorrent and the system RAM usage go constantly up. I assume this is due to the OS caching. However, it doesn't seem to flush to disk at regular intervals. Minutes have passed, GB of data have been downloaded, but the flushing hasn't happened. Let's assume I have a file manager window open (explorer.exe) and I navigate to the file. No matter how many times I open the file properties, its size on disk doesn't change. There are two ways I have coerced it to flush to disk: Open Containing Folder, or double-clicking the file to launch the associated media player. These actions basically call a shell API to do the work, but somehow also make Windows finally flush to disk.
From the little documentation online about the Windows file cache, it seems that every second it should commit 1/8 of the cached data to disk. But this doesn't happen with RC_2_0.
This can have serious effects on end users:
Furthermore, I also tested against the latest RC_1_2. This doesn't happen there, and there it also doesn't matter whether I enable OS cache or not. I know that the file I/O subsystem changed fundamentally between RC_1_2 and RC_2_0, but I mention it in case it matters. Also, I have set cache_expiry to 60 seconds and the cache size to 65 MiB. AFAIK these options don't exist in RC_2_0.
PS: To demonstrate the importance of the problem: I observed this while I had something downloading in the background and I was doing "office work" (browsing, PDF opening, Word writing, etc.), which is simple in terms of disk and RAM demand. Yet suddenly the system was freezing up randomly. I opened Task Manager and my 16 GB of RAM had almost filled up. I saw that disk activity was up. It took at least 20 minutes for things to become usable again.