NebulousLabs / Sia-UI

A Graphical Frontend for Sia - https://sia.tech
MIT License

Sia Loads Entire Consensus Into Active Memory #699

Open JamesM424 opened 7 years ago

JamesM424 commented 7 years ago

Hello Sia team,

In discussion with someone else today, I'm filing this as a bug for visibility.

On all three of my Windows machines, the entire consensus.db is loaded into active memory. I'm not in the office today, so I can't grab screenshots from the 2012 R2 renting server, but I'll add screenshots from both Windows 7 x64 clients.

The other user reported that Sia had a footprint of <300 MB.

When I close Sia-UI, the memory is properly released.

For the purpose of reporting, I am using RamMap, an application released by Microsoft's Sysinternals for visualizing memory usage. Spot-checking the physical addresses used by consensus.db, they are NOT consecutive physical addresses, which shows that the consensus was loaded into memory gradually. If it were loaded all at once, we would see an instant spike in memory utilization on launch, which I also do not see.

On system 1, Sia (and its processes) were using a combined total of 4.5 GB of active memory. Immediately before closing Sia-UI, system memory utilization was 8.86 GB; after closing Sia-UI, it was 3.27 GB.

On system 2, Sia (and its processes) were using a combined total of 4.3 GB of active memory. Immediately before closing Sia-UI, system memory utilization was 11.8 GB; after closing Sia-UI, it was 5.96 GB.

I found it quicker to reproduce this when activity that scans the entire blockchain is initiated, such as adding an aux wallet, using recover to find missing transactions, or upgrading from an older version. You can watch the entire blockchain get loaded from standby to active memory and never freed.

[screenshots: server Task Manager, server RamMap, sys1 Task Manager, sys1 RamMap]

lukechampine commented 7 years ago

The consensus db is bolt, which mmaps the entire database. The pages should eventually be released back to the OS. You could test this by launching another memory-hungry program. The expected behavior is that the pages used to mmap consensus.db will be released to the other program.

Still, it is unfortunate that Sia appears to be so memory-hungry, even if the actual behavior is acceptable. I'm not sure what we can really do about this other than switching from bolt to a new database.

JamesM424 commented 7 years ago

When I opened bug #1918, to stress the memory I used a program I made that continuously writes to all memory. This would have caused a big flush to disk. I don't recall Sia correcting itself, but I will give it a try tomorrow.

lukechampine commented 7 years ago

fwiw, I just ran a test locally with a small program that continuously allocates memory. I observed the expected behavior: siad's resident memory shrank as the other program's memory grew. In total it shrank from about 3 GB to 200 MB. siad also consumes about that much when consensus.db isn't scanned, e.g. when you do a fresh restart of siad after fully syncing the blockchain.

Scanning consensus.db seems to reliably reproduce this, as does the initial blockchain download. Perhaps there are ways we can perform these actions without causing boltdb to grab so many pages.

JamesM424 commented 7 years ago

I didn't see this correct itself last month with my app; apparently it was 32-bit, so after 4 GB it just went into a runaway. I'm apparently older than I thought. Rebuilding it as 64-bit resolved that.

I can confirm a majority of the loaded memory is priority <3 and as such will get dumped if I take more than is currently available, but some is loaded at priority 5 and won't be reclaimed. Also, overnight I noticed there's an amount of creep - about 900 MB (which was not SuperFetch).

If a desktop user only has 4 GB of memory, this could lead to other processes being flushed before Sia's pages get freed.

This may just be a minor annoyance for desktop users, but for ESXi use cases it's a bit more. We have to allocate as much memory on the host as is expected to be used on the VM. This generally forces us to over-allocate chunks of memory in anticipation, to keep the memory allocation range consistent. The hypervisor has the ability to give more memory to a VM (outside of the pre-allocated range), but since the guest's free memory isn't visible to it, it can't take that memory back. The only option would be to end the guest instance.

lukechampine commented 7 years ago

If memory is restricted, the mmap should just be forced to keep fewer pages in RAM. I'd be interested to see if there's much performance impact when restricting Sia to (e.g.) 1 GB of memory. If restricting to 1 GB causes Sia to crash, that's definitely something that we want to look into.

JamesM424 commented 7 years ago

Valid point. I took our old emergency vSphere box home some months ago to do some testing. That is something I can easily set up with hard-limit memory management. Give me a few days to mount it in the rack at home and I'll try some things out.