alsakhaev closed this issue 1 year ago
Thank you for reporting the bug! We will have a look into it shortly
Tangential suggestion: It should be pretty easy to calculate an approximate estimate of the maximum possible disk usage from db-capacity
and compare that to the actual disk space available. If these are grossly out-of-whack then bee should log a suggestion of a new value for db-capacity
that will not exceed the available disk space.
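A rough back-of-the-envelope version of that check can already be done by hand. The sketch below assumes ~4 KiB per chunk (the swarm chunk size) and ignores index overhead, so treat the result as a lower bound; the data directory path is only an example:
DB_CAPACITY=$(grep db-capacity /etc/bee/bee.yaml | awk '{print $2}')
echo "rough minimum localstore size: $((DB_CAPACITY * 4096 / 1024 / 1024 / 1024)) GiB"
df -h /var/lib/bee   # compare against the free space on your actual data directory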
I can reproduce this problem. It looks like disk space accounting does not include uploaded files. When I restart the node, a ton of disk space is immediately freed up as db-capacity is re-applied.
Hm, bee isn't releasing all of the disk space even after a restart,
root@salvia /o/bee# grep db-cap /etc/bee/bee.yaml
db-capacity: 5000000
root@salvia /o/bee# ls
keys/ localstore/ password statestore/
root@salvia /o/bee# du -h -s .
111G .
That's 111G of usage with db-capacity set to 5mil.
Please try to give as much information as to what you have done prior to this problem surfacing. I'm trying to reproduce this but so far no luck.
Can you say if at any point your db capacity was set above 5mil?
Yes, I tried 10mil. Once I realized that disk space management wasn't working, I reduced it back to 5mil.
Did you play around with the size?
On one node, I probably uploaded faster than it could sync. For example, maybe I uploaded 30G of data to the node very quickly and then waited for it to sync.
I'm trying to reproduce this but so far no luck.
If you can provide some guidance about how to not trigger the issue, that would also help. I gather that I shouldn't mess with the db-capacity setting. Also, I should not upload too fast?
I was trying to find where the limits were, to help with testing, but I am content to play within expected user behavior too.
I'm curious to hear from @alsakhaev too
@Eknir @acud
Messages from bee-support:
mfw78: I've found on 3x containers that I've run, all of them do not respect the db-capacity limit.
sig: are you uploading any data to them?
mfw78: No
+1: started a node on raspi with 32gb sd card, ran out of disk space after 10hrs
+1: have set up docker-based nodes and all of their localstores have easily surpassed the db-capacity limit and use between 30Gb and 40Gb now
+1: Running multiple bees in Kubernetes containers. Each bee exhausts its disk space allocation (doubling the db capacity has no effect besides chewing up more space and then exceeding the new limit too).
Thanks all for the comments and reports. We are releasing soon and have included several improvements that aim to address this issue. We would greatly appreciate it if you could try it out and report back here.
I can confirm that running 0.5.3 the db-capacity seems to be better respected; the 6 nodes that I'm running show the following disk usage: 28G / 21G / 28G / 28G / 29G / 27G
This issue can be reliably reproduced on a Raspberry Pi.
I am running 0.5.3 using the default db-capacity. I can see that bee is garbage collecting while it keeps consuming more space. Once garbage collection falls behind and disk usage reaches 100%, nothing works anymore. The log keeps reporting "No Space Left" and garbage collection also stops working.
@zelig @acud you guys are working on this as part of the postage stamps? Shall I assign this issue to the current sprint?
The bug has a severe impact on the entire network because people are just purging the localstore of their nodes, causing data loss. There is no way to release bee without killing this bug.
This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 5 days.
Any news on this? The issue is still there. I'm using the default disk space configuration (BEE_CACHE_CAPACITY=1000000), which should be ~4GB, but my disk usage graph shows otherwise.
I didn't perform any uploads on the node. It's a VERY important issue to fix.
It should be resolved with the latest release. However, the problem is multi-tiered, so shipping a database migration to fix a problem that is already exacerbated on some nodes was not trivial. If you db nuke your node and allow it to resync, the problem should be resolved.
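For anyone following along, the nuke step looks roughly like this (stop the node first; the service name and data-dir path below are assumptions based on a typical package install, so adjust them to your setup):
systemctl stop bee
bee db nuke --data-dir /var/lib/bee
systemctl start bee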
Any plans to publish guidance on this? In particular, how to detect if the issue exists within a node so that we don't just start nuking everything and dropping retrievability on chunks already stored in the swarm.
I've db nuked two of my nodes; let's see how it evolves.
@ldeffenb @tmm360 do you still experience this issue?
Disk usage seems stable and not growing.
Yesterday I installed bee 1.5.0-dda5606e. The sharky migration finished, but disk consumption doubled. How can I delete the old database and expire the blocks that shouldn't be stored?
I deleted the old database using bee nuke. Two weeks ago, disk usage was back to zero. As I write, disk usage is back up to 30GiB.
Good amount of traffic today. Disk usage is up to 34.8GiB.
I upgraded to 1.5.1 today. Disk usage is up to 47.9GiB. At this rate, I'll have to nuke my db again in a few weeks.
If you do the following command, substituting the proper IP and debug port, what value is displayed? It should be 2 or 3 on testnet and 8 or 9 on mainnet.
curl http://127.0.0.1:1635/topology | jq .depth
I'm on mainnet. Currently it says 6
Are you sure you have inbound connections open and forwarded to your p2p-addr (default 1634)? With a depth of only 6, it seems that you may not be receiving inbound connections. A shallower depth may cause your node to believe it needs to store more chunks as the neighborhood is larger.
Try the following command. You should see at least one, hopefully more, inbound connections. You may also need to set nat-addr if you are behind a NAT without UPNP capability.
curl http://127.0.0.1:1635/topology | jq . | grep "inbound"
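If you do need it, nat-addr is a plain host:port entry in /etc/bee/bee.yaml; the address below is a placeholder, not a value to copy:
nat-addr: "203.0.113.5:1634"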
Also, the following command should show "Public":
curl http://127.0.0.1:1635/topology | jq .reachability
curl http://127.0.0.1:1635/topology | jq .reachability
"Public"
curl http://127.0.0.1:1635/topology | jq . | grep "inbound" | wc -l
9
Are you sure you have inbound connections open and forwarded to your p2p-addr (default 1634)?
Yeah, pretty sure. I have uPNP enabled on my OpenWRT router.
Ok, I'm out of ideas now. Sorry, but the node is going to store what it thinks it needs to store. Nuking your DB periodically will just chew up your bandwidth and strain your neighborhood to push it all back to you, not to mention risking actually dropping the only copy of some chunks that your node was supposed to store. Unless there's still a lurking issue with that somewhere that hasn't been uncovered and isn't visible with the metrics we currently have available.
One final thought after going back through and re-reading everything. If you are still uploading through this node, are you asking it to pin the content on upload? Pinned content is not garbage-collected and is also not counted against the db-capacity configuration. But it is all dropped on a db nuke, as far as I can tell.
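A quick way to check for pinned content is the pins listing on the API port (default local port shown; the exact response shape may vary between bee versions):
curl -s http://127.0.0.1:1633/pins | jq .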
Nuking your DB periodically will just chew up your bandwidth and strain your neighborhood
OK, I'm happy to wait until my disk fills up. Maybe devs will figure out a solution by then.
If you are still uploading through this node
I am not. I haven't tried to upload for months.
@jpritikin I am adding a new db indices command to the binary so we can have a bit more info about the problem. Could you please build https://github.com/ethersphere/bee/pull/2924, run bee db indices --data-dir <path-to-data-dir>, and paste the output here?
Also, @jpritikin please don't use that built version to run bee normally; use the current stable version to run bee with bee start (there are still some things that we're ironing out before the next release).
Here is the output:
root@glow:~# ./bee db indices --data-dir /opt/bee
INFO[2022-05-02T17:19:40-04:00] getting db indices with data-dir at /opt/bee
INFO[2022-05-02T17:19:40-04:00] database capacity: 1000000 chunks (approximately 20.3GB)
INFO[2022-05-02T17:20:29-04:00] localstore index: gcSize, value: 950674
INFO[2022-05-02T17:20:29-04:00] localstore index: retrievalAccessIndex, value: 1896105
INFO[2022-05-02T17:20:29-04:00] localstore index: postageChunksIndex, value: 1896105
INFO[2022-05-02T17:20:29-04:00] localstore index: retrievalDataIndex, value: 1896105
INFO[2022-05-02T17:20:29-04:00] localstore index: gcIndex, value: 948228
INFO[2022-05-02T17:20:29-04:00] localstore index: postageRadiusIndex, value: 703
INFO[2022-05-02T17:20:29-04:00] localstore index: reserveSize, value: 957416
INFO[2022-05-02T17:20:29-04:00] localstore index: pullIndex, value: 1895775
INFO[2022-05-02T17:20:29-04:00] localstore index: pinIndex, value: 947877
INFO[2022-05-02T17:20:29-04:00] localstore index: postageIndexIndex, value: 5173925
INFO[2022-05-02T17:20:29-04:00] localstore index: pushIndex, value: 0
INFO[2022-05-02T17:20:29-04:00] done. took 48.935545587s
w00t. And this takes how many gigs? Can we have a du -d 1 -h of the localstore directory?
Here you go,
root@glow:/opt/bee/localstore# du -d 1 -h
42G ./sharky
43G .
Can you also provide the output of your /topology endpoint on the debug API?
Thanks. Would you be able to post the free_* files from the localstore/sharky directory?
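For example, something along these lines, using the path from the earlier du output:
ls -lh /opt/bee/localstore/sharky/free_*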
Thanks @jpritikin, this was very helpful. I have some possible direction on the problem. Since you can reproduce the problem, could you try the following please? git checkout v1.5.1, or the last stable release that you're running (if git complains and you can't see the tag, do a git fetch --tags). Many thanks in advance!
Disk usage is up to 21.5GiB.
@jpritikin so if I understand correctly everything is OK running with this fix?
so if I understand correctly everything is OK running with this fix?
No, I commented 4 hours ago because that's how long it took for the disk to fill up to that point. The test is just beginning, not ending.
Case in point, the disk usage is now up to 26GiB. So I would say that the fix has failed to cure the problem. :crying_cat_face:
Summary
We are having a lot of trouble running our bee node serving the swarm downloader (our hackathon project).
1) We are running a bee node at https://swarm.dapplets.org
2) The node takes all available space on the HDD and obviously starts rejecting the files we are uploading. Waiting for the swarm hash either fails immediately or takes too long. We have set db-capacity: 2621440 chunks (approx. 10GB) plus 5GB of free space, but the disk still gets fully consumed.
Steps to reproduce
1) Created a VPS server at Hetzner with the following hardware (CX11, 1 vCPU, 2 GB RAM, 20 GB disk) running Ubuntu 20.04.2 LTS
2) Installed Bee via
wget https://github.com/ethersphere/bee/releases/download/v0.5.0/bee_0.5.0_amd64.deb
sudo dpkg -i bee_0.5.0_amd64.deb
3) Configured it as in the config below
4) Installed the nginx web server and configured a reverse proxy from https://swarm.dapplets.org to http://localhost:1633 with a Let's Encrypt SSL certificate
5) Uploaded files to the node via POST https://swarm.dapplets.org/files/
6) After a while, disk space runs out

Expected behavior
I expect to see 5GB of free space :)
Actual behavior
1) Disk space runs out
2) The log contains a lot of errors about it
3) Cannot upload a file; the node responds with HTTP 500 Internal Server Error
Config /etc/bee/bee.yaml
Uncommented lines from the config file: