ethersphere / bee

Bee is a Swarm client implemented in Go. It's the basic building block for the Swarm network: a private, decentralized, and self-sustaining network for permissionless publishing and access to your (application) data.
https://www.ethswarm.org

Running out of disk space #1258

Closed. alsakhaev closed this issue 1 year ago.

alsakhaev commented 3 years ago

Summary

We are having a number of problems running our bee node, which serves the Swarm downloader (our hackathon project). 1) We are running a bee node at https://swarm.dapplets.org 2) The node takes all available space on the HDD and then starts rejecting the files we upload; waiting for the swarm hash either fails immediately or takes far too long. We have set db-capacity to 2621440 chunks (approx. 10 GB) plus 5 GB of free space, but the disk gets fully consumed.

Steps to reproduce

1) Created a VPS in Hetzner (CX11: 1 vCPU, 2 GB RAM, 20 GB disk) with Ubuntu 20.04.2 LTS
2) Installed Bee via wget https://github.com/ethersphere/bee/releases/download/v0.5.0/bee_0.5.0_amd64.deb and sudo dpkg -i bee_0.5.0_amd64.deb
3) Configured the node as in the config below
4) Installed the nginx web server and configured a reverse proxy from https://swarm.dapplets.org to http://localhost:1633 with SSL from Let's Encrypt
5) Uploaded files to the node via POST https://swarm.dapplets.org/files/ (see the upload sketch after this list)
6) After a while, disk space runs out
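
For reference, a minimal Go sketch of the upload in step 5, assuming the legacy /files endpoint of bee v0.5.x, which accepts a raw request body and answers with a JSON object containing a reference field (later releases changed the upload API, so treat the endpoint and response shape as assumptions):

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	data, err := os.ReadFile("example.txt")
	if err != nil {
		log.Fatal(err)
	}

	// POST the file body to the bee API (v0.5.x exposed uploads at /files).
	resp, err := http.Post(
		"http://localhost:1633/files?name=example.txt",
		"application/octet-stream",
		bytes.NewReader(data),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusCreated {
		log.Fatalf("upload failed: %s", resp.Status)
	}

	// The node answers with the swarm reference of the uploaded content.
	var out struct {
		Reference string `json:"reference"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println("swarm hash:", out.Reference)
}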

Expected behavior

I expect to see 5 GB of free space :)

Actual behavior

1) Disk space runs out 2) The log contains many errors about it 3) Files cannot be uploaded; the node responds with HTTP 500 Internal Server Error

Config /etc/bee/bee.yaml

Uncommented lines from config file:

api-addr: 127.0.0.1:1633
clef-signer-endpoint: /var/lib/bee-clef/clef.ipc
config: /etc/bee/bee.yaml
data-dir: /var/lib/bee
db-capacity: 2621440
gateway-mode: true
password-file: /var/lib/bee/password
swap-enable: true
swap-endpoint: https://rpc.slock.it/goerli
Eknir commented 3 years ago

Thank you for reporting the bug! We will look into it shortly.

jpritikin commented 3 years ago

Tangential suggestion: it should be pretty easy to compute a rough estimate of the maximum possible disk usage from db-capacity and compare it to the disk space actually available. If the two are grossly out of whack, bee should log a suggested new value for db-capacity that will not exceed the available disk space.
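
A minimal sketch of that check on Linux, assuming (purely for illustration) an average on-disk footprint of about 8 KiB per chunk once index overhead is included; the real per-chunk cost depends on the localstore implementation and would need to be measured:

package main

import (
	"fmt"
	"syscall"
)

// approxBytesPerChunk is an assumed average on-disk footprint per chunk
// (4 KiB of chunk data plus index overhead); tune to match real measurements.
const approxBytesPerChunk = 8 * 1024

func main() {
	const dbCapacity = 2621440 // chunks, from bee.yaml
	dataDir := "/var/lib/bee"

	estimated := uint64(dbCapacity) * approxBytesPerChunk

	// Query the filesystem that holds the data directory (Linux-specific).
	var st syscall.Statfs_t
	if err := syscall.Statfs(dataDir, &st); err != nil {
		panic(err)
	}
	available := st.Bavail * uint64(st.Bsize)

	fmt.Printf("estimated max localstore size: %.1f GB\n", float64(estimated)/1e9)
	fmt.Printf("disk space available:          %.1f GB\n", float64(available)/1e9)

	if estimated > available {
		// Suggest a capacity that fits the space we actually have.
		suggested := available / approxBytesPerChunk
		fmt.Printf("db-capacity %d will not fit; consider db-capacity <= %d\n",
			dbCapacity, suggested)
	}
}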

jpritikin commented 3 years ago

I can reproduce this problem. It looks like disk space accounting does not include uploaded files. When I restart bee, a ton of disk space is immediately freed up as db-capacity is re-applied.

jpritikin commented 3 years ago

Hm, bee isn't releasing all of the disk space even after a restart:

root@salvia /o/bee# grep db-cap /etc/bee/bee.yaml
db-capacity: 5000000
root@salvia /o/bee# ls
keys/  localstore/  password  statestore/
root@salvia /o/bee# du -h -s .
111G    .
acud commented 3 years ago

Please try to give as much information as to what you have done prior to this problem surfacing. I'm trying to reproduce this but so far no luck.

jpritikin commented 3 years ago

Can you say if at any point your db capacity was set above 5mil?

Yes, I tried 10mil. Once I realized that disk space management wasn't working, I reduced it back to 5mil.

Did you play around with the size?

On one node, I probably uploaded faster than the node could sync. For example, maybe I uploaded 30G of data to the node very quickly and then waited for it to sync.

I'm trying to reproduce this but so far no luck.

If you can provide some guidance about how not to trigger the issue, that would also help. I gather that I shouldn't mess with the db-capacity setting. Also, should I avoid uploading too fast?

I was trying to find where the limits were, to help with testing, but I am content to play within expected user behavior too.

I'm curious to hear from @alsakhaev too

significance commented 3 years ago

@Eknir @acud

message from bee-support

mfw78: I've found that on the 3 containers I've run, none of them respect the db-capacity limit.

sig: are you uploading any data to them?

mfw78: No

RealEpikur commented 3 years ago

+1: started a node on a Raspberry Pi with a 32 GB SD card; it ran out of disk space after 10 hours

ronald72-gh commented 3 years ago

+1: have set up docker-based nodes and all of their localstores have easily surpassed the db-capacity limit, now using between 30 GB and 40 GB each

mfw78 commented 3 years ago

+1: Running multiple bees in Kubernetes containers. Each bee exhausts its disk space allocation (doubling the db capacity has no effect other than consuming more space, which is then exceeded as well).

Eknir commented 3 years ago

Thanks, all, for the comments and reports. We are releasing soon, and the release includes several improvements that aim to address this issue. We would greatly appreciate it if you could try it out and report back here.

mfw78 commented 3 years ago

I can confirm that with 0.5.3 the db-capacity seems to be better respected; the 6 nodes I'm running show the following disk usage: 28G / 21G / 28G / 28G / 29G / 27G

Eknir commented 3 years ago

This issue can be reliably reproduced with a Raspberry Pi.

luowenw commented 3 years ago
(screenshot, 2021-04-08)

I am running 0.5.3 with the default db-capacity. I can see that bee is garbage collecting at the same time as it consumes more space. Once garbage collection falls behind and disk usage reaches 100%, nothing works anymore: the log keeps reporting "no space left" and garbage collection also stops working.

Eknir commented 3 years ago

@zelig @acud are you two working on this as part of the postage stamps work? Shall I assign this issue to the current sprint?

ethernian commented 3 years ago

The bug has a severe impact on the entire network, because people are simply purging the localstore of their nodes, causing data loss. There is no way to release bee without killing this bug.

github-actions[bot] commented 2 years ago

This issue is stale because it has been open 60 days with no activity. Remove stale label or comment or this will be closed in 5 days.

tmm360 commented 2 years ago

Any news on this? The issue is still there. I'm using the default configuration for disk space (BEE_CACHE_CAPACITY=1000000), which should be ~4 GB, but this is my disk space graph.

(screenshot: disk space graph, 2022-01-22)

I didn't perform any uploads on the node. This is a VERY important issue to fix.

acud commented 2 years ago

It should be resolved with the latest release. However, the problem is multi-tiered, so shipping a database migration that would fix the problem on nodes where it is already exacerbated was not trivial. If you db nuke your node and allow it to resync, the problem should be resolved.

ldeffenb commented 2 years ago

Any plans to publish guidance on this? In particular, how to detect whether the issue exists within a node, so that we don't just start nuking everything and dropping retrievability of chunks already stored in the swarm.

tmm360 commented 2 years ago

I've db nuked two of my nodes; let's see how it evolves.

agazso commented 2 years ago

@ldeffenb @tmm360 do you still experience this issue?

jpritikin commented 2 years ago

Disk usage seems stable and not growing. Yesterday I installed bee 1.5.0-dda5606e. The sharky migration finished, but disk consumption doubled. How can I delete the old database and expire the blocks that shouldn't be stored?

jpritikin commented 2 years ago

I deleted the old database using bee nuke. Two weeks ago, disk usage was back to zero. As I write, disk usage is back up to 30GiB.

jpritikin commented 2 years ago

Good amount of traffic today. Disk usage is up to 34.8GiB.

jpritikin commented 2 years ago

I upgraded to 1.5.1 today. Disk usage is up to 47.9GiB. At this rate, I'll have to nuke my db again in a few weeks.

ldeffenb commented 2 years ago

If you run the following command, substituting the proper IP and debug port, what value is displayed? It should be 2 or 3 on testnet and 8 or 9 on mainnet.

curl http://127.0.0.1:1635/topology | jq .depth

jpritikin commented 2 years ago

I'm on mainnet. Currently it says 6

ldeffenb commented 2 years ago

Are you sure you have inbound connections open and forwarded to your p2p-addr (default 1634)? With a depth of only 6, it seems that you may not be receiving inbound connections. A shallower depth may cause your node to believe it needs to store more chunks, since its neighborhood is larger.
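
To make the depth effect concrete, a rough illustration (the total chunk count below is hypothetical; the point is only that a node's neighborhood covers roughly 1/2^depth of the address space, so its share of chunks halves with every extra level of depth):

package main

import "fmt"

func main() {
	// Hypothetical total number of chunks the network expects nodes to reserve.
	const totalChunks = 256_000_000

	// A node at depth d is responsible for roughly 1/2^d of the address space.
	for _, depth := range []uint{6, 8, 9} {
		share := totalChunks >> depth // totalChunks / 2^depth
		fmt.Printf("depth %d: ~%d chunks (~%.1f GB at 4 KiB/chunk)\n",
			depth, share, float64(share)*4096/1e9)
	}
	// Depth 6 stores 2^3 = 8 times more than depth 9,
	// which is why a shallow depth inflates disk usage.
}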

ldeffenb commented 2 years ago

Try the following command. You should see at least one, hopefully more, inbound connections. You may also need to set nat-addr if you are behind a NAT without UPNP capability.

curl http://127.0.0.1:1635/topology | jq . | grep "inbound"

ldeffenb commented 2 years ago

Also, the following command should show "Public":

curl http://127.0.0.1:1635/topology | jq .reachability
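
The three checks above can also be combined programmatically. A small sketch against the debug API, assuming the default debug port 1635 and the depth and reachability fields shown above; the inbound count simply mirrors the grep by counting occurrences of the string in the raw response:

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"log"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:1635/topology")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}

	// Pull out the two fields checked above.
	var topo struct {
		Depth        int    `json:"depth"`
		Reachability string `json:"reachability"`
	}
	if err := json.Unmarshal(body, &topo); err != nil {
		log.Fatal(err)
	}

	// Count "inbound" occurrences in the raw response, mirroring the grep above.
	inbound := strings.Count(string(body), "inbound")

	fmt.Printf("depth=%d reachability=%q inbound mentions=%d\n",
		topo.Depth, topo.Reachability, inbound)
}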

jpritikin commented 2 years ago

curl http://127.0.0.1:1635/topology | jq .reachability

"Public"

curl http://127.0.0.1:1635/topology | jq . | grep "inbound" | wc -l

9

Are you sure you have inbound connections open and forwarded to your p2p-addr (default 1634)?

Yeah, pretty sure. I have uPNP enabled on my OpenWRT router.

ldeffenb commented 2 years ago

OK, I'm out of ideas now. Sorry, but the node is going to store what it thinks it needs to store. Nuking your DB periodically will just chew up your bandwidth and strain your neighborhood, which has to push it all back to you, not to mention risking actually dropping the only copy of some chunks that your node was supposed to store. That is, unless there's still a lurking issue somewhere that hasn't been uncovered and isn't visible with the metrics we currently have available.

ldeffenb commented 2 years ago

One more thought after having gone back through and re-read everything: if you are still uploading through this node, are you asking it to pin the content on upload? Pinned content is not garbage-collected and is also not counted against the db-capacity configuration, but it is all dropped on a db nuke, as far as I can tell.
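
A quick way to check for pinned content, assuming a bee 1.x node where the API (default port 1633) lists pinned references at GET /pins and returns a JSON object with a references array; the pinning endpoints have moved between releases, so verify against your version's API docs:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:1633/pins")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	// Expected shape: {"references": ["<swarm ref>", ...]}
	var out struct {
		References []string `json:"references"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("pinned references: %d\n", len(out.References))
	for _, ref := range out.References {
		fmt.Println(ref)
	}
}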

jpritikin commented 2 years ago

Nuking your DB periodically will just chew up your bandwidth and strain your neighborhood

OK, I'm happy to wait until my disk fills up. Maybe devs will figure out a solution by then.

If you are still uploading through this node

I am not. I haven't tried to upload for months.

acud commented 2 years ago

@jpritikin I am adding a new db indices command to the binary so we can have a bit more info about the problem. Could you please build https://github.com/ethersphere/bee/pull/2924, run bee db indices --data-dir <path-to-data-dir>, and paste the output here?

acud commented 2 years ago

Also, @jpritikin, please don't use that build to run bee normally. Use the current stable version to run bee with bee start (there are still some things that we're ironing out before the next release).

jpritikin commented 2 years ago

Here is the output:

root@glow:~# ./bee db indices --data-dir /opt/bee
INFO[2022-05-02T17:19:40-04:00] getting db indices with data-dir at /opt/bee 
INFO[2022-05-02T17:19:40-04:00] database capacity: 1000000 chunks (approximately 20.3GB) 
INFO[2022-05-02T17:20:29-04:00] localstore index: gcSize, value: 950674      
INFO[2022-05-02T17:20:29-04:00] localstore index: retrievalAccessIndex, value: 1896105 
INFO[2022-05-02T17:20:29-04:00] localstore index: postageChunksIndex, value: 1896105 
INFO[2022-05-02T17:20:29-04:00] localstore index: retrievalDataIndex, value: 1896105 
INFO[2022-05-02T17:20:29-04:00] localstore index: gcIndex, value: 948228     
INFO[2022-05-02T17:20:29-04:00] localstore index: postageRadiusIndex, value: 703 
INFO[2022-05-02T17:20:29-04:00] localstore index: reserveSize, value: 957416 
INFO[2022-05-02T17:20:29-04:00] localstore index: pullIndex, value: 1895775  
INFO[2022-05-02T17:20:29-04:00] localstore index: pinIndex, value: 947877    
INFO[2022-05-02T17:20:29-04:00] localstore index: postageIndexIndex, value: 5173925 
INFO[2022-05-02T17:20:29-04:00] localstore index: pushIndex, value: 0        
INFO[2022-05-02T17:20:29-04:00] done. took 48.935545587s                     
acud commented 2 years ago

w00t. and this takes how many gigs? can we have a du -d 1 -h of the localstore directory?

jpritikin commented 2 years ago

Here you go,

root@glow:/opt/bee/localstore# du -d 1 -h
42G ./sharky
43G .
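
As a rough back-of-the-envelope check (assuming at most about 4 KiB stored per chunk, an assumption rather than a measured figure), the roughly 1.9 million chunks reported by retrievalDataIndex would account for only around 8 GB, far less than the 42 GiB sharky occupies, so most of the on-disk space does not appear to be live chunk data:

package main

import "fmt"

func main() {
	const (
		chunks        = 1_896_105 // retrievalDataIndex from the output above
		bytesPerChunk = 4096      // assumed upper bound per chunk slot
		sharkyBytes   = 42 << 30  // 42 GiB reported by du
	)

	live := float64(chunks*bytesPerChunk) / 1e9
	total := float64(sharkyBytes) / 1e9

	fmt.Printf("live chunk data (approx): %.1f GB\n", live)  // ~7.8 GB
	fmt.Printf("sharky on disk:           %.1f GB\n", total) // ~45.1 GB
	fmt.Printf("unexplained:              %.1f GB\n", total-live)
}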
acud commented 2 years ago

can you also provide the output of your /topology endpoint on the debug api?

jpritikin commented 2 years ago

Here is topology output, topo.txt

acud commented 2 years ago

Thanks. Would you be able to post the free_* files from the localstore/sharky directory?

jpritikin commented 2 years ago

Like this? https://drive.google.com/file/d/1pSXKBGzcYKtAqRWFUDQb4-Z4uHiJUqSk/view?usp=sharing

acud commented 2 years ago

Thanks @jpritikin, this was very helpful. I have some possible direction on the problem. Since you can reproduce it, could you please try the following?

many thanks in advance!

jpritikin commented 2 years ago

Okay, I'm running this code.

jpritikin commented 2 years ago

Disk usage is up to 21.5GiB.

acud commented 2 years ago

@jpritikin so if I understand correctly everything is OK running with this fix?

jpritikin commented 2 years ago

so if I understand correctly everything is OK running with this fix?

No, I commented 4 hours ago because that's how long it took for the disk to fill up to that point. The test is beginning now, not ending.

Case in point: disk usage is now up to 26 GiB, so I would say the fix has failed to cure the problem. :crying_cat_face: