Closed · kiahmed · closed 5 years ago
You can set up online_delete, but which database(s) are actually eating up that data?
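If you want to check for yourself, something like the following shows which per-database directory is largest. This is a hedged sketch: `/var/lib/rippled/db` is only the stock default; substitute whatever the `[database_path]` stanza in your `rippled.cfg` actually names.

```shell
# The default database_path is assumed here; point DB_DIR at the
# [database_path] value from your rippled.cfg if it differs.
DB_DIR="${DB_DIR:-/var/lib/rippled/db}"
du -sh "$DB_DIR"/* 2>/dev/null | sort -h
```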
online_delete=1 in the config file? Here is what the sizes look like:
I haven't used RocksDB in a while, but yeah, that seems like most of the space is used up there.
What's your full config file and the output of server_info?
I already had online_delete=1000000001, and the server_info output is the following:
```json
{
  "result": {
    "info": {
      "build_version": "1.1.0-b2",
      "complete_ledgers": "9886340-9887754",
      "fetch_pack": 755,
      "hostid": "0cd8f0411338",
      "io_latency_ms": 1,
      "jq_trans_overflow": "0",
      "last_close": { "converge_time_s": 1.999, "proposers": 16 },
      "load": {
        "job_types": [
          { "avg_time": 6, "job_type": "publishAcqLedger", "peak_time": 41, "per_second": 6 },
          { "job_type": "untrustedValidation", "per_second": 3 },
          { "job_type": "ledgerRequest", "per_second": 1 },
          { "job_type": "untrustedProposal", "per_second": 1 },
          { "job_type": "ledgerData", "peak_time": 2, "per_second": 5 },
          { "in_progress": 1, "job_type": "clientCommand" },
          { "job_type": "transaction", "per_second": 10 },
          { "job_type": "batch", "per_second": 8 },
          { "avg_time": 1, "job_type": "advanceLedger", "peak_time": 43, "per_second": 3 },
          { "job_type": "trustedValidation", "peak_time": 2, "per_second": 3 },
          { "job_type": "writeObjects", "peak_time": 1, "per_second": 28 },
          { "job_type": "trustedProposal", "per_second": 5 },
          { "job_type": "peerCommand", "per_second": 599 },
          { "job_type": "diskAccess", "peak_time": 2, "per_second": 2 },
          { "job_type": "processTransaction", "per_second": 10 },
          { "job_type": "SyncReadNode", "per_second": 15 },
          { "job_type": "AsyncReadNode", "per_second": 1 },
          { "job_type": "WriteNode", "per_second": 67 }
        ],
        "threads": 6
      },
      "load_factor": 1,
      "peer_disconnects": "0",
      "peer_disconnects_resources": "0",
      "peers": 11,
      "pubkey_node": "n9K7WceQjoxeZxjTon4TTXtLm2EiJckAJaXyS7X3UWa1cvshrm3F",
      "pubkey_validator": "none",
      "server_state": "full",
      "state_accounting": {
        "connected": { "duration_us": "122830986", "transitions": 1 },
        "disconnected": { "duration_us": "1407365", "transitions": 1 },
        "full": { "duration_us": "153632364", "transitions": 1 },
        "syncing": { "duration_us": "61042911", "transitions": 1 },
        "tracking": { "duration_us": "0", "transitions": 1 }
      },
      "time": "2018-Jun-10 21:02:09.014049",
      "uptime": 339,
      "validated_ledger": {
        "base_fee_xrp": 0.00001,
        "hash": "E8DD70438D9835FB3412CCF1C3B573F5A78449F106E186D59A9B74BB56B43C03",
        "reserve_base_xrp": 20,
        "reserve_inc_xrp": 5,
        "seq": 9887754
      },
      "validation_quorum": 11,
      "validator_list_expires": "2018-Jun-18 00:00:00.000000000"
    },
    "status": "success"
  }
}
```
`online_delete=1000000001` means you want to keep a billion ledgers before you start removing any (there are currently about 39 million, so you're planning to store history for roughly a century). Your uptime is only about 5 minutes, and the <2k ledgers available according to that server_info output are not that large. What's in your config file? That seems to be the root of your problem.
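For reference, a minimal sketch of the relevant `rippled.cfg` stanzas with a modest retention window. The backend type, path, and values here are illustrative, not a recommendation for any particular deployment:

```
[node_db]
type=RocksDB
path=/var/lib/rippled/db/rocksdb
# keep roughly the most recent 2000 validated ledgers, then prune online
online_delete=2000
advisory_delete=0

[ledger_history]
2000
```

As I understand it, rippled expects `online_delete` to be at least as large as `ledger_history`, so keep the two values consistent.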
Well, I stopped it because of the hard disk issue.. just restarted it to show you the info. So, I will give it a try with a lower number and see what happens.
With lower online_delete and ledger_history values it's been running for 12 hrs now, taking 1.5 GB. When I try to get account info, though, it gives a new error:

```json
{
  "result": {
    "error": "noNetwork",
    "error_code": 17,
    "error_message": "InsufficientNetworkMode",
    "request": {
      "account": "r3AeoS4gedy7YY6icSL61wGMZfjwMoBV4y",
      "command": "account_info",
      "ledger_index": "current",
      "queue": true,
      "strict": true
    },
    "status": "error"
  }
}
```
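A noNetwork / InsufficientNetworkMode error generally means the server hasn't reached a synced state yet, which you can see in the `server_state` field of server_info. A hedged sketch (hypothetical helper, standard library only) of gating account queries on that field:

```python
import json

def is_synced(server_info_json: str) -> bool:
    """Return True when the server reports a usable network state.

    "full", "proposing", and "validating" can answer ledger queries;
    "connected"/"syncing" typically produce noNetwork errors.
    """
    info = json.loads(server_info_json)["result"]["info"]
    return info.get("server_state") in {"full", "proposing", "validating"}

# Example with a trimmed-down server_info response:
sample = '{"result": {"info": {"server_state": "syncing"}}}'
print(is_synced(sample))  # False: still syncing, account_info would fail
```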
Running with online_delete=256, fetch_depth=128 and node_size=tiny, the disk size is still 38 GB.. it's hard to keep up. Any workaround to keep it low?
Excessive disk space usage can occur under some circumstances; we are working to improve resource utilization and reduce the disk footprint. In the meantime, you can simply shut down your server, remove the database files, and restart.
Is there a way to prune testnet data, or to keep only the local data? Running a test node chewed up 40 GB and stopped running because of disk space. This is totally unnecessary: when you run a node in the cloud, disk space matters and can actually cost some good bucks.