XRPLF / rippled

Decentralized cryptocurrency blockchain daemon implementing the XRP Ledger protocol in C++
https://xrpl.org
ISC License

"complete_ledgers" : "empty","network_ledger" : "waiting" #2433

Closed · jokeryg closed this issue 6 years ago

jokeryg commented 6 years ago

server_info:

```
[root@jinpai etc]# rippled server_info
Loading: "/opt/ripple/etc/rippled.cfg"
2018-Mar-15 08:25:53 HTTPClient:NFO Connecting to 127.0.0.1:5005

{ "result" : { "info" : { "build_version" : "0.90.0", "closed_ledger" : { "age" : 3, "base_fee_xrp" : 1e-05, "hash" : "269CE042D096AAC4DD082D23304742600EF0B1F1B3B569954E08FB8097017C78", "reserve_base_xrp" : 200, "reserve_inc_xrp" : 50, "seq" : 151 }, "complete_ledgers" : "empty", "fetch_pack" : 8, "hostid" : "jinpai.server", "io_latency_ms" : 1, "jq_trans_overflow" : "0", "last_close" : { "converge_time_s" : 4, "proposers" : 0 }, "load" : { "job_types" : [ { "job_type" : "untrustedValidation", "peak_time" : 1 }, { "job_type" : "ledgerRequest", "per_second" : 1 }, { "job_type" : "untrustedProposal", "peak_time" : 88, "per_second" : 27, "waiting" : 28 }, { "avg_time" : 4087, "in_progress" : 2, "job_type" : "ledgerData", "peak_time" : 13050, "per_second" : 1, "waiting" : 6 }, { "in_progress" : 2, "job_type" : "clientCommand" }, { "job_type" : "transaction", "per_second" : 2 }, { "job_type" : "advanceLedger", "peak_time" : 16, "per_second" : 10 }, { "job_type" : "fetchTxnData", "per_second" : 34 }, { "avg_time" : 78, "in_progress" : 1, "job_type" : "trustedValidation", "peak_time" : 598, "per_second" : 4 }, { "avg_time" : 33, "in_progress" : 1, "job_type" : "writeObjects", "peak_time" : 453, "per_second" : 2 }, { "avg_time" : 3, "job_type" : "acceptLedger", "peak_time" : 5 }, { "avg_time" : 8, "job_type" : "trustedProposal", "peak_time" : 234, "per_second" : 6 }, { "avg_time" : 86, "job_type" : "heartbeat", "peak_time" : 341 }, { "avg_time" : 1, "job_type" : "peerCommand", "peak_time" : 8, "per_second" : 1191 }, { "avg_time" : 38, "job_type" : "diskAccess", "peak_time" : 518, "per_second" : 1 }, { "avg_time" : 8, "job_type" : "SyncReadNode", "peak_time" : 142, "per_second" : 1 }, { "avg_time" : 1, "job_type" : "AsyncReadNode", "peak_time" : 369, "per_second" : 764 }, { "job_type" : "WriteNode", "per_second" : 1 } ], "threads" : 6 }, "load_factor" : 1, "network_ledger" : "waiting", "peer_disconnects" : "64", "peer_disconnects_resources" : "0", "peers" : 15, "pubkey_node" : "n94ELz6n7dcEp2DfdPZzRBJ147GoUaxBTYyM7qKRRE6F1CNQib2k", "pubkey_validator" : "none", "published_ledger" : "none", "server_state" : "connected", "state_accounting" : { "connected" : { "duration_us" : "1985828721", "transitions" : 1 }, "disconnected" : { "duration_us" : "1215652", "transitions" : 1 }, "full" : { "duration_us" : "0", "transitions" : 0 }, "syncing" : { "duration_us" : "0", "transitions" : 0 }, "tracking" : { "duration_us" : "0", "transitions" : 0 } }, "uptime" : 1987, "validation_quorum" : 15, "validator_list_expires" : "2018-Mar-25 00:00:00" }, "status" : "success" } }
```

rippled.cfg:

```
[server]
port_rpc_admin_local
port_peer
port_ws_admin_local

[port_rpc_admin_local]
port = 5005
ip = 127.0.0.1
admin = 127.0.0.1
protocol = http

[port_peer]
port = 51235
ip = 0.0.0.0
protocol = peer

[port_ws_admin_local]
port = 6006
ip = 127.0.0.1
admin = 127.0.0.1
protocol = ws

[node_size]
huge

[node_db]
type=RocksDB
path=/home/rippled/db/rocksdb
open_files=2000
filter_bits=12
cache_mb=256
file_size_mb=8
file_size_mult=2
online_delete=2000
advisory_delete=0

[database_path]
/home/rippled/db

[debug_logfile]
/home/rippled/log/debug.log

[sntp_servers]
time.windows.com
time.apple.com
time.nist.gov
pool.ntp.org

[ips]
r.ripple.com 51235
54.84.21.230 51235
54.86.175.122 51235
54.186.248.91 51235
54.186.73.52 51235
184.173.45.38 51235
198.11.206.26 51235
169.55.164.29 51235
174.37.225.41 51235

[peers_max]
100

[validators_file]
validators.txt

[validators]
n9KPnVLn7ewVzHvn218DcEYsnWLzKerTDwhpofhk4Ym1RUq4TeGw RIP1
n9LFzWuhKNvXStHAuemfRKFVECLApowncMAM5chSCL9R5ECHGN4V RIP2
n94rSdgTyBNGvYg8pZXGuNt59Y5bGAZGxbxyvjDaqD9ceRAgD85P RIP3
n9LeQeDcLDMZKjx1TZtrXoLBLo5q1bR1sUQrWG7tEADFU6R27UBp RIP4
n9KF6RpvktjNs2MDBkmxpJbup4BKrKeMKDXPhaXkq7cKTwLmWkFr RIP5

[validation_quorum]
3

[rpc_startup]
{ "command": "log_level", "severity": "warning" }

[ssl_verify]
1
```

It has been running for a week, but "complete_ledgers" is always "empty". Please help me. Thanks.

jokeryg commented 6 years ago

Here is the current server_info output:

```
{ "result" : { "info" : { "build_version" : "0.90.0", "closed_ledger" : { "age" : 170, "base_fee_xrp" : 1e-05, "hash" : "123E7553AA61694AFD159572927B6D782ACA6E2331DB096525DEF0AB40D6BC97", "reserve_base_xrp" : 200, "reserve_inc_xrp" : 50, "seq" : 620 }, "complete_ledgers" : "empty", "fetch_pack" : 19881, "hostid" : "jinpai.server", "io_latency_ms" : 1, "jq_trans_overflow" : "0", "last_close" : { "converge_time_s" : 165.999, "proposers" : 16 }, "load" : { "job_types" : [ { "avg_time" : 20256, "job_type" : "ledgerData", "peak_time" : 66766, "per_second" : 5 }, { "in_progress" : 1, "job_type" : "clientCommand" }, { "in_progress" : 1, "job_type" : "trustedValidation" }, { "in_progress" : 1, "job_type" : "writeObjects", "per_second" : 1 }, { "in_progress" : 1, "job_type" : "acceptLedger" }, { "in_progress" : 1, "job_type" : "sweep" }, { "avg_time" : 6828, "job_type" : "peerCommand", "over_target" : true, "peak_time" : 27309, "per_second" : 1 }, { "avg_time" : 6, "job_type" : "SyncReadNode", "peak_time" : 52, "per_second" : 1 }, { "avg_time" : 1, "job_type" : "AsyncReadNode", "peak_time" : 77, "per_second" : 513 }, { "job_type" : "WriteNode", "per_second" : 5 } ], "threads" : 6 }, "load_factor" : 1, "network_ledger" : "waiting", "peer_disconnects" : "156", "peer_disconnects_resources" : "0", "peers" : 15, "pubkey_node" : "n94ELz6n7dcEp2DfdPZzRBJ147GoUaxBTYyM7qKRRE6F1CNQib2k", "pubkey_validator" : "none", "published_ledger" : "none", "server_state" : "connected", "state_accounting" : { "connected" : { "duration_us" : "5653763069", "transitions" : 1 }, "disconnected" : { "duration_us" : "1215652", "transitions" : 1 }, "full" : { "duration_us" : "0", "transitions" : 0 }, "syncing" : { "duration_us" : "0", "transitions" : 0 }, "tracking" : { "duration_us" : "0", "transitions" : 0 } }, "uptime" : 5655, "validation_quorum" : 15, "validator_list_expires" : "2018-Mar-25 00:00:00" }, "status" : "success" } }
```

miguelportilla commented 6 years ago

complete_ledgers and network_ledger are updated after the server has synchronized with the network. Looking at your server_info, your server never achieved synchronization, as the time spent in the full state (under state_accounting) is 0. This can happen for a number of reasons, but it appears your server's disk I/O is not keeping up. I highly recommend using an SSD. A node_size of huge requires at least 16GB of RAM, preferably 32GB. I recommend lowering it to small; otherwise your server may be swapping, which would hurt disk I/O performance.
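For reference, that is a one-stanza change in rippled.cfg (restart rippled afterwards for it to take effect). A minimal sketch of the stanza, matching the format used in the config posted above:

```
[node_size]
small
```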

jokeryg commented 6 years ago

I set "node_size" to "small","complete_ledgers" : "37255573-37255965"。 thanks.

But how does the "node_size" option affect the data?

r0bertz commented 6 years ago

https://wiki.ripple.com/Rippled_Troubleshooting#.5Bnode_size.5D

At this time, if you have performance issues, I would recommend simply upgrading the hardware. This is the easiest way to fix them. See the hardware recommendations here: https://ripple.com/build/rippled-setup/#network-and-hardware

passionofvc commented 6 years ago

Hi @r0bertz, @miguelportilla,

According to https://ripple.com/build/rippled-setup/#network-and-hardware, rippled 0.90 needs 32GB of RAM. Why is rippled so hungry for memory? 32GB seems very large, even in a production environment, if I only want to run a P2P node:

Operating System: Ubuntu 16.04+
CPU: Intel Xeon 3+ GHz processor with 4 cores and hyperthreading enabled
Disk: SSD
RAM:
For testing: 8GB+
For production: 32GB
Network: Enterprise data center network with a gigabit network interface on the host

miguelportilla commented 6 years ago

@passionofvc That recommendation is for a production system that expects heavy client use. A node that expects little client use can get away with 8GB if the node_size is small.

bachase commented 6 years ago

I don't see anything actionable on this issue for now. Please open a new issue as needed.

ddsalt commented 5 years ago

I'm running a huge node with 10,000 IOPS. After two weeks of running just fine, the system rebooted after patching, and now it can't seem to catch up. RAM and CPU on the box aren't being taxed, and inbound network traffic is a fairly steady 20,000,000 bytes per 5 minutes. I have another node with 8GB that's running OK, so I feel like it's more of a peer issue than anything. Is there a way to dump the current peers and look for new ones? Is there a way to test disk I/O to see if it's meeting the needs of the system? What does healthy look like? Thanks, Dave
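(On the peer question: rippled exposes an admin `peers` command that lists the server's current connections, so one rough way to inspect them is over the local admin JSON-RPC port, 5005 in the config posted earlier in this thread. A minimal Python sketch follows; field names can vary between rippled versions, so missing keys are handled defensively. For disk I/O, a general-purpose benchmark such as fio run against the node_db path is a reasonable sanity check, though the thread doesn't define a specific "healthy" threshold.)

```python
# Minimal sketch: list current peer connections via the admin "peers"
# command on the local JSON-RPC port (5005 per the config in this thread).
import json
import urllib.request

RPC_URL = "http://127.0.0.1:5005/"

payload = json.dumps({"method": "peers", "params": [{}]}).encode()
req = urllib.request.Request(RPC_URL, data=payload,
                             headers={"Content-Type": "application/json"})
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)["result"]

# Print a one-line summary per peer; unknown fields are shown as "?".
for peer in result.get("peers", []):
    print(peer.get("address", "?"),
          peer.get("version", "?"),
          "latency:", peer.get("latency", "?"))
```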