braydonf closed this issue 3 years ago
Hi, could you check that your operating system open files limit is set high enough - at least 20000, better a lot more? A symlinked directory should not cause you any problems, we use it in our setup too.
$ cat /proc/sys/fs/file-max
6564351
$ ulimit -Hn
1048576
$ ulimit -Sn
1024
$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 257024
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 257024
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
An open files limit of 1024 is really low, you must increase it.
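For reference, a minimal way to raise the limit (a sketch, assuming a Linux system with pam_limits enabled; the "blockbook" user name is a placeholder for whichever account runs the daemon):

# Raise the soft limit for the current shell session only;
# it must stay at or below the hard limit shown by ulimit -Hn:
ulimit -Sn 20000

# Make it persistent for future logins of the account running blockbook
# ("blockbook" here is an assumed user name, substitute your own):
echo 'blockbook soft nofile 65536' | sudo tee -a /etc/security/limits.conf
echo 'blockbook hard nofile 65536' | sudo tee -a /etc/security/limits.conf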
Okay, I increased it and am attempting again. It may be worth noting this in https://github.com/trezor/blockbook/pull/474.
Hi, is the issue resolved?
I was able to sync with the system modification; the default was 1024 and I changed it to 20000 for the initial sync. The only remaining thing would be to either document the system adjustment or relax the requirement for maximum open files.
I had left the open files limit at the default value (1024) after the initial sync and ran into an error, "database is in inconsistent state and cannot be used":
$:~/go/src/blockbook$ ./blockbook -sync -blockchaincfg=build/blockchaincfg.json -internal=:9030 -public=:9130 -certfile=server/testcert -logtostderr
I1005 14:15:30.458238 54464 blockbook.go:136] Blockbook: {Version:unknown GitCommit:unknown BuildTime:unknown GoVersion:go1.14.2 OSArch:linux/amd64}, debug mode false
I1005 14:15:30.459317 54464 bitcoinrpc.go:136] rpc: block chain mainnet
I1005 14:15:30.459330 54464 mempool_bitcoin_type.go:56] mempool: starting with 8*2 sync workers
I1005 14:15:30.459337 54464 rocksdb.go:148] rocksdb: opening ./data, required data version 5, cache size 536870912, max open files 16384
I1005 14:15:34.869575 54464 rocksdb.go:1696] loaded 647361 block times
I1005 14:15:34.870966 54464 worker.go:1781] GetSystemInfo finished in 895.437µs
I1005 14:15:34.871106 54464 internal.go:68] internal server: starting to listen on https://:9030
I1005 14:15:34.876702 54464 public.go:122] public server starting to listen on https://:9130
I1005 14:15:34.877165 54464 sync.go:111] resync: local at 647360 is behind
I1005 14:15:34.878519 54464 sync.go:134] resync: parallel sync of blocks 647361-651423, using 8 workers
I1005 14:15:34.878690 54464 bulkconnect.go:56] rocksdb: bulk connect init, db set to inconsistent state
F1005 14:15:42.156476 54464 sync.go:275] writeBlockWorker 647366 000000000000000000059f7f4cf8bbe76041df3daef7d66ad99a344ac9a3612b error IO error: While open a file for random read: ./data/152375.sst: Too many open files
$:~/go/src/blockbook$ ulimit -Sn
1024
$:~/go/src/blockbook$ ulimit -Sn 20000
$:~/go/src/blockbook$ ulimit -Sn
20000
$:~/go/src/blockbook$ ./blockbook -sync -blockchaincfg=build/blockchaincfg.json -internal=:9030 -public=:9130 -certfile=server/testcert -logtostderr
I1005 14:16:38.554670 54534 blockbook.go:136] Blockbook: {Version:unknown GitCommit:unknown BuildTime:unknown GoVersion:go1.14.2 OSArch:linux/amd64}, debug mode false
I1005 14:16:38.555990 54534 bitcoinrpc.go:136] rpc: block chain mainnet
I1005 14:16:38.556005 54534 mempool_bitcoin_type.go:56] mempool: starting with 8*2 sync workers
I1005 14:16:38.556012 54534 rocksdb.go:148] rocksdb: opening ./data, required data version 5, cache size 536870912, max open files 16384
I1005 14:16:39.266498 54534 rocksdb.go:1696] loaded 647361 block times
E1005 14:16:39.267289 54534 blockbook.go:210] internalState: database is in inconsistent state and cannot be used
I1005 14:16:39.267298 54534 rocksdb.go:287] rocksdb: close
You have to leave the limit high even after the initial sync.
The data are stored in multiple files on disk. When you start syncing, the number of files slowly grows until it hits the limit (if the limit is set too small). The files are still there after you finish the sync, still holding data, so you cannot decrease the limit.
Unfortunately, the database is now corrupted and you have to start again (unless you have a backup from before it became corrupted), with a high enough limit.
Sorry.
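As a rough sanity check, you can compare the number of RocksDB table files in the database directory against the current soft limit; the error above shows the .sst files live directly under ./data:

# Count the RocksDB .sst files blockbook may need to hold open:
ls ./data/*.sst | wc -l

# Compare with the current soft limit on open files:
ulimit -Sn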
I encountered this error message while running the initial synchronization:
After restarting again, it now gives the error:
And it appears I need to restart the synchronization again from the beginning.
I started blockbook using:
I symlinked the data directory to a drive with more space after running out of disk space on the first run.
Is there a configuration option to change the max open files or another way to resolve this issue?
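If blockbook runs as a systemd service, the limit can also be pinned on the unit itself rather than on the login shell (a sketch; the unit name blockbook.service is an assumption, substitute whatever your deployment uses):

# Create a drop-in override raising the open-files limit for the service
# ("blockbook.service" is a hypothetical unit name):
sudo mkdir -p /etc/systemd/system/blockbook.service.d
printf '[Service]\nLimitNOFILE=65536\n' | sudo tee /etc/systemd/system/blockbook.service.d/nofile.conf
sudo systemctl daemon-reload
sudo systemctl restart blockbook.service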