torkelrogstad opened this issue 1 year ago
Managed to get around this by increasing the file limit with ulimit -n LARGE_NUMBER. However, I've never had software crash on my machine before due to this. I think it would be reasonable for electrs to try to stay within the allotted limit.
Thanks for reporting this issue! Will reproduce on a Linux machine to collect file descriptor usage metrics.
I tried building this with Prometheus support in order to inspect the file descriptor usage, but got this result:
❯ cargo build --locked --release --all-features
Compiling electrs v0.10.0 (/Users/torkel/dev/rust/electrs)
error[E0432]: unresolved import `prometheus::process_collector`
--> src/metrics.rs:6:21
|
6 | use prometheus::process_collector::ProcessCollector;
| ^^^^^^^^^^^^^^^^^ could not find `process_collector` in `prometheus`
Unfortunately, process_collector doesn't support macOS :(
https://github.com/tikv/rust-prometheus/blob/6e81890773ef82e3bcc6c080d406543da1fb8073/src/process_collector.rs#L5
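Since ProcessCollector is Linux-only, one Linux-only fallback for watching descriptor usage is to count the entries in /proc/self/fd directly. This is an illustration, not electrs code; it assumes a Linux /proc filesystem:

```rust
use std::fs;

// Sketch (assumes Linux, where /proc exists): count this process's open
// file descriptors by listing /proc/self/fd. This is roughly the value
// that rust-prometheus' ProcessCollector exports as process_open_fds.
fn open_fd_count() -> std::io::Result<usize> {
    // Each entry in /proc/self/fd is one open descriptor; note that the
    // read_dir handle itself is briefly counted too.
    Ok(fs::read_dir("/proc/self/fd")?.count())
}

fn main() {
    println!("open fds: {}", open_fd_count().unwrap());
}
```

Logging this number periodically during the initial sync would show how close compaction gets to the ulimit -n ceiling.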
According to Prometheus, electrs uses ~350 file descriptors during compaction.
> However I've never had software crash on my machine before due to this.
I assume that the reason behind this is that the initial sync process writes many SST files (containing the index) and then merges them during the full compaction. In order to merge them, RocksDB needs to open all of them - failing if there are more than 256 SST files to be merged.
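The cost described above can be seen in a toy k-way merge (an illustration only, not electrs or RocksDB code): every sorted input run has to stay open for the entire merge, so descriptor usage scales with the number of files being merged.

```rust
use std::fs::{self, File};
use std::io::{BufRead, BufReader, Write};

// Illustration: merging N sorted runs requires N simultaneously open
// handles, just as RocksDB's full compaction must hold every SST file
// open - which fails once the SST count exceeds ulimit -n.
fn merge_sorted_files(paths: &[std::path::PathBuf]) -> Vec<u64> {
    // One open handle per input file for the whole merge.
    let mut readers: Vec<_> = paths
        .iter()
        .map(|p| BufReader::new(File::open(p).unwrap()).lines())
        .collect();
    // Current smallest unconsumed value from each file.
    let mut heads: Vec<Option<u64>> = readers
        .iter_mut()
        .map(|r| r.next().map(|l| l.unwrap().parse().unwrap()))
        .collect();
    let mut out = Vec::new();
    loop {
        // Pick the minimum among all files' current heads.
        let min = heads
            .iter()
            .enumerate()
            .filter_map(|(i, h)| h.map(|v| (v, i)))
            .min();
        match min {
            None => break,
            Some((v, i)) => {
                out.push(v);
                heads[i] = readers[i].next().map(|l| l.unwrap().parse().unwrap());
            }
        }
    }
    out
}

fn main() {
    let dir = std::env::temp_dir().join("merge-demo");
    fs::create_dir_all(&dir).unwrap();
    let mut paths = Vec::new();
    for i in 0..4u64 {
        let p = dir.join(format!("{i}.run"));
        let mut f = File::create(&p).unwrap();
        // Each "run" is internally sorted, like the keys in an SST file.
        for v in [i, i + 10, i + 20] {
            writeln!(f, "{v}").unwrap();
        }
        paths.push(p);
    }
    let merged = merge_sorted_files(&paths);
    assert!(merged.windows(2).all(|w| w[0] <= w[1]));
    println!("{merged:?}");
}
```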
This issue is expected since the blockchain is growing, and so is the number of files to merge during the full compaction.
We'll probably need to update the docs to suggest increasing ulimit -n to a larger value (on my machine it is set to 1024).
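Beyond the docs, electrs could warn at startup when the soft limit looks too low. A minimal Linux-only sketch (not electrs code; the 4096 threshold is an arbitrary example, and it reads /proc/self/limits rather than calling getrlimit to stay dependency-free):

```rust
use std::fs;

// Sketch (assumes Linux): read the soft RLIMIT_NOFILE from
// /proc/self/limits so a startup check can warn before compaction fails.
fn soft_open_file_limit() -> Option<u64> {
    let limits = fs::read_to_string("/proc/self/limits").ok()?;
    // The relevant line looks like:
    // "Max open files            1024                 1048576              files"
    let line = limits.lines().find(|l| l.starts_with("Max open files"))?;
    // Field 3 (0-indexed) is the soft limit.
    line.split_whitespace().nth(3)?.parse().ok()
}

fn main() {
    match soft_open_file_limit() {
        Some(n) if n < 4096 => {
            eprintln!("warning: open file limit is only {n}; RocksDB compaction may fail")
        }
        Some(n) => println!("open file limit: {n}"),
        None => eprintln!("could not determine open file limit"),
    }
}
```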
Right, interesting, thanks! Probably a good idea to update the docs, if it isn't possible to get RocksDB to be stingier with the files. For reference, my ulimit -n is 256, which I believe is the default on macOS.
Describe the bug
Block compaction panics due to too many open files.
Electrs version 0.10.0, built from master.
To Reproduce
Steps to reproduce the behavior:
cargo build --locked --release
./target/release/electrs --log-filters=DEBUG --skip-block-download-wait
thread 'main' panicked at 'DB::put failed: Error { message: "IO error: While open a file for random read: ./db/bitcoin/000213.sst: Too many open files" }', src/db.rs:342:14
Expected behavior
I'd expect block compaction to finish.
Configuration
I don't use a configuration file for this.
Environment variables (ELECTRS_X=Y;...): none
Arguments (--foo): --log-filters=DEBUG --skip-block-download-wait
System running electrs
Additional context
ulimit -n reports 256, but I'm not sure if this is the correct number for max open files. Posting logs.