Yes, you should look at increasing the number of files that can be opened concurrently using ulimit. There may be a tradeoff in the amount of memory used once you've done this. This provides some general background on changing that setting and its impact: https://serverfault.com/questions/48717/practical-maximum-open-file-descriptors-ulimit-n-for-a-high-volume-system
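For reference, a rough sketch of checking and raising that limit on a Linux host where InfluxDB runs under systemd (the service name influxdb and the value 65536 are assumptions for illustration, not values from this thread):

# Soft and hard open-file limits for the current shell
ulimit -Sn
ulimit -Hn

# Limits the running influxd process was actually started with
grep "open files" /proc/$(pgrep -x influxd)/limits

# Raise the limit for the service via a systemd drop-in (assumed service name: influxdb)
sudo systemctl edit influxdb
# add in the editor that opens:
#   [Service]
#   LimitNOFILE=65536
sudo systemctl restart influxdb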
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please reopen if this issue is still important to you. Thank you for your contributions.
I'm encountering this issue while testing bulk ingest via the Python influxdb_client, reading from a SQL database and writing ~300 lines per call every ~2-3 s (InfluxDB 2.0.0 RC).
A restart solved the issue. Is there a way to prevent this from happening in InfluxDB OSS?
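One way to confirm it really is descriptor exhaustion (rather than something else) is to watch how many files the influxd process holds open while the ingest runs; a minimal sketch, assuming lsof is available:

# Count file descriptors (files, sockets, WAL/TSM files) currently held by influxd
lsof -p $(pgrep -x influxd) | wc -l

If that count climbs toward the process's open-file limit during bulk ingest, raising the limit, or batching writes so fewer connections are opened, should remove the need for a restart.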
I was trying to load-test the php-lumen framework with InfluxDB, but anything above 128 concurrent php-curl requests (in a loop) makes the database go mad. I don't think 128 concurrent requests is a big number; I've tried a similar load on other databases before and they seemed to work just fine.
Configuration:
Web Server: Nginx
Framework: Lumen
Server Language: PHP
OS: High Sierra
Processor: 2.5 GHz Intel Core i5
RAM: 12 GB 1600 MHz DDR3
Shell Output
Command used to start InfluxDB:
influxd -config /usr/local/etc/influxdb.conf
Output of ulimit:
ulimit -a
Any clue as to what's going wrong here? I feel I need to increase the open files limit for the OS, but if so, is that good practice?
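On macOS the default per-process soft limit is quite low (commonly 256 open files), so raising it for the shell that launches influxd is a reasonable first step. A minimal sketch for High Sierra (the values 65536 / 200000 are illustrative assumptions, not recommendations from this thread):

# Current soft limit for this shell (often 256 by default on macOS)
ulimit -n

# System-wide soft/hard maxfiles limits
launchctl limit maxfiles

# Raise them for the current boot session
sudo launchctl limit maxfiles 65536 200000

# Raise the shell's soft limit, then start influxd from that shell
ulimit -n 65536
influxd -config /usr/local/etc/influxdb.conf

Persisting this across reboots on macOS generally requires a launchd plist; on a Linux server you would instead set the limit in the service unit or /etc/security/limits.conf, as in the earlier sketch.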