I'm frequently seeing "too many open files" errors after kicking off indexing for the 3rd bucket on a single-node Couchbase cluster using "goleveldb" as the kvstore. Sample error:
2015/08/06 02:17:03 feed_dcp: OnError, name: phonehome_4038be6b2157233c: bucketName: phonehome, bucketUUID: , err: error: DataChange, err: feed_dcp: DataUpdate, name: phonehome_4038be6b2157233c, partition: 366, key: stats_5638727749599232, seq: 3112, err: open data/phonehome_4038be6b2157233c_d44c5f96.pindex/store/000018.ldb: too many open files
By default, the FD limit is set to 1024:
➜ cbft ulimit -n
1024
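One workaround is to raise the per-process limit (e.g. ulimit -n or limits.conf). As a rough illustration only (not cbft code), a Go process can also inspect and bump its own soft RLIMIT_NOFILE at startup; the 4096 target below is an arbitrary example value:

package main

import (
	"log"
	"syscall"
)

// Sketch: check the process's open-file limit and raise the soft limit
// toward the hard limit before opening many goleveldb store files.
func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		log.Fatalf("Getrlimit: %v", err)
	}
	log.Printf("open-file limit: soft=%d hard=%d", rl.Cur, rl.Max)

	want := uint64(4096) // hypothetical target, not a cbft setting
	if rl.Cur < want && want <= rl.Max {
		rl.Cur = want
		if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
			log.Fatalf("Setrlimit: %v", err)
		}
		log.Printf("raised soft limit to %d", rl.Cur)
	}
}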
For now, I've flipped maxPartitionsPerPIndex to 200 to mitigate this.
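For context, a back-of-envelope sketch of why a larger maxPartitionsPerPIndex helps, under the assumption that each index covers the standard 1024 vbuckets, is split into roughly ceil(1024 / maxPartitionsPerPIndex) pindexes, and each pindex is backed by its own goleveldb store directory holding several open .ldb files. The values below are illustrative, not measurements:

package main

import "fmt"

// Assumed model: fewer pindexes per index means fewer goleveldb stores
// and therefore fewer file descriptors held open by the node.
func pindexCount(numVBuckets, maxPartitionsPerPIndex int) int {
	return (numVBuckets + maxPartitionsPerPIndex - 1) / maxPartitionsPerPIndex
}

func main() {
	const numVBuckets = 1024 // standard Couchbase vbucket count
	for _, mpp := range []int{32, 100, 200} {
		fmt.Printf("maxPartitionsPerPIndex=%d -> %d pindexes per index\n",
			mpp, pindexCount(numVBuckets, mpp))
	}
}

With maxPartitionsPerPIndex=200 that works out to about 6 pindexes per index, which keeps the per-bucket FD footprint well under the 1024 limit; the exact per-pindex file count still depends on goleveldb's compaction/table settings.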