damies13 opened this issue 1 year ago
The limit on open files is 1024:
```
root@ProfilesData:~# ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 31051
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 31051
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
```
Comments from the code:

```python
def _getshardid(self, doc_id):
    # shard_id = doc_id.split("-")[0]
    # I have been contemplating how big to make the shard_id;
    # it's a compromise between having too many files or too-big files.
    # Initially I was going to make it the first 2 chars of the uuid,
    # but then I made it the first part (8 chars), which might make too many files?
    # Further thought: 16 x 16 (2 chars) = 256 files; this was the Windows limit at
    # one stage. Might it still be?
    # Found the answer: the FAT16 limit is 512 files in a folder, and a 3-char shard
    # (16 x 16 x 16 = 4096) would well exceed this, but would be fine on FAT32 or NTFS.
    # https://stackoverflow.com/questions/4944709/windows-limit-on-the-number-of-files-in-a-particular-folder#14407078
    shard_id = doc_id[0:2]
    return shard_id
```
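As a sanity check on that comment (a sketch only, not code from the repo, assuming doc_id is a standard hex UUID string as the comment implies), the 2-char shard gives at most 16 x 16 = 256 distinct shard IDs, and with one lock file per shard that is up to 512 open files:

```python
# Sketch: count how many distinct 2-char shard prefixes uuid4 strings can produce.
import uuid

def getshardid(doc_id):
    # same scheme as _getshardid above, without the class plumbing
    return doc_id[0:2]

shards = {getshardid(str(uuid.uuid4())) for _ in range(100_000)}
print(len(shards))        # approaches 256 (16 x 16 hex prefixes)
print(len(shards) * 2)    # shard files + lock files, worst case ~512
```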
We need 256 shard files and potentially as many lock files again, which is half of the 1024 OS limit.
We probably need to increase this limit somehow. Can Python do this, or does it need to be done at the OS level? (See the sketch at the end of this comment.)
Or is a lock file not being closed?
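For what it's worth, on Linux/macOS a Python process can raise its own soft open-files limit up to the hard limit using the stdlib `resource` module; raising the hard limit itself still has to be done at the OS level (e.g. `/etc/security/limits.conf` or systemd `LimitNOFILE=`). A hedged sketch, where the target of 4096 is just an example value:

```python
# Sketch: raise this process's soft RLIMIT_NOFILE (resource is a Unix-only stdlib module).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft}, hard={hard}")   # e.g. soft=1024, matching the ulimit output above

wanted = 4096                                # example value: room for 256 shards + 256 locks and more
new_soft = min(wanted, hard)                 # an unprivileged process cannot exceed the hard limit
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

print("after:", resource.getrlimit(resource.RLIMIT_NOFILE))
```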