sunxin000 opened this issue 4 days ago
It looks like you're trying to run all servers on one machine. That would require about 1TB of memory for the index and another 1TB for the data. We ran 30 servers on 30 different machines, each one serving one chunk of the data. The code supports running everything in a distributed manner.
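The one-shard-per-machine layout described above can be sketched as follows. This is a hypothetical illustration, not the project's actual CLI: the host names, the `serve.py` entry point, and the `--shard`/`--num-shards`/`--port` flags are all assumptions.

```python
# Hypothetical sketch: assign one data/index shard per machine and print
# the launch command that would run on each host. Host names, the server
# entry point, and its flags are assumptions, not the project's real CLI.
NUM_SHARDS = 30
hosts = [f"node{i:02d}" for i in range(NUM_SHARDS)]  # one machine per shard

def launch_command(shard_id: int, host: str) -> str:
    # Each server loads only shard `shard_id` of the data and index,
    # so per-machine memory stays at roughly 1/NUM_SHARDS of the total.
    return (f"ssh {host} python serve.py "
            f"--shard {shard_id} --num-shards {NUM_SHARDS} --port 8080")

for shard_id, host in enumerate(hosts):
    print(launch_command(shard_id, host))
```

The point of the sharding is that no single process ever holds the full index or data set; each machine's footprint is bounded by its own chunk.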
Description:
I am running a script to start multiple server instances as shown below:
However, some of the processes are being killed by the OS. I suspect this is due to out-of-memory (OOM) conditions: each server instance loads its portion of the data and index into memory, and since both are quite large, the combined memory usage on one machine is very high.
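A back-of-the-envelope calculation using the figures quoted in the reply (~1 TB of index plus ~1 TB of data, split over 30 servers) shows why all shards on one machine trigger the OOM killer while one shard per machine does not. The numbers are rough estimates taken from that reply, not measurements:

```python
# Rough memory arithmetic based on the figures quoted in the reply:
# ~1 TB for the index plus ~1 TB for the data, split over 30 shards.
INDEX_TB = 1.0
DATA_TB = 1.0
NUM_SHARDS = 30

total_tb = INDEX_TB + DATA_TB                # ~2 TB if every shard is co-resident
per_shard_gb = total_tb * 1024 / NUM_SHARDS  # memory needed by a single server

print(f"all shards on one machine: ~{total_tb:.0f} TB")
print(f"one shard per machine:     ~{per_shard_gb:.0f} GB")
```

So a single machine would need roughly 2 TB of RAM, whereas each of 30 machines needs only on the order of 70 GB.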
Question:
How can I address this problem to prevent the processes from being killed due to high memory usage? Are there any strategies or optimizations I can apply to reduce the memory footprint of each server instance?