Open · lyz-code opened this issue 1 year ago
Hey @lyz-code, thanks for reporting this. Are you overriding the WORKER_THREADS environment variable by any chance? It defaults to the CPU core count. Would you be able to set WORKER_THREADS=0 and see how that goes?
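For reference, a worker-count default like the one described (falling back to the CPU core count, with 0 meaning single-threaded) can be sketched in Python. The variable name matches this thread, but the parsing logic is an illustration, not ingest-file's actual code:

```python
import multiprocessing
import os


def worker_count() -> int:
    """Return the configured worker count, defaulting to the CPU core count.

    An unset or empty WORKER_THREADS falls back to the number of CPUs;
    a value of 0 is clamped to 1, mirroring the WORKER_THREADS=0
    "single-threaded" workaround discussed in this thread.
    (Illustrative sketch, not ingest-file's actual implementation.)
    """
    raw = os.environ.get("WORKER_THREADS")
    if not raw:
        return multiprocessing.cpu_count()
    return max(1, int(raw))
```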
Hey @stchris, nope, I didn't set WORKER_THREADS, so it defaulted to 4 (the number of CPUs, as you said).

I tried running the index with WORKER_THREADS=0, and it "solved the issue" in a way: since it only uses one CPU thread, there are no context switches at all xD.

My idea in opening this issue was to keep using all the CPUs in a way that doesn't produce so many context switches. I understand there may be more pressing issues, though, so I'm fine with closing this one if you want.
Thanks for confirming this, @lyz-code. I mentioned this workaround because setting WORKER_THREADS to 0 allows scaling the number of processes instead. Would that be an option for you?
I'm not against investigating this issue, but it's not a trivial one. In my experience, the behavior of ingest-file depends strongly on the kind of load you put it through (the number and type of ingested files), so I would need more detailed steps to reproduce this. And yes, I believe it would be quite low on our priority list as it stands.
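The "scale processes instead of threads" workaround above might look like this in a Docker Compose deployment. This is a sketch under assumptions: the `ingest-file` service name and the Compose setup are guesses about the deployment, not something stated in this thread, and the exact scaling keys depend on your Compose version:

```yaml
# docker-compose.yml excerpt (hypothetical; service name is an assumption)
services:
  ingest-file:
    environment:
      WORKER_THREADS: "0"   # each worker runs single-threaded
    deploy:
      replicas: 4           # scale the number of processes instead
```

Alternatively, `docker compose up --scale ingest-file=4` achieves the same effect without editing the file.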
We're ingesting some files and are getting an alert from our monitoring system about a high number of context switches from the ingestor processes.
I know this is a hard issue to deal with, but do you think the ingest process could be improved to reduce the number of context switches and thereby improve performance?
Thanks
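The context-switch figure that a monitoring system alerts on can also be read per process on Linux. A minimal sketch using Python's `resource` module (illustrative only, not part of ingest-file; the alert in this thread most likely comes from host-level metrics):

```python
import resource


def context_switches() -> tuple[int, int]:
    """Return (voluntary, involuntary) context switch counts for this process.

    ru_nvcsw counts voluntary switches (e.g. blocking on I/O);
    ru_nivcsw counts involuntary ones (preemption), which is the number
    that tends to grow when too many threads compete for too few cores.
    """
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return usage.ru_nvcsw, usage.ru_nivcsw
```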