nomic-ai / gpt4all

GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
https://nomic.ai/gpt4all
MIT License

Issue: Performance improvement opportunity in document processing #1248

Open MSZ-MGS opened 1 year ago

MSZ-MGS commented 1 year ago

Issue you'd like to raise.

Whenever I change the advanced settings of LocalDocs, e.g. the number of document snippets per prompt or the snippet size, GPT4All freezes for several minutes. I think GPT4All is reprocessing all the documents again. When I checked Task Manager, I noticed that the processing load falls on only one of the CPU cores, as shown below: [screenshot of Task Manager CPU usage]

Suggestion:

If possible, multi-thread the load across the available cores.

owenpmckenna commented 1 year ago

I think this is already possible using the n_threads parameter when initializing the GPT4All model. Usage and tuning are shown in this issue (they are technically using GPT4All through LangChain, but it works the same when used directly; see line 70 there). I'll admit I'm not familiar with LocalDocs, but you should find an n_threads or similar parameter there.
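
As a rough sketch of the n_threads suggestion, assuming the GPT4All Python bindings (the model filename, the environment-variable guard, and the `pick_thread_count` helper here are illustrative, not part of this thread; check the parameters of the bindings version you have installed):

```python
# Sketch: derive a thread count from the available cores and pass it
# to the GPT4All constructor via n_threads, instead of letting the
# work run on a single core.
import os

def pick_thread_count() -> int:
    # Use all detectable cores, falling back to 1 if undetectable.
    return os.cpu_count() or 1

n_threads = pick_thread_count()
print(f"Using {n_threads} threads")

# Constructing the model needs the gpt4all package and a downloaded
# model file, so it is guarded here; the model name is only an example.
if os.environ.get("RUN_GPT4ALL_DEMO"):
    from gpt4all import GPT4All
    model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", n_threads=n_threads)
    with model.chat_session():
        print(model.generate("Hello", max_tokens=16))
```

Whether LocalDocs' embedding/indexing pass honors the same setting is a separate question; this only shows where the knob lives for text generation.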