Closed K2559 closed 4 months ago
Hi! Yes, parallelization is available. You will find the settings in your settings.yaml
file, where you can set the parallelization.num_threads
setting to define how many threads you want to run in parallel.
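For reference, a minimal sketch of what that setting could look like in settings.yaml. The exact surrounding keys (model name, API key variable) are placeholders, not taken from this thread:

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}   # placeholder environment variable
  model: gpt-4o                  # placeholder model name

parallelization:
  num_threads: 20   # number of concurrent requests during indexing
```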
@K2559 we do not currently support the use of multiple API endpoints, only running parallel threads against the same API endpoint.
I assume you are referring to a load-balanced setup similar to the diagrams here, except without the API Management component? This scenario has not been tested, but if you do have a load-balanced setup similar to what is depicted in those diagrams, I cannot think of any restriction in our codebase that would prevent the use of multiple OpenAI models.
hi @jgbradley1 ,
How can that be done, please? It would help accelerate the indexing process.
I changed num_threads in settings.yaml to 20, but the process still seems pretty slow.
Is there any way to speed up the indexing process?
Describe the issue
A single API or model can provide only around 100 tokens/sec at most, which is still too slow when indexing a lot of files. Is it possible to use multiple APIs for parallel indexing to speed up the processing?
Steps to reproduce
No response
GraphRAG Config Used
No response
Logs and screenshots
No response
Additional Information