Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com
MIT License

feat: support setting maxConcurrentChunks for Generic OpenAI embedder #2655

Closed: hdelossantos closed this PR 10 hours ago

hdelossantos commented 2 days ago

Pull Request Type

Relevant Issues

resolves #2654

What is in this change?

Adds an advanced settings collapsible menu to the Generic OpenAI embedder configuration, with an input field to optionally set the maximum number of concurrent chunks. If no value is provided, maxConcurrentChunks defaults to 500.
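A minimal sketch of what such a setting controls: chunks are grouped into batches of at most maxConcurrentChunks before being sent to the embedding endpoint. The function name and ENV variable below are illustrative assumptions, not the repo's actual identifiers.

```javascript
// Hypothetical helper: split text chunks into batches no larger than
// maxConcurrentChunks, so each embedding request stays within the
// configured concurrency limit. Defaults to 500, matching this PR.
function toBatches(chunks, maxConcurrentChunks = 500) {
  const batches = [];
  for (let i = 0; i < chunks.length; i += maxConcurrentChunks) {
    batches.push(chunks.slice(i, i + maxConcurrentChunks));
  }
  return batches;
}

// Usage: 1200 chunks with a limit of 500 yields batches of 500, 500, 200.
const batches = toBatches(Array(1200).fill("chunk"), 500);
console.log(batches.length); // 3
```

A smaller limit means more, smaller requests, which is useful for self-hosted OpenAI-compatible endpoints that reject large batch payloads.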

[Screenshot: advanced settings collapsible menu in the Generic OpenAI embedder configuration]

Additional Information

Developer Validations

timothycarambat commented 10 hours ago

Excellent work. I just made a minor update to turn maxEmbeddingChunks into a class getter, so the value from the ENV is always parsed to a number and is always something we can work with.
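The getter pattern described above can be sketched as follows. The class and ENV variable names are assumptions for illustration; the point is that the getter validates and parses the ENV value on every access, falling back to the 500 default when it is missing or invalid.

```javascript
// Hypothetical sketch of the class-getter pattern: parse the ENV value
// to a number and guarantee a usable positive integer.
class GenericOpenAiEmbedder {
  get maxConcurrentChunks() {
    const raw = process.env.EMBEDDING_MAX_CONCURRENT_CHUNKS; // assumed name
    const parsed = Number(raw);
    // Fall back to 500 when unset, non-numeric, or non-positive.
    if (!raw || Number.isNaN(parsed) || parsed <= 0) return 500;
    return Math.floor(parsed);
  }
}

const embedder = new GenericOpenAiEmbedder();
console.log(typeof embedder.maxConcurrentChunks); // "number"
```

Centralizing the parsing in a getter means callers never see a raw string or NaN, which is the "always something we can work with" guarantee.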

I also updated the field to the new UI style so it looks right.