:mag: AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.
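A minimal sketch of the component-and-pipeline idea described above, assuming the Haystack 2.x Python API (`InMemoryDocumentStore`, `InMemoryBM25Retriever`, `PromptBuilder`, `OpenAIGenerator`); the document contents, template, and model name are illustrative, and import paths may differ between releases.

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Index a toy document so the retriever has something to find.
store = InMemoryDocumentStore()
store.write_documents([Document(content="Haystack pipelines connect components.")])

# Wire components into a small RAG-style pipeline.
pipe = Pipeline()
pipe.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipe.add_component("prompt", PromptBuilder(
    template="Context:\n{% for doc in documents %}{{ doc.content }}\n{% endfor %}"
             "Question: {{ query }}"))
pipe.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))  # needs OPENAI_API_KEY
pipe.connect("retriever.documents", "prompt.documents")
pipe.connect("prompt.prompt", "llm.prompt")

query = "What do Haystack pipelines do?"
result = pipe.run({"retriever": {"query": query}, "prompt": {"query": query}})
print(result["llm"]["replies"][0])
```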
Problem Statement
As an NLP developer and model deployer, I want more choices for model management and inference (e.g. using remotely or locally deployed models) so that I can improve scalability, flexibility, cost-effectiveness, and performance.
User Tasks
Transparently switch from invoking a model in the local runtime to invoking a remotely hosted one (see the sketch after this list)
Use multiple inference backends
Use remote models hosted by other providers (HuggingFace, OpenAI, etc.)
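A sketch of switching between a local and a remote backend without changing the calling code, assuming the Haystack 2.x generator components (`HuggingFaceLocalGenerator`, `OpenAIGenerator`); the `USE_REMOTE` flag, model names, and helper function are hypothetical and only illustrate the idea.

```python
import os

from haystack.components.generators import HuggingFaceLocalGenerator, OpenAIGenerator

def build_generator(use_remote: bool):
    """Return a generator component; callers rely only on run(prompt=...)."""
    if use_remote:
        # Remote backend hosted by a provider (requires OPENAI_API_KEY).
        return OpenAIGenerator(model="gpt-4o-mini")
    # Local backend: the model is loaded and run inside this process.
    return HuggingFaceLocalGenerator(model="google/flan-t5-base",
                                     task="text2text-generation")

generator = build_generator(use_remote=os.getenv("USE_REMOTE", "0") == "1")

# Local generators load model weights lazily; warm up when the method exists.
if hasattr(generator, "warm_up"):
    generator.warm_up()

result = generator.run(prompt="Summarize what an inference backend is.")
print(result["replies"][0])
```

Because both components expose the same `run(prompt=...)` interface and return `replies`, the surrounding application stays unchanged when the backend is swapped.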