mathieuisabel opened 4 months ago

How can RouteLLM be used with LangChain?
Hi! Do you have a specific use case in mind? We don't currently have a LangChain integration, but I assume RouteLLM could be used in place of model calls: instead of just using GPT-4, you would have a router that routes between two models and returns the response.
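For reference, here is a minimal sketch of that pattern using RouteLLM's `Controller`; the router name, the cost threshold embedded in the model string, and the model choices are illustrative, so check the RouteLLM README for the exact options:

```python
from routellm.controller import Controller

# The Controller exposes an OpenAI-compatible chat.completions interface.
# Each request is sent to the strong or weak model depending on the router's
# difficulty estimate and the cost threshold encoded in the model name.
client = Controller(
    routers=["mf"],  # "mf" = matrix-factorization router
    strong_model="gpt-4-1106-preview",
    weak_model="mixtral-8x7b-instruct",  # placeholder weak model
)

response = client.chat.completions.create(
    model="router-mf-0.11593",  # router name + example cost threshold
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```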
I was thinking of it more as a drop-in replacement for the models, i.e. right now you can set `llm` to either ChatOpenAI or ChatAnthropic without having to change the definition of the chain itself:
```python
llm = ChatOpenAI(
    model_name=model_name,
    temperature=0.0,
    openai_api_key=os.getenv('OpenAI__ApiKey'),
    max_retries=3,
    request_timeout=240,
)
```
or
```python
llm = ChatAnthropic(
    model_name=model_name,
    temperature=0.0,
    api_key=os.getenv('Anthropic__ApiKey'),
    max_retries=3,
    timeout=10,
)
```
and then:
```python
chain = prompt | llm.bind_tools([structure_segment_function])
response = chain.invoke(chain_payload)
```
Yes exactly, a drop-in replacement for models is what I had in mind. We'd be happy to accept any contributions here!
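Until such an integration lands, one possible workaround is RouteLLM's OpenAI-compatible server: start it with something like `python -m routellm.openai_server --routers mf`, then point `ChatOpenAI` at it so the chain definition stays untouched. A sketch, assuming the server's default local port and using an illustrative router/threshold string:

```python
import os
from langchain_openai import ChatOpenAI

# Route through a locally running RouteLLM OpenAI-compatible server instead
# of calling OpenAI directly; the "model" names a router and cost threshold
# rather than a single model.
llm = ChatOpenAI(
    model_name="router-mf-0.11593",              # illustrative router string
    openai_api_base="http://localhost:6060/v1",  # assumed default server URL
    openai_api_key=os.getenv('OpenAI__ApiKey'),
    temperature=0.0,
    max_retries=3,
    request_timeout=240,
)

# The rest of the chain from above is unchanged:
# chain = prompt | llm.bind_tools([structure_segment_function])
# response = chain.invoke(chain_payload)
```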