Discovered while poking around the documentation -- the .configurable_alternatives() method may be useful to let users switch between different LLMs based on preference/configuration. It could replace the current approach in the codebase and might be cleaner. Code example from the weblangchain repo:
import os

# Imports as used in the weblangchain repo (langchain 0.0.x era):
from langchain.chat_models import ChatAnthropic, ChatOpenAI, ChatVertexAI
from langchain.schema.runnable import ConfigurableField

if has_google_creds:
    llm = ChatOpenAI(
        model="gpt-3.5-turbo-16k",
        # model="gpt-4",
        streaming=True,
        temperature=0.1,
    ).configurable_alternatives(
        # This gives this field an id.
        # When configuring the end runnable, we can then use this id to configure this field.
        ConfigurableField(id="llm"),
        default_key="openai",
        anthropic=ChatAnthropic(
            model="claude-2",
            max_tokens=16384,
            temperature=0.1,
            anthropic_api_key=os.environ.get("ANTHROPIC_API_KEY", "not_provided"),
        ),
        googlevertex=ChatVertexAI(
            model_name="chat-bison-32k",
            temperature=0.1,
            max_output_tokens=8192,
            stream=True,
        ),
    )
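To illustrate the mechanism without needing API keys or LangChain installed, here is a minimal self-contained sketch of the pattern: a default runnable plus named alternatives, selected at call time via a config dict keyed by a field id (here "llm"). The FakeLLM and ConfigurableAlternatives classes are hypothetical stand-ins for illustration only, not LangChain's actual implementation; in real LangChain you would pass the same config shape to the runnable returned by .configurable_alternatives().

```python
class FakeLLM:
    """Hypothetical stand-in for a chat model; just echoes its name."""

    def __init__(self, name):
        self.name = name

    def invoke(self, prompt):
        return f"[{self.name}] {prompt}"


class ConfigurableAlternatives:
    """Illustrative sketch of the selection logic: pick one of several
    named runnables based on a per-call config, falling back to a default."""

    def __init__(self, field_id, default_key, default, **alternatives):
        self.field_id = field_id
        self.default_key = default_key
        self.runnables = {default_key: default, **alternatives}

    def invoke(self, prompt, config=None):
        # Look up config["configurable"][field_id]; fall back to the default key.
        key = (config or {}).get("configurable", {}).get(
            self.field_id, self.default_key
        )
        return self.runnables[key].invoke(prompt)


llm = ConfigurableAlternatives(
    "llm",
    "openai",
    FakeLLM("gpt-3.5-turbo-16k"),
    anthropic=FakeLLM("claude-2"),
    googlevertex=FakeLLM("chat-bison-32k"),
)

print(llm.invoke("hi"))  # [gpt-3.5-turbo-16k] hi  (default key "openai")
print(llm.invoke("hi", config={"configurable": {"llm": "anthropic"}}))  # [claude-2] hi
```

The upshot is that model choice moves out of the chain-construction code and into per-request configuration, which is why it could be cleaner than the current approach.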