Remove the intermediate `get_llm_response` utility functions to simplify and shorten the code. This includes deleting the unused `parallel_` codepath and relegating the non-instructor codepath to a legacy `self.get_llm_response` method used only in `agent.learn()`.
Also reimplemented the check for valid API keys on LiteLLM runtime startup, which was previously broken.
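For reference, a minimal stdlib-only sketch of what a startup key check can look like (the provider names and env-var mapping below are illustrative, not the actual adala implementation; LiteLLM itself also ships a `litellm.validate_environment(model)` helper that reports missing keys):

```python
import os
from typing import Optional

# Hypothetical provider -> env var mapping; real key names come from provider docs.
REQUIRED_KEYS = {
    "openai": "OPENAI_API_KEY",
    "azure": "AZURE_API_KEY",
}

def check_api_key(provider: str, env: Optional[dict] = None) -> bool:
    """Return True if the env var holding the provider's API key is set and non-empty."""
    env = os.environ if env is None else env
    var = REQUIRED_KEYS.get(provider)
    return bool(var and env.get(var))
```

Running such a check in the runtime's `__init__` (and raising on failure) surfaces a missing key immediately instead of at the first inference call.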
Future directions and open questions:
Get rid of `LiteLLMInferenceSettings`? Maybe replace it with the internal LiteLLM type containing all the same settings, but with overridden defaults. It would also be nice to move inference settings (temperature, etc.) into a separate dict within the runtime for clarity, but that would break backwards compatibility, so no.
Bring back the `LLMErrorResponse` model? It isn't useful right now because it would immediately be coerced into a DataFrame row or a dict anyway, but the hope is to build toward an alternate inference path from the server that doesn't convert to and from DataFrames, so that pydantic models defined in the rest of the adala lib can be used directly in ResultHandlers.
Set up a response model for `agent.learn()` to fully sunset unconstrained generation in LiteLLM. Unless this would require converting all other runtimes to structured generation too...
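A hypothetical shape for such a response model, sketched here with stdlib dataclasses for self-containment (the real thing would presumably be a pydantic `BaseModel` passed as instructor's `response_model`; field names below are invented, not taken from adala):

```python
from dataclasses import dataclass, field

@dataclass
class LearnResponse:
    # Illustrative fields only -- the actual schema would depend on what
    # agent.learn() needs back from the model (e.g. revised instructions).
    reasoning: str
    new_instructions: str
    examples: list = field(default_factory=list)
```

With a schema like this in place, the legacy unconstrained `self.get_llm_response` path would have no remaining callers and could be deleted.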