Open vkameswaran opened 7 months ago
To implement function calling support for models like Cohere, Anthropic, and Fireworks, you can follow these steps, inspired by the existing codebase structure and the handling of tools and datasources in agents:
**Extend the LangchainAgent Class:** Extend the `LangchainAgent` class in `/libs/superagent/app/agents/langchain.py` to include a method for function calling. This method should be capable of handling the specific API calls or SDK methods required by models that support function calling, such as Cohere, Anthropic, and Fireworks.

**Define Function Calling Specifications:** Add a method to the `LangchainAgent` class that defines how function calls are made, taking into account the different specifications and capabilities of each model. For example, the method signature could look like this:

```python
async def call_function(self, function_name: str, **kwargs):
    # Implementation for calling a function on the LLM
    ...
```

**Integrate with LLM Providers:** In the `_get_llm` method of the `LangchainAgent` class, add support for new LLM providers that support function calling. This might involve integrating with their SDKs or APIs and handling authentication and request formatting according to each provider's requirements.

**Utilize Existing Tool Integration Logic:** Follow the pattern described in the `chat_with_tools.mdx` file: create and attach tools (in this case, function calling capabilities) to an agent, then invoke those tools based on user input or agent logic.

**Update Agent Configuration:** Ensure the agent configuration (the `AgentConfig` model) supports specifying whether a model supports function calling, along with any related configuration such as function names, parameters, and provider-specific options.

**Documentation and Examples:** Update `/fern/mdx/concepts.mdx` and `/fern/mdx/sdk/chat_with_tools.mdx` to include examples and explanations of how to use function calling with supported models, so users understand how to leverage this feature in their agents.

**Testing and Validation:** Thoroughly test the integration with each model that supports function calling to ensure compatibility and performance.
By following these steps and utilizing the existing structure for tools and datasources, you can effectively integrate function calling support for models that offer this capability, enhancing the flexibility and power of the agents.
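The `call_function` method described above could be sketched as follows. This is a minimal illustration, not the actual Superagent implementation: the provider set, class shape, and return value are all assumptions.

```python
import asyncio

# Assumed set of providers with function calling support (illustrative only).
FUNCTION_CALLING_PROVIDERS = {"OPENAI", "ANTHROPIC", "COHERE", "FIREWORKS"}

class LangchainAgentSketch:
    """Simplified stand-in for the real LangchainAgent class."""

    def __init__(self, provider: str):
        self.provider = provider.upper()

    def supports_function_calling(self) -> bool:
        return self.provider in FUNCTION_CALLING_PROVIDERS

    async def call_function(self, function_name: str, **kwargs):
        if not self.supports_function_calling():
            raise ValueError(f"{self.provider} does not support function calling")
        # A real implementation would dispatch to the provider's SDK here.
        return {"function": function_name, "arguments": kwargs}

agent = LangchainAgentSketch("cohere")
result = asyncio.run(agent.call_function("get_weather", city="Oslo"))
```

The capability check keeps the method safe to call for any configured provider, failing fast for models without function calling support.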
To integrate function calling support for models like LiteLLM, Cohere, Anthropic, and Fireworks, follow these steps:
**Extend LLM Mapping:** Update `libs/superagent/app/utils/llm.py` to include mappings for the new models, ensuring that each model's specific API requirements and capabilities are accounted for.
**Modify LangchainAgent:** In `libs/superagent/app/agents/langchain.py`, adapt the `_get_llm` method to instantiate the correct LLM client based on the provider. This may involve creating new client classes, similar to `ChatOpenAI` and `AzureChatOpenAI`, for each new model, handling their unique function calling specifications.
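A hedged sketch of what the provider dispatch in a `_get_llm`-style method could look like. The client class names for the new providers and the config-dict shape are assumptions; real code would instantiate SDK client objects rather than returning dicts.

```python
def get_llm_client(provider: str, model: str, api_key: str) -> dict:
    # Map each provider to a factory. A real implementation would return
    # instantiated client objects (e.g. ChatOpenAI) rather than plain dicts.
    factories = {
        "OPENAI": lambda: {"client": "ChatOpenAI"},
        "AZURE_OPENAI": lambda: {"client": "AzureChatOpenAI"},
        "ANTHROPIC": lambda: {"client": "ChatAnthropic"},   # assumed class name
        "COHERE": lambda: {"client": "ChatCohere"},         # assumed class name
        "FIREWORKS": lambda: {"client": "ChatFireworks"},   # assumed class name
    }
    factory = factories.get(provider.upper())
    if factory is None:
        raise ValueError(f"Unsupported LLM provider: {provider}")
    config = factory()
    config.update({"model": model, "api_key": api_key})
    return config
```

Keeping the dispatch in one table makes adding a provider a one-line change, with unsupported providers failing loudly instead of silently falling back to a default client.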
**Tool Integration:** Ensure that the tool creation and invocation processes in `/fern/mdx/sdk/local_tools.mdx` and `libs/superagent/app/agents/langchain.py` can accommodate the function calling syntax and parameters required by the new models. This might include extending the `create_tool` function and the tool dispatch mechanism to handle new types of function calls.
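One way to extend tool creation is to emit tool definitions in the OpenAI-style JSON schema that the LiteLLM spec reuses across providers. A sketch, where the helper name `to_tool_schema` is made up for illustration:

```python
def to_tool_schema(name: str, description: str, parameters: dict) -> dict:
    # Render a tool definition in the OpenAI-style "tools" format that the
    # LiteLLM spec passes through to each provider.
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": parameters,
                # Simplification: treat every declared parameter as required.
                "required": list(parameters),
            },
        },
    }

weather_tool = to_tool_schema(
    "get_weather",
    "Look up current weather for a city",
    {"city": {"type": "string"}},
)
```

Because every provider behind the LiteLLM spec consumes this one format, `create_tool` would only need to produce it once rather than per provider.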
**Update Agent Configuration:** Allow agent configurations to specify which LLM model to use, including those that support function calling. This involves updating the agent creation scripts and possibly the agent configuration UI to allow users to select and configure function-calling models.
**Documentation and Examples:** Update the documentation in `/fern/mdx/sdk/local_tools.mdx` to include examples and guidance on using the new function calling models. Provide clear examples of how to configure agents to use these models and how to structure function calls.
These steps require a deep understanding of both the Superagent framework and the specific capabilities and requirements of the models being integrated. It's crucial to thoroughly test the integration with each new model to ensure compatibility and performance.
Referenced files: `greptileai/superagent/fern/mdx/sdk/local_tools.mdx`, `greptileai/superagent/libs/superagent/app/agents/langchain.py`, `greptileai/superagent/libs/superagent/app/utils`
To implement function calling support for models like Cohere, Anthropic, and Fireworks using the LiteLLM function calling spec, you'll need to make several updates across the codebase. Here's a concise implementation plan:
**Update the LLMProvider Enum:** Modify the `LLMProvider` enum in the migration scripts to ensure it supports all required models. For Anthropic, this has been done in the `20240306005619_add_anthropic` migration. Similar updates may be needed for Cohere and Fireworks.
**Extend the Agent API:** In `/libs/superagent/app/api/agents.py`, enhance the `invoke` method to handle function calls. This involves parsing the request body for function call specifications and adjusting the invocation logic to accommodate them.
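Parsing function call specifications out of the request body might look roughly like this; the `functions` field name and spec shape are assumptions about the request contract, not the actual API:

```python
from dataclasses import dataclass, field

@dataclass
class FunctionSpec:
    """Hypothetical shape of one function call specification in the request body."""
    name: str
    parameters: dict = field(default_factory=dict)

def parse_function_specs(body: dict) -> list[FunctionSpec]:
    # Tolerate bodies without a "functions" key: plain chat invocations
    # should keep working unchanged.
    return [
        FunctionSpec(name=f["name"], parameters=f.get("parameters", {}))
        for f in body.get("functions", [])
    ]
```

Making the field optional keeps the change backward compatible: existing clients that never send function specs hit the empty-list path.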
**Modify the Assistant Classes:** Within the same file, update the `Assistant` and `OpenAIAssistantSdk` classes (or create equivalent classes for other models) to support function calling. This includes adding methods that specifically handle the function call logic.
**Update Callback Handlers:** In `/libs/superagent/app/utils/callbacks.py`, ensure that the callback handlers can process responses from function calls. This might involve adding new methods, or modifying existing ones in `CustomAsyncIteratorCallbackHandler` and `CostCalcAsyncHandler`, to handle the specifics of function call responses.
**Documentation:** Update the SDK documentation in `/fern/mdx/sdk` to include examples and guidance on making function calls using the updated API.
This plan targets the unique aspects of adding function calling support, leveraging the existing structure of the SuperAgent codebase.
Referenced files: `greptileai/superagent/fern/mdx/sdk`, `greptileai/superagent/libs/superagent/app/api/agents.py`, `greptileai/superagent/libs/superagent/prisma/migrations/20240306005619_add_anthropic`, `greptileai/superagent/libs/superagent/app/utils/callbacks.py`
To add function calling support for models that support it, follow these steps:
- Update the `LLMProvider` enum in `prisma/schema.prisma` to ensure all relevant models (Cohere, Anthropic, Fireworks) are included. Refer to migrations like `/libs/superagent/prisma/migrations/20240420075553_add_cohere` for examples of adding new providers.
- In `app/api/agents.py`, extend the `invoke` function to handle function calling based on the LLM model. Use the model's capabilities to determine if function calling is supported.
- Update the `LangchainAgent` class in `/libs/superagent/app/agents/langchain.py` to include logic for function calling, leveraging the LiteLLM function calling spec.
- Ensure the necessary migrations (e.g., `20240420075553_add_cohere`) are applied to support the new models.

Referenced files: `/libs/superagent/app/api/agents.py`, `/libs/superagent/app/agents/langchain.py`, `/libs/superagent/prisma/migrations/20240306005619_add_anthropic`, `/libs/superagent/prisma/migrations/20240420075553_add_cohere`, `/libs/superagent/prisma/migrations/20240418181431_add_mistral`, `/libs/superagent/prisma/migrations/20240418183001_add_groq`, `/libs/superagent/prisma/migrations/20240412075016_add_aws_bedrock`, `/libs/superagent/prisma/migrations/20240213050512_llm_agent`, `/libs/superagent/prisma/migrations/20240201161130_add_gpt_4_turbo_preview`, `/libs/superagent/prisma/migrations/20231106194639_gpt_4_1106_preview`
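On the Python side, the Prisma `LLMProvider` enum is typically mirrored by an application-level enum plus a capability check that `invoke` can consult. A sketch; the exact value set and the capability flag are assumptions, not the actual schema:

```python
from enum import Enum

class LLMProvider(str, Enum):
    # Mirrors the Prisma LLMProvider enum; values here are assumed from the
    # migration names (add_anthropic, add_cohere, add_mistral, add_groq).
    OPENAI = "OPENAI"
    ANTHROPIC = "ANTHROPIC"
    COHERE = "COHERE"
    FIREWORKS = "FIREWORKS"
    MISTRAL = "MISTRAL"
    GROQ = "GROQ"

# Hypothetical capability flag the invoke function could consult before
# routing a request down the function calling path.
SUPPORTS_FUNCTION_CALLING = {
    LLMProvider.OPENAI,
    LLMProvider.ANTHROPIC,
    LLMProvider.COHERE,
    LLMProvider.FIREWORKS,
}

def supports_function_calling(provider: LLMProvider) -> bool:
    return provider in SUPPORTS_FUNCTION_CALLING
```

Deriving the flag from the provider keeps the check in one place, so `invoke` does not need per-model conditionals scattered through its body.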
To implement function calling support for models like Cohere, Anthropic, Fireworks, etc., using the LiteLLM function calling spec, follow these steps:
**Update the Database Schema:** Ensure the database schema can accommodate function calling capabilities. This might involve adding new fields to existing tables or creating new tables. Refer to the migration scripts in `/libs/superagent/prisma/migrations/` for guidance on how to add new values to enums or alter tables.
**Modify the LangchainAgent Class:** In `/libs/superagent/app/agents/langchain.py`, extend the `LangchainAgent` class to include a method for function calling. This method should determine whether the selected model supports function calling and, if so, use the LiteLLM function calling spec.
**Integrate LiteLLM Spec:** Implement the LiteLLM function calling specification within the new method. This involves constructing the function call request according to the LiteLLM spec and handling the response.
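Under the LiteLLM spec, a function call request is the OpenAI-style `tools`/`tool_choice` payload, which `litellm.completion` accepts for any supported provider. The sketch below only constructs the payload so it runs offline; the model names are examples:

```python
def build_function_call_request(model: str, user_input: str, tools: list) -> dict:
    # Assemble the keyword arguments that litellm.completion(**request)
    # would receive; no network call is made here.
    return {
        "model": model,                 # e.g. "anthropic/claude-3-opus-20240229"
        "messages": [{"role": "user", "content": user_input}],
        "tools": tools,                 # OpenAI-style tool definitions
        "tool_choice": "auto",          # let the model decide when to call
    }

request = build_function_call_request(
    "cohere/command-r",
    "What is the weather in Oslo?",
    [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
            },
        },
    }],
)
```

Because LiteLLM normalizes each provider's native function calling API to this one payload, the same request shape works whether the model string points at Cohere, Anthropic, or Fireworks.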
**Update LLM Provider Enum:** If necessary, update the `LLMProvider` enum to include any new providers that support function calling but are not yet listed. This can be done by adding migration scripts similar to those found in `/libs/superagent/prisma/migrations/`.
**Test Function Calling:** Ensure thorough testing with each LLM provider that supports function calling to verify that the integration works as expected.
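Such tests could take the shape of a smoke test looped over providers, with the provider round trip stubbed out so only the structure is shown; everything here (provider list, helper, response fields) is illustrative:

```python
import asyncio

# Providers under test (the list is an assumption).
PROVIDERS = ["openai", "anthropic", "cohere", "fireworks"]

async def fake_invoke(provider: str, tool_name: str) -> dict:
    # Stand-in for a real agent invocation; an actual test would hit the
    # provider (or a recorded fixture) and inspect the returned tool call.
    return {"provider": provider, "tool_call": tool_name}

def test_function_calling_smoke():
    for provider in PROVIDERS:
        result = asyncio.run(fake_invoke(provider, "get_weather"))
        assert result["tool_call"] == "get_weather", provider

test_function_calling_smoke()
```

Looping one assertion over every provider makes it obvious when a single integration regresses while the rest still pass.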
This approach leverages the existing infrastructure while introducing the necessary enhancements to support function calling across different models.
Referenced files: `greptileai/superagent/libs/superagent/app/agents/langchain.py`, `greptileai/superagent/libs/superagent/prisma/migrations/20240306005619_add_anthropic`, `greptileai/superagent/libs/superagent/prisma/migrations/20240420075553_add_cohere`, `greptileai/superagent/libs/superagent/prisma/migrations/20240418181431_add_mistral`, `greptileai/superagent/libs/superagent/prisma/migrations/20240418183001_add_groq`, `greptileai/superagent/libs/superagent/prisma/migrations/20240412075016_add_aws_bedrock`, `greptileai/superagent/libs/superagent/prisma/migrations/20240311200421_tavily_tool`, `greptileai/superagent/libs/superagent/prisma/migrations/20240213050512_llm_agent`, `greptileai/superagent/libs/superagent/prisma/migrations/20240213051312_add_perplexity`, `greptileai/superagent/libs/superagent/prisma/migrations/20240213203005_add_togetherai`
🔖 Feature description
I want us to use the LiteLLM function calling spec instead of our hybrid GPT approach for models that support function calling: Cohere, Anthropic, Fireworks, etc.