camel-ai / camel

šŸ« CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
https://www.camel-ai.org
Apache License 2.0
5.39k stars 660 forks

[Feature Request] Add support to Anthropic LLM #283

Closed ocss884 closed 5 months ago

ocss884 commented 1 year ago

Required prerequisites

Motivation

The Claude series from Anthropic is among the most popular LLMs and a serious competitor to the GPT models. Claude supports a 100,000-token context window while remaining free for personal API usage.

Solution

Add Claude-2 and Claude-instant-1 as backend models
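As background for such a backend, the 2023-era Anthropic text completion API expected prompts formatted as alternating Human/Assistant turns rather than an OpenAI-style message list. A minimal sketch of the conversion (the helper name is illustrative, not from the actual PR):

```python
# Hypothetical helper (not the actual PR code): convert OpenAI-style chat
# messages into the Human/Assistant prompt format used by the 2023-era
# Anthropic completion API for claude-2 / claude-instant-1.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def messages_to_claude_prompt(messages: list[dict]) -> str:
    parts = []
    for m in messages:
        tag = HUMAN_PROMPT if m["role"] == "user" else AI_PROMPT
        parts.append(f"{tag} {m['content']}")
    parts.append(AI_PROMPT)  # Claude completes after the final Assistant tag
    return "".join(parts)

print(messages_to_claude_prompt([{"role": "user", "content": "Hello!"}]))
```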

Alternatives

No response

Additional context

No response

lightaime commented 1 year ago

Hi @ocss884. Sounds great. Please feel free to open a PR. Although I do not have access to Claude yet. Here is also a related discussion: https://github.com/camel-ai/camel/pull/271.

ocss884 commented 1 year ago

@lightaime Hi, I opened a PR for the Anthropic LLM backend. However, it looks like some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log, and the variable is empty:

 env:
   pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
  OPENAI_API_KEY: 

I think it is because PRs do not have access to secrets. See link. Could you help add a dummy api_key to the Actions workflow to fix it? We don't actually need a valid API key.
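A minimal sketch of what such a fallback could look like in the workflow file (the placeholder value is an assumption; the tests only need the variable to be non-empty):

```yaml
# Sketch: fork PRs cannot read repository secrets, so fall back to a
# placeholder key when the real secret is unavailable.
env:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY || 'sk-dummy-key-for-pr-ci' }}
```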

krrishdholakia commented 1 year ago

Hey @lightaime,

If you're integrating via litellm, here's an easy way to test if the anthropic integration is working: https://docs.litellm.ai/docs/proxy_api#step-2-test-a-new-llm

ishaan-jaff commented 1 year ago

Hi @lightaime @ocss884, I believe we can help with this issue. I'm the maintainer of LiteLLM https://github.com/BerriAI/litellm - we let you use any LLM as a drop-in replacement for gpt-3.5-turbo.

You can use LiteLLM in the following ways:

With your own API KEY:

This calls the provider API directly

from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
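Because LiteLLM normalizes every provider's reply into the OpenAI response format, downstream code that reads the completion stays the same regardless of backend. A sketch using a mocked response dict (the dict below is illustrative, not a real API reply):

```python
# Illustrative OpenAI-format response shape; LiteLLM mirrors this structure
# for every provider, so the extraction code is backend-agnostic.
mock_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "I'm doing well, thank you!"}}
    ],
    "model": "command-nightly",
}

def extract_reply(response: dict) -> str:
    """Pull the assistant text out of an OpenAI-format completion response."""
    return response["choices"][0]["message"]["content"]

print(extract_reply(mock_response))
```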

Using the LiteLLM Proxy with a LiteLLM Key

This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it.

from litellm import completion
import os

## set ENV variables 
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key

messages = [{ "content": "Hello, how are you?","role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
# cohere call
response = completion(model="command-nightly", messages=messages)

Obs01ete commented 12 months ago

Please explain how you want to add this Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude. Please do it right in the description of the ticket.

ocss884 commented 12 months ago

Please explain how you want to add this Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude. Please do it right in the description of the ticket.

Hi @Obs01ete, the Claude series are LLMs from a company called Anthropic, and they support a 100,000-token context window. They could be another great model backend choice for role-playing agents. I have added more details in this issue.

@lightaime Hi, I opened a PR for the Anthropic LLM backend. However, it looks like some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log, and the variable is empty:

 env:
   pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
  LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
  OPENAI_API_KEY: 

I think it is because PRs do not have access to secrets. See link. Could you help add a dummy api_key to the Actions workflow to fix it? We don't actually need a valid API key.

Could you help check the OPENAI_API_KEY setup? Due to the dangers inherent in automatically processing PRs, GitHub's standard pull_request workflow trigger by default withholds write permissions and secrets access from the target repository. I think this is why no PR can pass the tests that require OPENAI_API_KEY in the environment.
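One common workaround on the test side (a sketch, not the repo's actual test code) is to skip API-dependent tests whenever the key is absent, so fork PRs can still pass CI:

```python
import os
import pytest

# Skip tests that need a real key when it is unavailable (e.g. in fork PRs,
# where GitHub withholds repository secrets from the workflow run).
requires_openai = pytest.mark.skipif(
    not os.environ.get("OPENAI_API_KEY"),
    reason="OPENAI_API_KEY not set; skipping live-API test",
)

@requires_openai
def test_openai_chat():
    # Placeholder body; a real test would call the OpenAI API here.
    assert True
```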