Closed ocss884 closed 5 months ago
Hi @ocss884. Sounds great. Please feel free to open a PR. Although I do not have access to Claude yet. Here is also a related discussion: https://github.com/camel-ai/camel/pull/271.
@lightaime Hi, I opened a PR for the Anthropic LLM backend. However, some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log and it is empty:
env:
pythonLocation: /opt/hostedtoolcache/Python/3.8.18/x64
LD_LIBRARY_PATH: /opt/hostedtoolcache/Python/3.8.18/x64/lib
OPENAI_API_KEY:
I think this is because PRs do not have access to secrets. See link. Could you help add a dummy api_key to the Actions workflow to fix it? We don't actually need a valid API key.
Hey @lightaime,
If you're integrating via litellm, here's an easy way to test if the anthropic integration is working: https://docs.litellm.ai/docs/proxy_api#step-2-test-a-new-llm
Hi @lightaime @ocss884, I believe we can help with this issue. I'm the maintainer of LiteLLM (https://github.com/BerriAI/litellm); it lets you use any LLM as a drop-in replacement for gpt-3.5-turbo.
You can use LiteLLM in the following ways:
This calls the provider API directly
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"
messages = [{"content": "Hello, how are you?", "role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
This is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access it:
from litellm import completion
import os
## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2" # [OPTIONAL] replace with your cohere key
messages = [{ "content": "Hello, how are you?","role": "user"}]
# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)
# cohere call
response = completion(model="command-nightly", messages=messages)
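Since this thread is about adding Claude, here is a sketch of what the same call could look like for Anthropic through LiteLLM's unified interface (requires `pip install litellm` and an Anthropic key; the snippet is guarded so it is safe to run without either):

```python
import os

messages = [{"content": "Hello, how are you?", "role": "user"}]

# Only make the live call when an Anthropic key is available;
# this is a sketch, not a tested integration.
if os.environ.get("ANTHROPIC_API_KEY"):
    from litellm import completion
    # anthropic call -- same interface as the openai/cohere calls above
    response = completion(model="claude-2", messages=messages)
    print(response["choices"][0]["message"]["content"])
else:
    print("ANTHROPIC_API_KEY not set; skipping the live call")
```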
Please explain how you want to add Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude, right in the description of the ticket.
Hi @Obs01ete, the Claude series are LLMs from a company called Anthropic, and they support a 100,000-token context window. They could be another great model backend choice for role-playing agents. I have added more details to this issue.
Could you help check the OPENAI_API_KEY setup? Because of the risks inherent in automatically processing PRs, GitHub's standard pull_request workflow trigger by default withholds write permissions and secrets access to the target repository. I think no PR can pass some of the tests because OPENAI_API_KEY is missing from the environment.
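One way around this (a sketch, not tested against this repository's workflow; the fallback value is illustrative) is to export a placeholder key whenever the real secret is unavailable, since the tests only need the variable to be set:

```yaml
# Illustrative GitHub Actions fragment. Forked-PR runs see an empty
# secret, so fall back to a dummy value -- acceptable here because
# no real API call is made in these tests.
env:
  OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY || 'sk-dummy-key-for-ci' }}
```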
Required prerequisites
Motivation
The Claude series from Anthropic is one of the most popular LLM families and a serious competitor to the GPT models. Claude supports a 100,000-token context window and is still free for personal API usage.
Solution
Add Claude-2 and Claude-instant-1 as backend models
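As a rough sketch of what this could look like (the enum and helper names below are hypothetical, not CAMEL's actual API), the new models might extend the existing model-type enum and be routed to an Anthropic client:

```python
from enum import Enum

# Hypothetical sketch; these names are illustrative, not CAMEL's real API.
class ModelType(Enum):
    GPT_3_5_TURBO = "gpt-3.5-turbo"
    GPT_4 = "gpt-4"
    CLAUDE_2 = "claude-2"
    CLAUDE_INSTANT_1 = "claude-instant-1"

def is_anthropic_model(model: ModelType) -> bool:
    """Route claude-* models to an Anthropic backend, others elsewhere."""
    return model.value.startswith("claude")

print(is_anthropic_model(ModelType.CLAUDE_2))
print(is_anthropic_model(ModelType.GPT_3_5_TURBO))
```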
Alternatives
No response
Additional context
No response