camel-ai / camel

🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org
Apache License 2.0

[Feature Request] Onboarding agent doc. Add guidance on how to use the OpenAI compatible api. #988

Open LuciusMos opened 4 days ago

LuciusMos commented 4 days ago

Motivation

I am a new user of CAMEL. While reading the agent-building doc during onboarding, I found there is no guidance on how to use an OpenAI-compatible API (in my case, the DeepSeek API, which is quite popular for its strong performance and low price). Even worse, the current doc is a bit misleading. Here is what the doc says (doc link):

echo 'export OPENAI_API_KEY="your_api_key"' >> ~/.zshrc

# If you are using other proxy services like Azure
echo 'export OPENAI_API_BASE_URL="your_base_url"' >> ~/.zshrc # (Optional)
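For DeepSeek specifically (using the endpoint from its docs, quoted below), the exports would look like this; the key value is a placeholder:

```shell
# Placeholder key; DeepSeek's endpoint is OpenAI-compatible
export OPENAI_API_KEY="<DeepSeek API Key>"
export OPENAI_API_BASE_URL="https://api.deepseek.com"
```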

The DeepSeek API uses api_key="<DeepSeek API Key>" and base_url="https://api.deepseek.com", which is compatible with the OpenAI API, so I used the commands above to set the environment variables. However, it does not work. I think the reason is that the ChatAgent code in the doc does not pass a model, so it falls back to the default one, which only accepts the OpenAI API rather than other compatible ones (code link):

            else ModelFactory.create(
                model_platform=ModelPlatformType.OPENAI,
                model_type=ModelType.GPT_4O_MINI,
                model_config_dict=ChatGPTConfig().as_dict(),
            )
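In other words, the gist of that fallback, paraphrased in plain Python without camel imports (the string below is just a stand-in for the ModelFactory.create(...) call quoted above), is:

```python
# Paraphrase of the fallback in the quoted snippet (not the actual camel code):
# when no model is passed to ChatAgent, a default OpenAI GPT-4o-mini backend
# is created, so OPENAI_API_BASE_URL pointing at DeepSeek has no effect there.
def resolve_model(model=None):
    DEFAULT = "openai/gpt-4o-mini"  # stand-in for ModelFactory.create(...)
    return model if model is not None else DEFAULT

assert resolve_model() == "openai/gpt-4o-mini"            # default is used
assert resolve_model("deepseek-chat") == "deepseek-chat"  # explicit model wins
```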

So I used OpenAICompatibilityModel to initialize a customized model and passed it into the ChatAgent, and it worked, as in the code below (I stored the base URL and key in a .env file and used load_dotenv() to load them):

import os

from dotenv import load_dotenv
from camel.agents import ChatAgent
from camel.models import OpenAICompatibilityModel

load_dotenv()  # loads OPENAI_API_KEY / OPENAI_API_BASE_URL from .env

agent = ChatAgent(
    system_message=sys_msg,    # the system BaseMessage defined earlier
    message_window_size=10,    # [Optional] the length for chat memory
    model=OpenAICompatibilityModel(
        model_type="deepseek-chat",
        model_config_dict={"max_tokens": 4096},
        api_key=os.getenv("OPENAI_API_KEY"),
        url=os.getenv("OPENAI_API_BASE_URL"),
    )
)
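For completeness, here is what the .env pattern amounts to, sketched with os.environ directly so it runs without python-dotenv installed (load_dotenv() would populate the same variables from the .env file):

```python
import os

# Simulate what load_dotenv() would have read from a .env file
# (placeholder values, not real credentials):
os.environ["OPENAI_API_KEY"] = "<DeepSeek API Key>"
os.environ["OPENAI_API_BASE_URL"] = "https://api.deepseek.com"

api_key = os.getenv("OPENAI_API_KEY")
base_url = os.getenv("OPENAI_API_BASE_URL")
print(api_key, base_url)  # both values are now available to the constructor
```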

I think many users will face a similarly confusing situation. So we should update the onboarding doc and give more detailed guidance on how to use OpenAI-compatible APIs.

Solution

  1. Add more API guidance to the API Setup doc, covering APIs besides OpenAI, including OpenAI-compatible ones. For the local open-source model part, we could also add vLLM, which I found elsewhere in the doc. Alternatively, we can link to the Model doc and explain the details there.
  2. Separate the API Setup part. So the "Get Started" would be like:
    - Get Started
    - Installation
        - ...
    - Model Setup
        - API
        - Local Model

Alternatives

No response

Additional context

No response

Wendong-Fan commented 3 days ago

Thanks @LuciusMos! The 2nd solution you proposed looks better to me; we can give the Installation and Setup part a better structure. I also think we can support environment-variable passing for OpenAICompatibilityModel, e.g. OPENAI_COMPATIBILITY_API_KEY and OPENAI_COMPATIBILITY_API_BASE_URL, and also mention this in the model doc. Could you help with this?
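A minimal sketch of that environment-variable fallback. The class name mirrors the snippet earlier in the thread but this is a toy stand-in, not the real camel implementation; the variable names are the proposed ones with the "COMPATIBILIY" typo corrected, and the second is assumed to be the base URL since the comment lists the same name twice:

```python
import os

class OpenAICompatibilityModelSketch:
    """Toy stand-in: explicit arguments fall back to proposed env vars."""

    def __init__(self, model_type, api_key=None, url=None):
        self.model_type = model_type
        # Explicit arguments win; otherwise read the proposed env vars.
        self.api_key = api_key or os.environ.get("OPENAI_COMPATIBILITY_API_KEY")
        self.url = url or os.environ.get("OPENAI_COMPATIBILITY_API_BASE_URL")

# Demo values (placeholders, not real credentials):
os.environ["OPENAI_COMPATIBILITY_API_KEY"] = "sk-demo"
os.environ["OPENAI_COMPATIBILITY_API_BASE_URL"] = "https://api.deepseek.com"

m = OpenAICompatibilityModelSketch("deepseek-chat")
print(m.api_key, m.url)  # values picked up from the environment
```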

LuciusMos commented 3 days ago

> Thanks @LuciusMos! The 2nd solution you proposed looks better to me; we can give the Installation and Setup part a better structure. I also think we can support environment-variable passing for OpenAICompatibilityModel, e.g. OPENAI_COMPATIBILITY_API_KEY and OPENAI_COMPATIBILITY_API_BASE_URL, and also mention this in the model doc. Could you help with this?

@Wendong-Fan Thanks for the feedback! Of course, I'll work on this in a new PR.