Codium-ai / pr-agent

🚀CodiumAI PR-Agent: An AI-Powered 🤖 Tool for Automated Pull Request Analysis, Feedback, Suggestions and More! 💻🔍
Apache License 2.0

Can we use pr-agent with custom llm provider? #868

Closed ElonaZharri closed 6 months ago

ElonaZharri commented 6 months ago

Is this feature available yet?

mrT23 commented 6 months ago

yes

https://pr-agent-docs.codium.ai/usage-guide/additional_configurations/#changing-a-model

mrT23 commented 6 months ago

In addition, in the near future PR-Agent Pro will offer seamless support for Claude 3 alongside GPT-4

ElonaZharri commented 6 months ago

I have llama2 running locally, and I have made the necessary adjustments in .secrets.toml, configuration.toml, and __init__.py, as specified here:

[__init__.py]
MAX_TOKENS={
    "ollama/llama2": 4096,
    ...,
}

[config] # in configuration.toml
model = "ollama/llama2"
model_turbo = "ollama/llama2"

[ollama] # in .secrets.toml
api_base = "http://localhost:11434/"

And this is the output I keep getting:

2024-04-17 16:11:29.179 | ERROR    | pr_agent.algo.utils:load_yaml:437 - Failed to parse AI prediction: mapping values are not allowed here
  in "<unicode string>", line 8, column 180:
     ... on in your `.pr_agent.toml` file:
                                         ^
2024-04-17 16:11:29.181 | INFO     | pr_agent.algo.utils:try_fix_yaml:459 - Failed to parse AI prediction after adding |-

2024-04-17 16:11:29.241 | INFO     | pr_agent.algo.utils:try_fix_yaml:488 - Successfully parsed AI prediction after removing 57 lines
2024-04-17 16:11:29.242 | ERROR    | pr_agent.tools.pr_reviewer:run:150 - Failed to review PR: 'str' object has no attribute 'get'
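For context, the log above describes two things: a recovery strategy (drop trailing lines until the text parses, "removing 57 lines") followed by a type error, which means the recovered value was a plain string rather than a mapping. A minimal sketch of that strategy, using hypothetical helper names (not pr-agent's actual functions) and `json.loads` standing in for the YAML parser so the example stays self-contained:

```python
import json
from typing import Any, Callable

def parse_with_line_trimming(text: str, parse: Callable[[str], Any]) -> Any:
    # Try to parse the full text; on failure, drop trailing lines one at a
    # time until some prefix parses (the recovery the log reports).
    lines = text.splitlines()
    for end in range(len(lines), 0, -1):
        try:
            return parse("\n".join(lines[:end]))
        except Exception:
            continue
    raise ValueError("no prefix of the text could be parsed")

def as_mapping(value: Any) -> dict:
    # Guard against the follow-up failure: calling .get on a recovered
    # value that turned out to be a plain string, not a mapping.
    if not isinstance(value, dict):
        raise TypeError(f"expected a mapping, got {type(value).__name__}")
    return value
```
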
mrT23 commented 6 months ago

as the error message says, remove the '...'; it's a placeholder, not a real configuration

in addition, llama2 is a weak model for code, so your results probably won't be good

[image attachment]
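For what it's worth, a literal `...` left inside a Python dict display does not even compile, which is why the placeholder from the docs has to be replaced with real entries (or deleted). A quick check:

```python
# "..." is valid Python only as a bare Ellipsis expression; inside a dict
# display every entry needs a key: value pair, so the placeholder does
# not compile.
try:
    eval('{"ollama/llama2": 4096, ...}')
    placeholder_compiles = True
except SyntaxError:
    placeholder_compiles = False

print(placeholder_compiles)  # → False
```
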

ElonaZharri commented 6 months ago

Thank you for your feedback. I did not have '...' in __init__.py. Attached are a few screenshots of the setup:

[Screenshot: Screen Shot 2024-04-18 at 9 23 59 AM] [Screenshot: Screen Shot 2024-04-18 at 9 24 12 AM]

Run again this morning:

[Screenshot: Screen Shot 2024-04-18 at 9 34 08 AM]

I agree that llama2 is weak, and I was wondering if I can use a custom LLM provider and their API key instead of OpenAI's.
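If the custom provider exposes an OpenAI-compatible endpoint, the same pattern as the [ollama] block above might apply. The fragment below is a hypothetical sketch only: the section names, key names, model prefix, and URL are all assumptions patterned on the snippets in this thread, so the exact schema should be taken from the "changing a model" docs linked earlier:

```toml
# Hypothetical sketch; names and values below are assumptions, not
# pr-agent's confirmed schema.

[config]  # in configuration.toml
model = "openai/my-custom-model"  # provider/model name (assumed)

[openai]  # in .secrets.toml
key = "sk-..."                                # the provider's API key (assumed)
api_base = "https://my-provider.example/v1"   # OpenAI-compatible endpoint (assumed)
```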

barnett-yuxiang commented 6 months ago

https://pr-agent-docs.codium.ai/usage-guide/additional_configurations/#changing-a-model

mrT23 commented 6 months ago

Closing this issue, as it was discussed and solved in the Discord channel