Closed: ElonaZharri closed this issue 6 months ago
In addition, in the near future PR-Agent Pro will offer seamless support for Claude 3 alongside GPT-4.
I have llama2 running locally, and I have made the necessary adjustments in `.secrets.toml`, `configuration.toml`, and `__init__.py`, as specified here:
```python
# __init__.py
MAX_TOKENS = {
    "ollama/llama2": 4096,
    ...,
}
```
```toml
# configuration.toml
[config]
model = "ollama/llama2"
model_turbo = "ollama/llama2"
```
```toml
# .secrets.toml
[ollama]
api_base = "http://localhost:11434/"
```
And this is the output I keep getting:
```
2024-04-17 16:11:29.179 | ERROR | pr_agent.algo.utils:load_yaml:437 - Failed to parse AI prediction: mapping values are not allowed here
  in "<unicode string>", line 8, column 180:
     ... on in your `.pr_agent.toml` file:
                                         ^
2024-04-17 16:11:29.181 | INFO | pr_agent.algo.utils:try_fix_yaml:459 - Failed to parse AI prediction after adding |-
2024-04-17 16:11:29.241 | INFO | pr_agent.algo.utils:try_fix_yaml:488 - Successfully parsed AI prediction after removing 57 lines
2024-04-17 16:11:29.242 | ERROR | pr_agent.tools.pr_reviewer:run:150 - Failed to review PR: 'str' object has no attribute 'get'
```
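For context, the `mapping values are not allowed here` error in the log above is what YAML parsers raise when a colon appears inside an unquoted scalar, which the parser then misreads as a second, nested mapping key. A minimal reproduction of that failure mode (assumes PyYAML is installed; the document text is made up for illustration):

```python
import yaml

# An unquoted value that itself contains ": " is scanned as a second
# mapping key, which YAML forbids at that position.
doc = "suggestion: add the setting: model to your .pr_agent.toml file"
try:
    yaml.safe_load(doc)
    failed = False
except yaml.YAMLError:
    failed = True
```

Quoting the offending value (or emitting it as a `|-` block scalar, which is what `try_fix_yaml` attempts) avoids the error.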
As the error message says, remove the `...`; it's a placeholder, not a real configuration entry.

In addition, llama2 is a weak model for code, so your results probably won't be good.
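For reference, the entry with the placeholder line removed would look like this (a sketch showing only the llama2 entry; any other real model entries in the dictionary would of course stay):

```python
# __init__.py: the "..." placeholder removed, leaving only real entries
MAX_TOKENS = {
    "ollama/llama2": 4096,
}
```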
Thank you for your feedback. I did not have `...` in `__init__.py`.
Attached are a few screenshots of the setup:
Ran it again this morning:
I agree that llama2 is weak, and I was wondering whether I can use a custom LLM provider with their API key instead of OpenAI's.
Closing this issue, as it was discussed and solved in the Discord channel.
Is this feature available yet?