Codium-ai / cover-agent

CodiumAI Cover-Agent: An AI-Powered Tool for Automated Test Generation and Code Coverage Enhancement! 💻🤖🧪🐞
https://www.codium.ai/
GNU Affero General Public License v3.0

cover-agent fails when using ollama/deepseek-v2 - IndexError: list index out of range #71

Closed · gklein closed this issue 2 months ago

gklein commented 4 months ago

cover-agent failed when using ollama/deepseek-v2 - IndexError: list index out of range


poetry run cover-agent \
  --source-file-path "templated_tests/python_fastapi/app.py" \
  --test-file-path "templated_tests/python_fastapi/test_app.py" \
  --code-coverage-report-path "templated_tests/python_fastapi/coverage.xml" \
  --test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
  --test-command-dir "templated_tests/python_fastapi" \
  --coverage-type "cobertura" \
  --desired-coverage 80 \
  --max-iterations 1000 \
  --model ollama/deepseek-v2 \
  --api-base http://localhost:11434
2024-06-02 08:20:04,551 - cover_agent.UnitTestGenerator - INFO - Running build/test command to generate coverage report: "pytest --cov=. --cov-report=xml --cov-report=term"
Streaming results from LLM model...

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.

Provider List: https://docs.litellm.ai/docs/providers


Error during streaming: Ollama Error - {'error': 'an unknown error was encountered while running the model '}

2024-06-02 08:20:06,548 - cover_agent.UnitTestGenerator - ERROR - Error during initial test suite analysis: list index out of range
Traceback (most recent call last):
  File "/Users/gklein/cover-agent/cover_agent/UnitTestGenerator.py", line 252, in initial_test_suite_analysis
    self.ai_caller.call_model(prompt=prompt_test_headers_indentation)
  File "/Users/gklein/cover-agent/cover_agent/AICaller.py", line 72, in call_model
    model_response = litellm.stream_chunk_builder(chunks, messages=messages)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/gklein/Library/Caches/pypoetry/virtualenvs/cover-agent-faactadJ-py3.11/lib/python3.11/site-packages/litellm/main.py", line 4273, in stream_chunk_builder
    if chunks[0]._hidden_params.get("created_at", None):
       ~~~~~~^^^
IndexError: list index out of range

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/Users/gklein/cover-agent/cover_agent/main.py", line 95, in main
    agent.run()
  File "/Users/gklein/cover-agent/cover_agent/CoverAgent.py", line 51, in run
    self.test_gen.initial_test_suite_analysis()
  File "/Users/gklein/cover-agent/cover_agent/UnitTestGenerator.py", line 294, in initial_test_suite_analysis
    raise "Error during initial test suite analysis"
TypeError: exceptions must derive from BaseException
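The traceback shows two separate defects. A minimal sketch (with hypothetical names, not the actual cover-agent code) of how both could be addressed: `litellm.stream_chunk_builder` is handed an empty chunk list after the provider errors out, and the failure path raises a plain string, which Python rejects with `TypeError: exceptions must derive from BaseException`.

```python
class TestAnalysisError(Exception):
    """Raised when the initial test suite analysis fails.

    Using a real Exception subclass (instead of `raise "some string"`)
    avoids the TypeError seen in the second traceback.
    """

def build_response(chunks):
    # Guard before indexing chunks[0] -- this is the check whose absence
    # produces the IndexError above when the LLM stream yields nothing.
    if not chunks:
        raise TestAnalysisError("LLM stream returned no chunks")
    return "".join(chunks)
```

With this guard, a failed stream surfaces as a descriptive exception that the caller can catch, rather than an `IndexError` from deep inside litellm.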
mrT23 commented 4 months ago

Are you sure that your deployment is working? Were you able to access the model without cover-agent? My guess is that the deployment is not working.

Also validate that LiteLLM supports deepseek-v2; maybe we need a newer version.
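As a quick sanity check outside cover-agent, the Ollama REST API can be queried directly. A minimal sketch, assuming the default endpoint at http://localhost:11434 (the network call itself is commented out so the snippet stands alone):

```python
import json

# Build a request body for Ollama's /api/generate endpoint; "stream": False
# asks for a single JSON response instead of a chunk stream.
def ollama_generate_payload(model, prompt):
    return json.dumps({"model": model, "prompt": prompt, "stream": False})

payload = ollama_generate_payload("deepseek-v2", "Say hello")

# With a running Ollama server, the request would be sent like this:
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=payload.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
```

If this request fails too, the problem is in the Ollama deployment rather than in cover-agent.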

gklein commented 4 months ago

> Are you sure that your deployment is working? Were you able to access the model without cover-agent? My guess is that the deployment is not working.

The model seems to be functional on my machine when I'm using it without cover agent:

ollama run deepseek-v2
>>> write a python function that calculates a fibonacci sequence
Here is a simple Python function to generate the Fibonacci sequence up to n terms. This implementation uses an iterative
approach, which should be efficient for most practical purposes.

def fibonacci(n):
    if n <= 0:
        return "Input must be positive integer"
    elif n == 1:
        return [0]
    elif n == 2:
        return [0, 1]
    else:
        sequence = [0, 1]
        while len(sequence) < n:
            next_value = sum(sequence[-2:])
            sequence.append(next_value)
        return sequence

You can use this function by calling it with the number of terms you want in the Fibonacci sequence as an argument. For
example, `fibonacci(10)` will generate the first 10 numbers in the Fibonacci sequence: [0, 1, 1, 2, 3, 5, 8, 13, 21, 34].

> Also validate that LiteLLM supports deepseek-v2; maybe we need a newer version.

In this case I'm using Ollama for inference, so I don't think any model-specific support is needed.
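For reference, LiteLLM dispatches on the provider prefix of the model string, so any `ollama/<name>` model goes to its generic Ollama backend. A hedged sketch of that routing (the actual completion call is commented out, since it needs a live server):

```python
# LiteLLM routes on the provider prefix, so "ollama/deepseek-v2" is sent
# to the Ollama backend regardless of which model Ollama happens to serve.
model = "ollama/deepseek-v2"
provider, _, model_name = model.partition("/")

# With a running server, the equivalent direct call would be roughly:
# import litellm
# resp = litellm.completion(
#     model=model,
#     api_base="http://localhost:11434",
#     messages=[{"role": "user", "content": "ping"}],
# )
```

This is why no per-model support should be required on the LiteLLM side for a new Ollama model.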

mrT23 commented 4 months ago

The error you sent means it failed to communicate with the model. I will try later on my machine.

Unrelated, but since you seem to be interested in non-GPT-4 models for code (:-)), you might find this relevant: https://pr-agent-docs.codium.ai/finetuning_benchmark/

gklein commented 4 months ago

> The error you sent means it failed to communicate with the model. I will try later on my machine.

I think you are right; it might be a bug in Ollama. I see this in the `ollama serve` output when running cover-agent with deepseek-v2:

time=2024-06-02T11:12:29.248+03:00 level=WARN source=server.go:448 msg="llama runner process no longer running" sys=6 string="signal: abort trap"

> Unrelated, but since you seem to be interested in non-GPT-4 models for code (:-)), you might find this relevant: https://pr-agent-docs.codium.ai/finetuning_benchmark/

Thanks for sharing! Is it possible to add more models to the leaderboard, e.g. DeepSeek V2 and AutoCoder?

EmbeddedDevops1 commented 2 months ago

@gklein Can we close out this issue?

gklein commented 2 months ago

Yes, it was a bug in Ollama.