Closed: rohitpaulk closed this issue 2 months ago
@rohitpaulk Are you able to provide the initial logs up until the first one or two generated tests so we can investigate?
For example:
$ poetry run cover-agent \
--source-file-path "templated_tests/python_fastapi/app.py" \
--test-file-path "templated_tests/python_fastapi/test_app.py" \
--code-coverage-report-path "templated_tests/python_fastapi/coverage.xml" \
--test-command "pytest --cov=. --cov-report=xml --cov-report=term" \
--test-command-dir "templated_tests/python_fastapi" \
--coverage-type "cobertura" \
--desired-coverage 70 \
--max-iterations 10
2024-05-21 09:30:48,069 - cover_agent.UnitTestGenerator - INFO - Running initial build/test command to generate coverage report: "pytest --cov=. --cov-report=xml --cov-report=term"
2024-05-21 09:30:49,060 - cover_agent.main - INFO - Current Coverage: 60.47%
2024-05-21 09:30:49,061 - cover_agent.main - INFO - Desired Coverage: 70%
2024-05-21 09:30:49,300 - cover_agent.UnitTestGenerator - INFO - Token count for LLM model gpt-4o: 1464
Streaming results from LLM model...
def test_current_date():
    """
    Test the /current-date endpoint by sending a GET request and checking the response status code and JSON body.
    """
    response = client.get("/current-date")
    assert response.status_code == 200
    assert "date" in response.json()
    assert response.json()["date"] == date.today().isoformat()

def test_add():
    """
    Test the /add/{num1}/{num2} endpoint by sending a GET request with two integers and checking the response status code and JSON body.
    """
    response = client.get("/add/3/5")
    assert response.status_code == 200
    assert response.json() == {"result": 8}
@rohitpaulk There's also a test_results.html that gets generated with a full breakdown containing each test and the errors along with it. Any chance you can include that as well?
Just checked test_results.html, and it mostly contains errors that suggest there are syntax errors (as expected). Example:
I don't have the logs from when I ran this, but I did find a file called "run.log" in case that helps. Contents:
Also found generated_prompt.md, which includes the reference file I started with:
Awesome. That was extremely helpful. So first of all, it looks like we'll need an indent for your test cases, like we do for Python classes in cover_agent/FilePreprocessor.py. We'll need someone with a bit more Ruby experience to take on that task, or you could provide instructions via the --additional-instructions flag (using GPT-4, not GPT-3.5). You could say something like this:
My Ruby script requires tests to start with "RSpec.describe API::CourseStageFeedbackSubmissionsController, type: :request do" and every line thereafter must be indented with 4 whitespaces. Fill in the remaining tests using this format.
What would be most helpful (and probably easiest for you) would be to modify generated_prompt.md manually and dump that into ChatGPT to see what results you get. That's, essentially, what's happening here, with some post-processing and subshell commands.
I tried with Java and JaCoCo coverage, but it appears to only support Python. Looking at CoverageProcessor.py, it appears cobertura is the only coverage_type. If you run my docker image docker run --rm -it --name cover-agent -e OPENAI_API_KEY=
I love this idea though and really want it to increase my Java code coverage on our projects.
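For anyone probing other languages: the tool reads coverage in the Cobertura XML format, and JaCoCo emits its own XML format, so a conversion step would likely be needed. As a rough illustration of what a cobertura-style reader does, here is a minimal sketch that pulls the overall line rate from such a report; the helper name is hypothetical, while the `line-rate` attribute on the root `<coverage>` element is part of the Cobertura format:

```python
import xml.etree.ElementTree as ET

def cobertura_line_rate(report_path: str) -> float:
    """Return overall line coverage (0.0-1.0) from a Cobertura XML report.

    Cobertura puts a `line-rate` attribute on the root <coverage>
    element; this is the kind of value a cobertura-style coverage
    processor compares against --desired-coverage.
    """
    root = ET.parse(report_path).getroot()
    return float(root.attrib["line-rate"])
```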
Has this tool been verified for Jest unit tests written in TypeScript?
@rohitpaulk and others:
I will work tomorrow on the prompt and logic area and will bring some improvements. Stay tuned; I'll be glad to hear your feedback afterwards.
So it doesn't support multiple languages yet? In the README.md this item is checked:
Being able to generate tests for different programming languages
I'm really interested in testing it with TypeScript and PHP.
This should bring significant improvements to the general usage, and specifically for non-python languages:
Any confirmation of it working with Ruby?
@jtoy et al, we finally got the Ruby example added to the repo. It's also part of our nightly regression testing so we'll know right away if support for Ruby breaks.
I'm going to close out this issue now since we've confirmed support for Ruby.
I was trying this on a Ruby codebase, and the suggested tests seemed to be Python tests. The README seems to mention that multi-language support is present.
Example of a generated test:
and here's what a test in the file currently looks like:
This is what my usage looks like: