nus-apr / auto-code-rover

A project-structure-aware autonomous software engineer aiming for autonomous program improvement. Resolved 30.67% of tasks (pass@1) on SWE-bench lite and 38.40% of tasks (pass@1) on SWE-bench verified, with each task costing less than $0.7.

How to interpret results for Auto Code Rover on SWE-bench? #47

Open · ramsey-coding opened this issue 6 months ago

ramsey-coding commented 6 months ago

I am trying to understand results for Auto Code Rover and SWE-Agent.

Can you please let me know the format of the SWE-Agent test results in: https://github.com/nus-apr/auto-code-rover/tree/main/results/swe-agent-results

What are these cost_2_1, cost_2_2, and cost_2_3 directories?

How should I interpret the results in this directory?

Also, for Auto Code Rover, I see acr-run-1, acr-run-2, and acr-run-3. Which one should I use? Which result do you report in the paper?

ramsey-coding commented 6 months ago

What's the difference between the following fields?

        "generated": 249,
        "with_logs": 249,
        "applied": 245,
        "resolved": 48
zhiyufan commented 6 months ago

cost_X_Y: X is the cost budget (in USD) used when running SWE-agent in our experiment, and Y is the trial number of the repetition. In this case, we used a budget of 2 USD and repeated the experiment 3 times. Inside each cost_X_Y directory, the *.traj files are the conversation logs for each task instance in SWE-bench, and all_pred.jsonl contains all the generated patches.
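
If you want to script over one of those runs, something like the minimal sketch below reads all_pred.jsonl and counts non-empty patches. The checkout path and the "model_patch" field name are assumptions based on the usual SWE-bench prediction format, so check one line of the file before relying on them.

```python
import json
from pathlib import Path


def load_predictions(run_dir: str) -> list[dict]:
    """Read every prediction record from one cost_X_Y run directory."""
    preds = []
    with open(Path(run_dir) / "all_pred.jsonl") as f:
        for line in f:
            line = line.strip()
            if line:
                preds.append(json.loads(line))
    return preds


if __name__ == "__main__":
    # Hypothetical local path; adjust to wherever the results are checked out.
    preds = load_predictions("results/swe-agent-results/cost_2_1")
    # "model_patch" is assumed to be the patch field, as in the usual
    # SWE-bench prediction format; verify against one line of the file.
    non_empty = [p for p in preds if p.get("model_patch")]
    print(f"{len(preds)} predictions, {len(non_empty)} with a non-empty patch")
```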

For AutoCodeRover, the acr-run-1, acr-run-2, and acr-run-3 results align with Table 3 in the paper ("In our environment"), the ACR column.

The details on how the stats are generated can be found here: https://github.com/yuntongzhang/SWE-bench/blob/main/metrics/report.py#L264C5-L264C21
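
Roughly, those four counters form a funnel over the task instances: a patch was generated, evaluation logs exist for it, the patch applied cleanly, and finally the tests mark the task as resolved. The sketch below only illustrates that relation; the field names are made up for illustration and are not the ones used in report.py.

```python
from dataclasses import dataclass


@dataclass
class InstanceStatus:
    """Per-task outcome; the field names here are illustrative only."""
    has_patch: bool        # a patch was generated for the task
    has_eval_log: bool     # an evaluation log exists for the task
    patch_applied: bool    # the patch applied cleanly to the repository
    tests_resolved: bool   # the tests passed after applying the patch


def summarize(statuses: list[InstanceStatus]) -> dict[str, int]:
    """Tally the four counters reported per run (each is a count of tasks)."""
    return {
        "generated": sum(s.has_patch for s in statuses),
        "with_logs": sum(s.has_eval_log for s in statuses),
        "applied": sum(s.patch_applied for s in statuses),
        "resolved": sum(s.tests_resolved for s in statuses),
    }
```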