princeton-nlp / SWE-bench

[ICLR 2024] SWE-bench: Can Language Models Resolve Real-world Github Issues?
https://www.swebench.com
MIT License

Make `run_report` more intuitive when using `instance_ids` filter during evaluation #207

Closed: carlosejimenez closed this 3 months ago

carlosejimenez commented 3 months ago

Right now, when the run_report is generated and saved at the end of an evaluation run, the instance_ids filter is ignored. This can be confusing and produces incorrect values in the report for fields like error_ids, among others.

This PR changes the make_run_report function to filter the dataset under consideration and compare only against the provided instance_ids, rather than against the complete dataset's unfiltered ids.

This should make it easier to see which ids actually had errors during the run, and to interpret performance from the run_report when the instance_ids filter is in use.
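
The sketch below illustrates the idea described above: restrict the report to the filtered id set before computing fields like error_ids. It is a minimal, hypothetical illustration, not the actual code in swebench/harness/utils.py; the helper name filter_dataset_by_instance_ids and the report fields shown are assumptions for the example.

```python
# Illustrative sketch only: the real make_run_report lives in
# swebench/harness/utils.py and differs from this. The helper and field
# names here are assumed for demonstration, not the SWE-bench API.

def filter_dataset_by_instance_ids(dataset, instance_ids):
    """Keep only the instances that were actually selected for this run."""
    if not instance_ids:
        return dataset  # no filter supplied: consider the full dataset
    wanted = set(instance_ids)
    return [inst for inst in dataset if inst["instance_id"] in wanted]

def make_run_report(dataset, instance_ids, resolved_ids, error_ids):
    """Build a report relative to the filtered id set, not the full dataset."""
    considered = filter_dataset_by_instance_ids(dataset, instance_ids)
    considered_ids = {inst["instance_id"] for inst in considered}
    return {
        "total_instances": len(considered_ids),
        "resolved_ids": sorted(considered_ids & set(resolved_ids)),
        # error_ids is now restricted to instances that were actually run
        "error_ids": sorted(considered_ids & set(error_ids)),
    }

if __name__ == "__main__":
    dataset = [{"instance_id": f"repo__pkg-{i}"} for i in range(5)]
    report = make_run_report(
        dataset,
        instance_ids=["repo__pkg-1", "repo__pkg-3"],
        resolved_ids=["repo__pkg-1"],
        error_ids=["repo__pkg-3", "repo__pkg-4"],  # pkg-4 was not in the run
    )
    print(report)  # error_ids contains only repo__pkg-3
```

Intersecting with considered_ids is what prevents ids outside the filter (like repo__pkg-4 above) from leaking into error_ids, which is the confusing behavior this PR fixes.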

codecov[bot] commented 3 months ago

Codecov Report

Attention: Patch coverage is 73.68421% with 5 lines in your changes missing coverage. Please review.

Project coverage is 54.62%. Comparing base (a8df201) to head (312b914). Report is 4 commits behind head on main.

| Files | Patch % | Lines |
|---|---|---|
| swebench/harness/utils.py | 66.66% | 5 Missing :warning: |
Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##             main     #207      +/-   ##
==========================================
- Coverage   58.24%   54.62%   -3.62%
==========================================
  Files          20       20
  Lines        1971     1977       +6
==========================================
- Hits         1148     1080      -68
- Misses        823      897      +74
```

:umbrella: View full report in Codecov by Sentry.