nashid opened this issue 1 month ago
Hi @nashid, Thank you for your inquiry. We have now published the raw results of our benchmark (i.e. the traces of the agents, the predicted patches and the logs of the evaluation harness) on our main README. You can find them here: https://github.com/logic-star-ai/swt-bench/tree/master?tab=readme-ov-file#evaluation-results
Below is a JSON dump containing all successfully resolved instances for each approach.
Please refer to the patch prediction files or the harness logs (`_approach_/_model_/_instance_/extracted_patch.diff`) for the respective test cases.
@nielstron thanks for your response. If I understand correctly, the JSON refers to the list of resolved instances per approach. However, I am more interested in identifying which GitHub issues had correct test cases generated by these agents. Could you provide a list of those issues and the specific test cases generated for each one?
Hi, please look through the evaluation harness logs. Each log contains a detailed list of which test cases were predicted and which test cases are P->F, P->P, etc.
You can find these details for each run under `_approach_/_model_/_instance_` at https://files.sri.inf.ethz.ch/swt-bench/run_instance_swt_logs/
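As a minimal sketch (assuming the per-run files are directly downloadable under that URL with the same `approach/model/instance` layout, which you should verify against the directory listing), a single run's report could be fetched like this:

```python
import json
from urllib.request import urlopen

# Assumed layout: <base>/<approach>/<model>/<instance>/report.json
BASE = "https://files.sri.inf.ethz.ch/swt-bench/run_instance_swt_logs"
approach = "aider_gpt-4-1106-preview"
model = "aider_gpt-4-1106-preview"
instance = "astropy__astropy-7746"

url = f"{BASE}/{approach}/{model}/{instance}/report.json"
with urlopen(url) as resp:
    report = json.load(resp)  # keyed by instance id, see the dump below

print(json.dumps(report[instance], indent=2))
```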
For example, the file `aider_gpt-4-1106-preview/aider_gpt-4-1106-preview/astropy__astropy-7746/report.json` contains the following complete list of changed tests due to the prediction in `tests_pred`, where the `FAIL_TO_PASS` section would be the "correct test cases" you are referring to:
"astropy__astropy-7746": {
"patch_is_None": false,
"patch_exists": true,
"patch_successfully_applied": true,
"resolved": false,
"coverage_pred": 0.75,
"coverage_gold": 1.0,
"coverage_base": 0.5,
"coverage_delta_pred": 0.5,
"coverage_delta_gold": 1.0,
"added_f2p": 0,
"tests_base": {
"FAIL_TO_PASS": [],
"PASS_TO_PASS": [
"astropy/wcs/tests/test_wcs.py::test_inconsistent_sip",
...
"astropy/wcs/tests/test_wcs.py::test_error_message",
"astropy/wcs/tests/test_wcs.py::test_sip_broken",
"astropy/wcs/tests/test_wcs.py::test_broadcasting"
],
"FAIL_TO_FAIL": [],
"PASS_TO_FAIL": [],
"UNMATCHED": []
},
"tests_pred": {
"FAIL_TO_PASS": [],
"PASS_TO_PASS": [
"astropy/wcs/tests/test_wcs.py::test_inconsistent_sip",
....
"astropy/wcs/tests/test_wcs.py::test_sip_broken",
"astropy/wcs/tests/test_wcs.py::test_broadcasting"
],
"FAIL_TO_FAIL": [
"astropy/wcs/tests/test_wcs.py::test_empty_world2pix",
"astropy/wcs/tests/test_wcs.py::test_empty_pix2world_array",
"astropy/wcs/tests/test_wcs.py::test_empty_world2pix_array",
"astropy/wcs/tests/test_wcs.py::test_empty_pix2world"
],
"PASS_TO_FAIL": [],
"UNMATCHED": []
},
"tests_gold": {
"FAIL_TO_PASS": [
"astropy/wcs/tests/test_wcs.py::test_zero_size_input"
],
"PASS_TO_PASS": [
"astropy/wcs/tests/test_wcs.py::test_inconsistent_sip",
"astropy/wcs/tests/test_wcs.py::test_passing_ImageHDU",
...
"astropy/wcs/tests/test_wcs.py::test_sip_broken",
"astropy/wcs/tests/test_wcs.py::test_broadcasting"
],
"FAIL_TO_FAIL": [],
"PASS_TO_FAIL": [],
"UNMATCHED": []
}
}
}
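To read these buckets programmatically, here is a minimal sketch, assuming the report has already been loaded into a dict `report` (e.g. as in the fetch snippet above):

```python
# `report` is the parsed report.json, keyed by instance id (structure as dumped above).
entry = report["astropy__astropy-7746"]

pred_f2p = entry["tests_pred"]["FAIL_TO_PASS"]  # tests the prediction turns from failing to passing
gold_f2p = entry["tests_gold"]["FAIL_TO_PASS"]  # tests the golden (reference) suite turns from failing to passing

print("resolved:", entry["resolved"])
print("predicted FAIL_TO_PASS:", pred_f2p or "(none)")
print("golden FAIL_TO_PASS:", gold_f2p)
```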
That is, in the above case the model did not predict a single `FAIL_TO_PASS` case, but the golden suite contains exactly one, namely `test_zero_size_input`. The predicted tests can be found in `aider_gpt-4-1106-preview/aider_gpt-4-1106-preview/astropy__astropy-7746/extracted_patch.diff`; in this case they are the predicted cases shown under `FAIL_TO_FAIL`:
```diff
diff --git a/astropy/wcs/tests/test_wcs.py b/astropy/wcs/tests/test_wcs.py
index 85853e10e5..0fb7d22416 100644
--- a/astropy/wcs/tests/test_wcs.py
+++ b/astropy/wcs/tests/test_wcs.py
@@ -1093,3 +1093,31 @@ def test_keyedsip():
     assert isinstance( w.sip, wcs.Sip )
     assert w.sip.crpix[0] == 2048
     assert w.sip.crpix[1] == 1026
+def test_empty_pix2world():
+    # Test for passing empty lists/arrays to wcs_pix2world
+    wcs = WCS(get_pkg_data_filename('data/sip.fits'))
+    result = wcs.wcs_pix2world([], [], 0)
+    assert result == ([], [])
+
+def test_empty_world2pix():
+    # Test for passing empty lists/arrays to wcs_world2pix
+    wcs = WCS(get_pkg_data_filename('data/sip.fits'))
+    result = wcs.wcs_world2pix([], [], 0)
+    assert result == ([], [])
+
+def test_empty_pix2world_array():
+    # Test for passing empty numpy arrays to wcs_pix2world
+    wcs = WCS(get_pkg_data_filename('data/sip.fits'))
+    result = wcs.wcs_pix2world(np.array([]), np.array([]), 0)
+    assert result == (np.array([]), np.array([]))
+
+def test_empty_world2pix_array():
+    # Test for passing empty numpy arrays to wcs_world2pix
+    wcs = WCS(get_pkg_data_filename('data/sip.fits'))
+    result = wcs.wcs_world2pix(np.array([]), np.array([]), 0)
+    assert result == (np.array([]), np.array([]))
+
+from astropy.wcs import WCS
+import numpy as np
+from astropy.utils.data import get_pkg_data_filename
+
```
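To assemble the full list you are asking for (every instance for which an approach generated at least one correct, i.e. fail-to-pass, test, together with those tests), one could iterate over the logs. Below is a rough sketch, assuming `run_instance_swt_logs/` has been mirrored locally with the `approach/model/instance/report.json` layout shown above:

```python
import json
from pathlib import Path

LOGS = Path("run_instance_swt_logs")  # local mirror of the published logs (assumption)

# (approach, model, instance) -> predicted FAIL_TO_PASS tests, i.e. "correct" generated tests
correct_tests = {}
for report_path in LOGS.glob("*/*/*/report.json"):
    approach, model, instance = report_path.parts[-4:-1]
    with open(report_path) as f:
        report = json.load(f)
    entry = report.get(instance, {})
    f2p = entry.get("tests_pred", {}).get("FAIL_TO_PASS", [])
    if f2p:
        correct_tests[(approach, model, instance)] = f2p

for (approach, model, instance), tests in sorted(correct_tests.items()):
    print(f"{approach}/{model}/{instance}:")
    for t in tests:
        print(f"  {t}")
```

Swapping `tests_pred` for `tests_gold` in the sketch would list the reference (golden) tests instead.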
We also provide a host of tooling for assessing the provided logs; it lives in the `figures` folder of the repository and is specifically the code used to compute the numbers for the tables and figures in the final version of the paper.

I hope this helps!

UPDATE: added more concrete examples.
**Describe the issue**
Congrats on your excellent work on Code Agents for automated test generation!
I am interested in gaining a deeper understanding of the results presented in your paper, especially regarding the performance of different agents at generating test cases.
Could you kindly provide a list of all SWE-BENCH instances where different agents (e.g., LIBRO, SWE-AGENT, AUTOCODEROVER, etc.) successfully generated test cases?
Additionally, could you share the test cases generated by these agents for the respective instances?
Thank you again for sharing the artifact and for considering my request!