Closed: rucnyz closed this issue 2 weeks ago
Tagging @carlosejimenez to help with this
Commit dee44dcd41e1a69d222d2661a069be1e7061c112 should fix this. I've added model_name_or_path as an argument, so the field is never a NoneType.
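Roughly, the change amounts to something like the sketch below; the exact argument names and wiring are my illustration of the idea, not the literal diff:

```python
# Illustrative sketch only -- not the actual commit. The idea is to take
# model_name_or_path as an explicit argument so the recorded field can
# never be None, even when no adapter (peft_path) is used.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model_name_or_path", type=str, required=True,
                    help="Model identifier recorded in the results file.")
parser.add_argument("--peft_path", type=str, default=None,
                    help="Optional adapter path; None for base models.")
args = parser.parse_args()

# The results entry now always comes from an explicit, non-None argument
# instead of falling back to peft_path.
result_model_name = args.model_name_or_path
```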
Thank you so much for the extremely detailed report + discussion, it made the fix very easy to write. Really appreciate it! 😄
Describe the bug
In the file inference/run_llama.py, lines 299-304, model_name_or_path is set to peft_path. If the script is run with a model that doesn't use an adapter (for example, princeton-nlp/SWE-Llama-13b), the model_name_or_path stored in the results will be None. This then causes an error in run_evaluation.py at line 134.
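For context, the pattern in those lines presumably looks roughly like the following; this is a reconstruction from the behaviour described above, not the verbatim code, and the helper name save_results is hypothetical:

```python
# Hypothetical reconstruction of the problematic pattern (names are
# illustrative). peft_path is None for models without an adapter, so the
# stored model_name_or_path ends up as None.
def save_results(peft_path, outputs):
    return {
        "model_name_or_path": peft_path,  # None for e.g. princeton-nlp/SWE-Llama-13b
        "outputs": outputs,
    }

# run_evaluation.py later assumes this field is a string; calling string
# methods on None (e.g. None.split("/")) raises an AttributeError.
result = save_results(peft_path=None, outputs=[])
assert result["model_name_or_path"] is None  # this is the bug
```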
Output:
Steps/Code to Reproduce
Run inference/run_llama.py using a model without an adapter, such as princeton-nlp/SWE-Llama-13b.
Then run run_evaluation.py and observe the error at line 134.
Expected Results
No error is thrown.
Actual Results
System Information
swebench = 1.1.5