princeton-nlp / SWE-bench

[ICLR 2024] SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
https://www.swebench.com
MIT License

Run inference and evaluation on SWE-bench faster by reusing a previously built env #132

Closed: Yuzz1020 closed this 3 weeks ago

Yuzz1020 commented 4 weeks ago

Describe the issue

Hi,

Thank you for releasing this awesome repo! I've enjoyed reading through it and trying it out.

I noticed that the major efficiency bottleneck is the time needed to clone the repository and build the conda environment, and this process seems to be repeated every time I run an evaluation. Is there a way to keep the conda environment I built previously and reuse it to save some time?
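Concretely, the kind of reuse I have in mind is just a "create only if missing" check before anything is rebuilt. A minimal sketch of the idea (my own illustration with made-up helper and env names, not the harness's actual code):

```python
import subprocess

def conda_env_exists(env_name: str) -> bool:
    """Check whether a conda env with this name already exists on disk."""
    # `conda env list` prints one environment per line; the name is the first column.
    out = subprocess.run(
        ["conda", "env", "list"], capture_output=True, text=True, check=True
    ).stdout
    names = {
        line.split()[0]
        for line in out.splitlines()
        if line.strip() and not line.startswith("#")
    }
    return env_name in names

def ensure_env(env_name: str, python_version: str = "3.9") -> None:
    """Build the env only on the first run; later runs reuse the cached one."""
    if conda_env_exists(env_name):
        print(f"Reusing previously built env: {env_name}")
        return
    subprocess.run(
        ["conda", "create", "-n", env_name, f"python={python_version}", "-y"],
        check=True,
    )

# Hypothetical per-task env name; real names depend on the task instance.
ensure_env("sweb_django__django-3.0")
```

The same check could be keyed on repo and version, so all task instances that share an environment would hit the cache instead of rebuilding it.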

Thank you for your help in advance!

Suggest an improvement to documentation

No response

Yuzz1020 commented 3 weeks ago

I figured this out. No worries!