openai / mle-bench

MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering.
https://openai.com/index/mle-bench/

Update make_submission.py #15

Closed smit23patel closed 4 weeks ago

smit23patel commented 1 month ago
  1. Error Handling for Metadata File: Added try-except blocks to handle potential errors when opening and reading the metadata file.

  2. Error Handling for Output File: Added error handling when writing to the output file to catch any IO errors.

  3. Logging: Improved logging to provide clearer messages in case of errors.
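The three changes above can be sketched as follows. This is a hypothetical illustration, not the actual patch: the real `make_submission.py` is not shown in this thread, so the function signature, file formats, and names here are assumptions.

```python
import json
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s: %(message)s")
logger = logging.getLogger(__name__)

def make_submission(metadata_path: str, output_path: str) -> bool:
    """Hypothetical sketch of the PR's changes; the real script may differ."""
    # (1) Error handling for the metadata file: wrap open/read in try-except.
    try:
        with open(metadata_path) as f:
            metadata = json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        logger.error("Failed to read metadata file %s: %s", metadata_path, e)
        return False

    # (2) Error handling for the output file: catch IO errors on write.
    try:
        with open(output_path, "w") as f:
            json.dump(metadata, f)
    except OSError as e:
        logger.error("Failed to write output file %s: %s", output_path, e)
        return False

    # (3) Clearer logging: report success explicitly as well.
    logger.info("Submission written to %s", output_path)
    return True
```

Returning `False` on failure (rather than letting the exception propagate) is one possible design choice; the PR description does not specify which behavior the patch adopted.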

thesofakillers commented 4 weeks ago

Hi. We do not judge these changes to be particularly necessary, so I will be closing this PR. Thank you.