openml / automlbenchmark

OpenML AutoML Benchmarking Framework
https://openml.github.io/automlbenchmark
MIT License

'NoResultError' raised by FLAML Framework During Benchmark Execution #623

Closed: tomsz92161 closed this issue 1 week ago

tomsz92161 commented 1 month ago

Hello,

I have just installed the AutoML Benchmark in an environment running Python 3.9.17. However, when I execute the command python runbenchmark.py flaml, I encounter the following error:

NoResultError: best_iteration is only defined when early stopping is used.

Is there a workaround to resolve this issue?

Thank you.

error_traceback.txt

israel-cj commented 1 month ago

I had the same problem; it was caused by flaml version 1.2.4. Force the latest version instead: python runbenchmark.py flaml:latest

PGijsbers commented 1 month ago

I'll look into what exactly is causing this issue, but in the meantime, I hope @israel-cj's fix works for you!

tomsz92161 commented 1 month ago

Thanks to both of you for the quick replies.

To enable execution with the "latest" tag, I added the following entry to the file ./resources/frameworks_latest.yaml:

flaml:
  version: latest

However, python runbenchmark.py flaml:latest raised a different error.

error_traceback_2.txt

israel-cj commented 4 weeks ago

For me, it worked when I also forced the setup mode with -s force (you only need to do this once): python runbenchmark.py flaml:latest -s force
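Putting the thread together, the workaround can be sketched as a short shell sequence. This is a sketch under the assumptions stated in the comments above: the file path resources/frameworks_latest.yaml and the flaml/version keys come from this thread, and the FRAMEWORKS_FILE variable is a hypothetical convenience for trying it outside the benchmark repository.

```shell
# 1) Append a 'latest' entry for FLAML to the frameworks definition file.
#    In the automlbenchmark repo this file is resources/frameworks_latest.yaml;
#    FRAMEWORKS_FILE lets you point at a throwaway copy for testing.
FRAMEWORKS_FILE="${FRAMEWORKS_FILE:-frameworks_latest.yaml}"
cat >> "$FRAMEWORKS_FILE" <<'EOF'
flaml:
  version: latest
EOF

# 2) Re-run the benchmark with the 'latest' tag, forcing the framework
#    setup once so the old flaml 1.2.4 environment is rebuilt:
#      python runbenchmark.py flaml:latest -s force
```

The benchmark invocation is left as a comment since it only makes sense from within a checkout of the automlbenchmark repository.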

PGijsbers commented 2 weeks ago

A couple of different issues were at play. In short: