Closed dev-rinchin closed 2 years ago
I suppose it's based on https://github.com/openml/automlbenchmark/issues/264. But no, we did not discuss bumping the `lightautoml` version for the experiments.
@dev-rinchin the experiments are already underway. To keep things clear we keep framework versions fixed through the experiments (unless there are technical issues, in which case we allow an update which specifically only targets those technical issues). We do regret it's taking a few months instead of a few weeks (versions were fixed end of September), but still feel we're in a fair window. For this reason I will close the PR.
Note that it's still possible to benchmark any `lightautoml` version with the tool by using custom framework definitions, or the most recent versions by using `latest` (from the repo's `master` branch) or `stable` (from PyPI).
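As a rough illustration of that option: a custom framework definition is a small YAML entry in the user's own config (e.g. a personal `frameworks.yaml` picked up alongside the benchmark's defaults). The exact fields below are a minimal sketch based on the tool's general definition format, and the pinned version number is purely illustrative, not one used in the experiments:

```yaml
# Hypothetical custom framework definition (user config, not the
# official experiment definitions). Pins lightautoml to a specific
# release instead of the fixed experiment version.
LightAutoML_custom:
  extends: lightautoml   # reuse the built-in integration
  version: "0.3.0"       # illustrative version, not an endorsement
```

With such a definition in place, the custom entry can be passed to the benchmark runner like any built-in framework name.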
I didn't discuss it with Pieter; I just discovered this PR: https://github.com/openml/automlbenchmark/commit/31f18e8ce9d161d8cbb773d1c384737b964c24a6, so I thought we could still update the version and discuss it here. As I see now, that was wrong; thank you for the answer.
Sorry for the confusion. That version bump was indeed to fix technical difficulties (in this case, the framework wouldn't install). The only changes made after September to that framework were to fix a naming issue for a function and to change the way dependencies were installed. This allowed us to install and evaluate `mlr3automl`. The core optimization of `mlr3automl` is identical to the version released in September.
@PGijsbers is this upgrade the result of a discussion with you? Surprised to see this PR in the definitions file for the current experiment, targeting `stable-v2`.