Open fze0012 opened 1 year ago
The best score is known only for the tabular benchmarks. For the nn
benchmarks the following should work:
```python
from hpobench.benchmarks.ml import TabularBenchmark

b = TabularBenchmark(model="nn", task_id=31)
b.global_minimums
```
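Once the global minimum is known, simple regret is just the gap between the best observation so far and that optimum. A minimal sketch (the numeric values below are placeholders, not taken from any actual benchmark; in practice the optimum would come from `b.global_minimums` as shown above):

```python
# Sketch: computing simple regret against a known global minimum.
def simple_regret(observed_losses, best_known_loss):
    """Gap between the best loss observed so far and the known optimum."""
    return min(observed_losses) - best_known_loss

observed = [0.42, 0.35, 0.31, 0.33]   # placeholder losses from an optimization run
best_known = 0.30                     # placeholder global minimum
print(round(simple_regret(observed, best_known), 6))  # -> 0.01
```

Inference regret works the same way, except the loss plugged in is the test loss of the incumbent rather than the best observed validation loss.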
In this file, what is the meaning of the prefix ori, e.g. ori-test?
Hi,
This docstring is borrowed from the NASBench-201 paper release, so the actual details can be found here. It most likely indicates the numbers on the original test set.
I shall close this issue for now, as this is not specific to HPOBench. Please feel free to reopen or ask any further questions.
https://github.com/automl/HPOBench/blob/47bf141f79e6bdfb26d1f1218b5d5aac09d7d2ce/hpobench/benchmarks/nas/nasbench_201.py#L262-L264 For test_accuracies and test_losses, why is the valid_key used rather than the test_key?
For different task numbers, how can I find the best results for calculating the simple or inference regret?