SirRob1997 closed this issue 4 years ago
I'm hesitant to support this case as a built-in for two reasons:
There's no easy way to see the hparams corresponding to the best run, but a starting point for writing this functionality yourself would be here. You want to change this to something like `map(lambda _, run_records: (self.run_acc(run_records), run_records[0]["hparams_seed"]))` and then propagate that `hparams_seed` all the way to the final results table.
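For illustration, a standalone sketch of that idea, written with plain dicts and lists rather than DomainBed's actual records/query API (so the record keys and the `best_hparams` name below are assumptions, not the library's interface), might look like:

```python
import statistics

def best_hparams(records):
    """Group run records by hparams_seed, score each group by mean
    validation accuracy, and return (val_acc, hparams_seed, hparams)
    for the best-scoring group.

    `records` is assumed to be a list of dicts with keys
    'hparams_seed', 'hparams', and 'val_acc' -- a simplified stand-in
    for DomainBed's record format.
    """
    groups = {}
    for r in records:
        groups.setdefault(r["hparams_seed"], []).append(r)

    scored = []
    for seed, run_records in groups.items():
        # Analogous to self.run_acc(run_records): average the runs that
        # share one hyperparameter setting.
        val_acc = statistics.mean(r["val_acc"] for r in run_records)
        scored.append((val_acc, seed, run_records[0]["hparams"]))

    # The (acc, hparams_seed) pairs are what you would propagate to the
    # final results table.
    return max(scored, key=lambda t: t[0])

records = [
    {"hparams_seed": 0, "hparams": {"lr": 1e-3}, "val_acc": 0.81},
    {"hparams_seed": 0, "hparams": {"lr": 1e-3}, "val_acc": 0.79},
    {"hparams_seed": 1, "hparams": {"lr": 5e-5}, "val_acc": 0.85},
]
print(best_hparams(records))  # (0.85, 1, {'lr': 5e-05})
```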
Yeah, I can see the reasons, thanks for giving me a starting point!
Just wanted to flag for you that this is implemented now in https://github.com/facebookresearch/DomainBed/blob/master/domainbed/scripts/list_top_hparams.py . It's still experimental so there might be bugs / frequent breaking changes.
Ah cool, I actually had a quick implementation running for myself as well. Yours seems a little cleaner though!
As it is currently implemented, launching a sweep with `n_trials > 1` allows reporting the std. deviation of the best performing hyperparameter setting under the different validation techniques. If I understood the implementation correctly, this computes `n_trials` different in- and out-splits and runs them for each of the sampled hyperparameter settings.

Users with limited resources who are implementing a new algorithm might not want to do that. A more convenient and faster option would let users tune hyperparameters first and then run ONLY the best performing setting for each of the validation techniques on `n_trials` additional seeds, reporting the std. deviation over the different in- and out-splits (a rough sketch of what I mean is below).

Also: what is the current way of obtaining the best performing hyperparameter setting for each of the validation techniques?
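A rough sketch of the two-stage idea, assuming stage 1 (the tuning sweep with one trial per `hparams_seed`) has already run. The record format and the command-line flags are meant to mirror `domainbed/scripts/train.py` but should be treated as assumptions, not DomainBed's exact interface:

```python
import statistics

def stage_two_commands(records, n_trials,
                       base_cmd="python -m domainbed.scripts.train"):
    """Given stage-1 tuning records, emit train commands that rerun
    ONLY the best hparams_seed on `n_trials` additional trial seeds.

    `records` is assumed to be a list of dicts with 'hparams_seed'
    and 'val_acc' keys (illustrative, not DomainBed's format).
    """
    by_seed = {}
    for r in records:
        by_seed.setdefault(r["hparams_seed"], []).append(r["val_acc"])

    # Pick the hyperparameter seed with the highest mean validation
    # accuracy, then fix it and vary only the trial seed.
    best_seed = max(by_seed, key=lambda s: statistics.mean(by_seed[s]))
    return [
        f"{base_cmd} --hparams_seed {best_seed} --trial_seed {t}"
        for t in range(n_trials)
    ]

records = [
    {"hparams_seed": 0, "val_acc": 0.81},
    {"hparams_seed": 1, "val_acc": 0.85},
]
for cmd in stage_two_commands(records, n_trials=3):
    print(cmd)
```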