An open source AutoML toolkit for automating the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning.
I have successfully used NNI for classic HPO experiments, leveraging configurations such as those detailed below. My setup includes a search_space.json, config.yml, and a trial.py script:
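The first two are roughly as follows; the exact bounds, trial counts, and tuner shown here are simplified placeholders rather than my precise settings.

search_space.json:

{
    "x": {"_type": "uniform", "_value": [-5, 5]},
    "y": {"_type": "uniform", "_value": [-5, 5]}
}

config.yml:

searchSpaceFile: search_space.json
trialCommand: python3 trial.py
trialCodeDirectory: .
trialConcurrency: 1
maxTrialNumber: 20
tuner:
  name: TPE
  classArgs:
    optimize_mode: minimize
trainingService:
  platform: local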
trial.py:

import nni

def f(x, y):
    return x - y

def main(params):
    x, y = params['x'], params['y']
    result = f(x, y)
    nni.report_final_result(result)

if __name__ == "__main__":
    params = nni.get_next_parameter()
    main(params)
Now, I am interested in exploring the Retiarii framework to run experiments that evolve neural network architectures. Could anyone provide guidance or share examples on how to set up the necessary files and configuration for a Retiarii experiment, especially how (or whether) a config file comes into play?
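From the NNI 2.x documentation, my rough understanding is that Retiarii is driven from Python rather than from a search_space.json/config.yml pair. The sketch below is what I imagine the setup might look like, adapted from the "Hello NAS"-style tutorials; the model space, the RegularizedEvolution strategy, and all the numbers are placeholder guesses on my part, not a working setup:

import nni
import torch.nn.functional as F
import nni.retiarii.nn.pytorch as nn
import nni.retiarii.strategy as strategy
from nni.retiarii import model_wrapper
from nni.retiarii.evaluator import FunctionalEvaluator
from nni.retiarii.experiment.pytorch import RetiariiExperiment, RetiariiExeConfig


@model_wrapper                      # marks the class as a searchable model space
class ModelSpace(nn.Module):
    def __init__(self):
        super().__init__()
        # LayerChoice: the strategy picks one of the candidate layers per sampled model
        self.conv = nn.LayerChoice([
            nn.Conv2d(1, 16, 3, padding=1),
            nn.Conv2d(1, 16, 5, padding=2),
        ])
        # ValueChoice: searches over a scalar hyper-parameter inside the architecture
        self.dropout = nn.Dropout(nn.ValueChoice([0.25, 0.5]))
        self.fc = nn.Linear(16 * 28 * 28, 10)   # assumes 28x28 single-channel input

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = self.dropout(x)
        return self.fc(x.flatten(1))


def evaluate_model(model_cls):
    # Placeholder evaluator: build the sampled architecture, train and test it,
    # then report the final metric back to NNI, analogous to an HPO trial.
    model = model_cls()
    accuracy = 0.0  # ... real training / evaluation loop would go here ...
    nni.report_final_result(accuracy)


if __name__ == "__main__":
    model_space = ModelSpace()
    evaluator = FunctionalEvaluator(evaluate_model)
    search_strategy = strategy.RegularizedEvolution()   # evolutionary search over architectures

    exp = RetiariiExperiment(model_space, evaluator, [], search_strategy)
    exp_config = RetiariiExeConfig('local')
    exp_config.experiment_name = 'retiarii_example'
    exp_config.trial_concurrency = 2
    exp_config.max_trial_number = 20
    exp.run(exp_config, 8080)

In particular, I would like to confirm whether RetiariiExeConfig fully replaces config.yml here, and how the reporting I do in trial.py maps onto the evaluator.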