Avsecz / gin-train

Tracking ML experiments using gin-config, wandb, comet.ml and S3.
MIT License

Hyper-parameter optimization #4

Open Avsecz opened 5 years ago

Avsecz commented 5 years ago

The main goal would be to define an Objective analogous to kopt's CompileFN, but now using gin-config. The arguments of Objective would be the same as the arguments to gin_train, except that the gin-config files would be normal gin files which get overridden using gin bindings (https://github.com/Avsecz/gin-train/blob/master/gin_train/cli/gin_train.py#L187). E.g. either pass them to parse_config_files_and_bindings as bindings, or set them explicitly with:

gin.bind_parameter('supernet.num_layers', 5)
gin.bind_parameter('supernet.weight_decay', 1e-3)

where the values can be any valid Python object (lists, tuples, dicts, strings). Note that if we use bind_parameter, then finalize() should only be called after all parameters have been bound.
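For the first route, a minimal sketch of passing bindings directly to parse_config_files_and_bindings ("model.gin" is just a placeholder for one of the normal gin files):

import gin

# Each binding is a single line of gin syntax, so the right-hand side can be
# any valid Python literal. "model.gin" stands in for a normal gin file.
bindings = [
    "supernet.num_layers = 5",
    "supernet.weight_decay = 1e-3",
]
gin.parse_config_files_and_bindings(["model.gin"], bindings)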

import json

def config2bindings(config):
    return [f"{k} = {json.dumps(v)}" for k, v in config.items()]

config = {"asd": [1, 2, 3],
          "dsa": 10,
          "dsads": "11",
          "dsasdsadas": {"a": 1}}

bindings = config2bindings(config)
for p in bindings:
    print(p)
# asd = [1, 2, 3]
# dsa = 10
# dsads = "11"
# dsasdsadas = {"a": 1}

Both approaches assume that the dictionary is a flat key-value mapping; values may themselves be dictionaries or lists, but these will not be interpreted as nested variables.

Note: note_params should be used to keep track of the hyper-parameter optimization study and the run ID.

Additional arguments to Objective

Multiple Objective versions would need to be implemented, one for each hyper-parameter optimization system.
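A rough sketch of the shared core that each backend-specific version could wrap (gin_files, train_fn and note_params here are placeholders rather than the actual gin_train signature):

import json

import gin


def config2bindings(config):
    return [f"{k} = {json.dumps(v)}" for k, v in config.items()]


class Objective:
    """Shared core of a backend-specific Objective (rough sketch only).

    gin_files, train_fn and note_params stand in for the real gin_train
    arguments; the exact signature would mirror gin_train.
    """

    def __init__(self, gin_files, train_fn, note_params=None):
        self.gin_files = gin_files
        self.train_fn = train_fn
        self.note_params = note_params or {}

    def __call__(self, config):
        # Fresh gin state per trial, then override the base config with the
        # sampled hyper-parameters.
        gin.clear_config()
        gin.parse_config_files_and_bindings(self.gin_files, config2bindings(config))
        # note_params would carry the study name / run ID for tracking.
        return self.train_fn(note_params=self.note_params)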

Supported backends:

For more advanced scenarios we would probably need to implement the Trainable class ourselves.
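If Ray Tune were the backend, a very rough sketch using Tune's class-based Trainable API could look like the following (build_model, fit_one_epoch and "model.gin" are placeholders, not existing gin-train code):

import json

import gin
from ray import tune


class GinTrainable(tune.Trainable):
    """Sketch of a Trainable that applies the sampled config as gin bindings."""

    def setup(self, config):
        # Start each trial from a clean gin state, then bind the sampled
        # hyper-parameters on top of the base gin file.
        gin.clear_config()
        bindings = [f"{k} = {json.dumps(v)}" for k, v in config.items()]
        gin.parse_config_files_and_bindings(["model.gin"], bindings)
        self.model = build_model()  # hypothetical gin-configured factory

    def step(self):
        # Report one metric per training iteration; names are placeholders.
        val_loss = self.model.fit_one_epoch()  # hypothetical
        return {"val_loss": val_loss}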

Hoeze commented 5 years ago

+1 for ray tune! I might look into this if I have to do more with this kind of problem.