This PR includes some commits that were merged into master directly before switching to the dev branch (regarding EvalPipeline using MetaDataset and something about coveralls).
Fixes #227 and parts of #226.
Besides a bit of cleanup and some fixes, we now have:
- `datasets` dict in config, which should be `datasetname: datasetpath` (see the config sketch after this list). Results now have the form `datasetname/op/...`, which makes it easy to specify on which datasets ops should be executed. This is still a bit experimental, but the default works quite well: `train/train_op` is executed every step in train mode, `train/log_op` and `validation/log_op` are logged every `log_freq` steps, and `validation/eval_op` runs after each epoch (including callbacks for it, the results of which are logged, too). For backwards compatibility, if no `datasets` dict is found, it is set to `train: dataset, validation: validation_dataset`.
- shortcut for easier resuming and evaluation: `-p <rundir/configs/config.yaml>` translates to `-b <rundir/configs/config.yaml> -p <rundir>`, so you don't have to specify project and config separately anymore.
- support for evaluation functors: running in test mode (without `-t`) and specifying `--eval_functor <keypath>` will initialize the object under `keypath` by passing it the config, and replace `validation/eval_op` with the functor's `__call__` method, passing `model, **batch` to it. If the functor has a `callbacks` attribute, they will be used.
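For illustration, the `datasets` entry in a config could look like this (a minimal sketch; the dataset import paths are placeholders):

```yaml
# Sketch of the new datasets dict; the paths below are placeholders.
datasets:
  train: mypackage.data.TrainDataset
  validation: mypackage.data.ValDataset
```

Ops are then addressed per dataset key, e.g. `train/train_op` or `validation/eval_op`.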
With the last two points, you can write a simple functor class to evaluate existing models without changing any code related to that model, e.g.: `edflow -p <path to config in project folder> --eval_functor <path to functor>`
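A minimal sketch of such a functor (the class name, batch key, and callbacks structure below are made up for illustration, not part of edflow):

```python
class Evaluator(object):
    """Hypothetical evaluation functor for use with --eval_functor."""

    def __init__(self, config):
        # The object under the --eval_functor keypath is initialized
        # with the config.
        self.config = config
        # Optional: if a `callbacks` attribute is present, its callbacks
        # are used on the evaluation results (exact structure assumed).
        self.callbacks = {}

    def __call__(self, model, **batch):
        # Replaces validation/eval_op: called with the model and the
        # unpacked batch; the returned dict ends up in the results.
        return {"outputs": model(batch["image"])}
```

Pointing `--eval_functor` at the keypath of such a class is all that's needed; the model itself stays untouched.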