Closed kahnlee closed 5 years ago
The main training loop already periodically tests the model. The frequency is controlled by config.log.evaluate (the number of episodes between evaluations). If you want to evaluate from a checkpointed model, you can do the following:

Assuming your checkpointed model is saved in an experiment directory N_experiment_name, you can reload it by running python main.py N. This will also reload the config saved in N_experiment_name/config.txt, so if you want to evaluate immediately, you can set config.log.evaluate = 1 in that config.
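For reference, the relevant setting in the saved config might look something like this. This is only a sketch: the exact nesting and key names are an assumption based on the config.log.evaluate path (the repo appears to use a HOCON-style config format), so check your own N_experiment_name/config.txt:

```
log {
  evaluate = 1   # evaluate every episode after reloading the checkpoint
}
```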
I tested the model following your explanation. Thanks for the kind answer.
(env) MacBook-Air:wge sonal$ python main.py 41
usage: main.py [-h] [-s CONFIG_STRINGS] [-c CHECK_COMMIT] [-p]
[-d DESCRIPTION] [-n NAME] [-r SEED] -t TASK
config_paths [config_paths ...]
main.py: error: the following arguments are required: -t/--task
This doesn't work anymore.
Could you try adding the -t option specifying the task name? The config data/experiments/41_[somestring]/config.txt should have the task name listed.
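Putting the two steps together, a minimal sketch of recovering the task name from a saved config and rebuilding the command looks like this. Note the assumptions: the config is mocked with a temp file here, and the "task = login-user" line format is hypothetical; the real key name and layout in wge's config.txt may differ.

```shell
# Stand-in for data/experiments/41_[somestring]/config.txt, which we can't
# assume exists here; replace with the real path in practice.
demo_cfg=$(mktemp)
printf 'task = login-user\n' > "$demo_cfg"

# Pull the task name out of the config so it can be passed back via -t.
task=$(sed -n 's/^task = //p' "$demo_cfg")

# Print (rather than run) the reload command for experiment 41.
echo "python main.py 41 -t $task"
```

Running the last line's printed command (instead of echoing it) should satisfy the required -t/--task argument from the usage error above.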
Hi,
Thanks for the great repo. I trained a model on the task "login-user" and saved the checkpoints. I want to test the model, but the documentation doesn't explain how to do this. Is there a test program? Any guidance or tips for testing the model would be appreciated.
Thanks. SeungKwon