stanfordnlp / wge

Workflow-Guided Exploration: sample-efficient RL agent for web tasks
https://stanfordnlp.github.io/wge/

How to test the trained model? #9

Closed kahnlee closed 5 years ago

kahnlee commented 6 years ago

Hi,

Thanks for the great repo. I trained a model on the task "login-user" and saved the checkpoints. I want to test the model, but the documentation doesn't explain how. Is there a test program? If not, could you share any guidance or tips for testing the model?

Thanks. SeungKwon

ezliu commented 6 years ago

The main training loop already periodically tests the model. The frequency is controlled by config.log.evaluate (the number of episodes between evaluations). If you want to evaluate from a checkpointed model, you can do the following:

Assuming your checkpointed model is saved in an experiment directory N_experiment_name, you can reload the model by running python main.py N. This will also reload the config saved in N_experiment_name/config.txt, so if you want to test immediately, you can set config.log.evaluate = 1 in that config.
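For example, a minimal sketch of that workflow, assuming an experiment directory data/experiments/0_login-user (the experiment ID, directory name, and editor invocation are illustrative, not part of the repo):

# 1. In the saved config, set the evaluation interval to one episode
#    (the config.log.evaluate key mentioned above):
$EDITOR data/experiments/0_login-user/config.txt   # set log.evaluate = 1
# 2. Reload the checkpointed model; with the interval at 1,
#    the reloaded model is evaluated right away:
python main.py 0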

kahnlee commented 5 years ago

I tested the model following your explanation. Thanks for the kind answer.

bhoomit commented 5 years ago
(env) MacBook-Air:wge sonal$ python main.py 41
usage: main.py [-h] [-s CONFIG_STRINGS] [-c CHECK_COMMIT] [-p]
               [-d DESCRIPTION] [-n NAME] [-r SEED] -t TASK
               config_paths [config_paths ...]
main.py: error: the following arguments are required: -t/--task

This doesn't work anymore.

ppasupat commented 5 years ago

Could you try adding the -t option to specify the task name? The config data/experiments/41_[somestring]/config.txt should have the task name listed.
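For example, a sketch of the fixed invocation (the task name login-user is illustrative; use whichever task your saved config lists):

python main.py 41 -t login-user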