broadinstitute / AutoTrain

Using RL to solve overfitting in neural networks

Measuring performance of the agent #10

Open jccaicedo opened 3 years ago

jccaicedo commented 3 years ago

The agent interacts with the environment to learn how to solve a classification problem. If we let the agent learn for N episodes, how can we track its performance? How do we test it after training is done?

Here are several performance metrics that we can track during training (per episode):

  1. Final training accuracy and training loss.
  2. Total number of actions vs effective number of training steps.
  3. Validation accuracy on the hold-out set.
  4. Execution time (not under the agent's direct control, but useful to track).
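The four per-episode metrics above could be collected with a small tracker like the following sketch. The class and field names (`EpisodeMetrics`, `MetricsTracker`, etc.) are hypothetical, not part of the AutoTrain codebase; the sketch just shows one way to keep the metrics together per episode and query the best one:

```python
from dataclasses import dataclass


@dataclass
class EpisodeMetrics:
    """Hypothetical container for the per-episode quantities listed above."""
    episode: int
    train_accuracy: float   # final training accuracy (metric 1)
    train_loss: float       # final training loss (metric 1)
    total_actions: int      # all actions taken by the agent (metric 2)
    effective_steps: int    # actions that triggered a training step (metric 2)
    val_accuracy: float     # accuracy on the hold-out set (metric 3)
    wall_time_s: float      # execution time in seconds (metric 4)

    @property
    def action_efficiency(self) -> float:
        # Fraction of actions that resulted in an effective training step.
        return self.effective_steps / max(self.total_actions, 1)


class MetricsTracker:
    """Accumulates metrics over episodes during agent training."""

    def __init__(self) -> None:
        self.history: list[EpisodeMetrics] = []

    def log(self, m: EpisodeMetrics) -> None:
        self.history.append(m)

    def best_episode(self) -> EpisodeMetrics:
        # "Best" here means highest hold-out validation accuracy,
        # which is one reasonable selection criterion among several.
        return max(self.history, key=lambda m: m.val_accuracy)
```

Ratio-style metrics such as `action_efficiency` make metric 2 easy to compare across episodes of different lengths.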

After the agent finishes training, we can evaluate the resulting classifier in several ways:

  1. Make the agent train a classifier from scratch and validate it on a test set (a second hold-out).
  2. Take the best classifier trained by the agent during the entire training session and evaluate it on the test set.
  3. Make the agent train a classifier on another (related) problem and evaluate it there.
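Options 1 and 2 both reduce to measuring a trained classifier's accuracy on a held-out test set. A minimal, framework-agnostic sketch (the `predict` callable and `evaluate_on_test_set` name are assumptions for illustration, not existing AutoTrain APIs):

```python
from typing import Callable, Sequence, Tuple


def evaluate_on_test_set(
    predict: Callable[[object], int],
    test_set: Sequence[Tuple[object, int]],
) -> float:
    """Return accuracy of `predict` on (input, label) pairs.

    For option 1, `predict` would come from a classifier the agent
    trained from scratch; for option 2, from the best classifier seen
    during the training session. Either way, the test set must be a
    second hold-out, disjoint from the validation set the agent saw.
    """
    correct = sum(1 for x, y in test_set if predict(x) == y)
    return correct / len(test_set)
```

For option 3, the same function applies unchanged; only the source of `test_set` differs (it would be drawn from the related problem's data).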

This issue is open for discussions. Alternatives or extensions are welcome!