broadinstitute / AutoTrain

Using RL to solve overfitting in neural networks

Agent and Environment Integration #9

Closed ctrlnomad closed 3 years ago

ctrlnomad commented 3 years ago

DQN Agent Integration Into The AutoTrain Environment

TODOs:

- What experiments to run?

jccaicedo commented 3 years ago

You can remove any dependency we have on the Cart Pole example. The idea is to replace it with code that works for our problem.
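
For reference, a minimal sketch of what the replacement could look like, assuming the environment keeps the standard Gym `reset()`/`step()` interface so the DQN agent code doesn't need to change. `AutoTrainEnv`, its observation/action spaces, and the reward logic below are placeholders for illustration, not the actual AutoTrain implementation:

```python
# Sketch only: a Gym-style environment skeleton the DQN agent could target
# instead of CartPole. Spaces, rewards, and episode length are placeholders.
import gym
import numpy as np
from gym import spaces


class AutoTrainEnv(gym.Env):
    """Hypothetical environment where actions adjust an ongoing training run."""

    def __init__(self):
        super().__init__()
        # Placeholder spaces: a few training statistics as observations,
        # a discrete set of hyperparameter adjustments as actions.
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32
        )
        self.action_space = spaces.Discrete(3)
        self._t = 0

    def reset(self):
        self._t = 0
        return np.zeros(4, dtype=np.float32)

    def step(self, action):
        self._t += 1
        obs = np.random.randn(4).astype(np.float32)  # stand-in for training metrics
        reward = 0.0                                 # stand-in for validation improvement
        done = self._t >= 50                         # stand-in episode length
        return obs, reward, done, {}
```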

The first experiment would be to have the agent successfully interact with our environment. Let's define success here as running one experiment for 100 episodes with no errors, and let's measure execution time as a reference for future experiments.
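
A rough sketch of that first experiment, assuming the Gym-style `AutoTrainEnv` from the previous sketch and an agent exposing `act()` and `update()` methods (the `RandomAgent` below is only a stand-in so the loop runs; the real run would plug in the DQN agent):

```python
# Sketch only: run the agent for 100 episodes, treat an error-free run as
# success, and record wall-clock time as a baseline for later experiments.
import time


class RandomAgent:
    """Stand-in for the DQN agent; samples random actions so the loop executes."""

    def __init__(self, action_space):
        self.action_space = action_space

    def act(self, obs):
        return self.action_space.sample()

    def update(self, obs, action, reward, next_obs, done):
        pass  # a real DQN agent would store the transition and train here


env = AutoTrainEnv()
agent = RandomAgent(env.action_space)

start = time.time()
for episode in range(100):
    obs = env.reset()
    done = False
    episode_return = 0.0
    while not done:
        action = agent.act(obs)
        next_obs, reward, done, info = env.step(action)
        agent.update(obs, action, reward, next_obs, done)
        obs = next_obs
        episode_return += reward
    print(f"episode {episode}: return {episode_return:.2f}")

elapsed = time.time() - start
print(f"100 episodes completed without errors in {elapsed:.1f}s")
```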