philtabor / Deep-Q-Learning-Paper-To-Code

MIT License

Network is not learning when convolutional layers are applied. #14

Open DBaller opened 2 years ago

DBaller commented 2 years ago

Hey Phil! Thanks for the course. I'm really enjoying it so far.

I've implemented the first real Deep Q Network, and it is not learning. When I remove the convolutional layers and use only the fully connected layers, it is able to learn CartPole-v1; however, when I test it on Pong or Breakout with the convolutional layers, it does not work. I've gone through all of my code many times and can't find what I messed up. I've checked the wrapper, the network, the agent, and even the main loop. Could it possibly be my imports?
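(For anyone debugging a similar setup: one silent failure mode that only shows up once convolutional layers are added is a mismatch between the flattened conv output and the input size of the first fully connected layer. A quick sanity check, sketched in plain Python under the assumption of the standard Nature-DQN stack of 8x8/stride-4, 4x4/stride-2, 3x3/stride-1 convolutions on 84x84 inputs; the helper names here are illustrative, not from the course code:)

```python
def conv_out(size, kernel, stride, padding=0):
    # Standard conv output-size formula: floor((size - kernel + 2*pad) / stride) + 1
    return (size + 2 * padding - kernel) // stride + 1

def flattened_dims(h=84, w=84, channels=64):
    # Assumed Nature-DQN conv stack: 8x8 stride 4, 4x4 stride 2, 3x3 stride 1
    for k, s in [(8, 4), (4, 2), (3, 1)]:
        h, w = conv_out(h, k, s), conv_out(w, k, s)
    return channels * h * w

print(flattened_dims())  # 64 * 7 * 7 = 3136
```

If the number printed here doesn't match the first Linear layer's input features, the network may still run (with reshaping bugs) but never learn.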

I'm not sure of the best way to upload code; let me know if there is a better way. ExperienceReplay.txt GymWrapper.txt TrainAgent.txt DeepQNetwork.txt
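(A debugging aid that may help narrow this down: log basic statistics of the observation at each stage of the pipeline, i.e. raw env output, after the wrapper, after frame stacking, to localize where the values go wrong. A minimal helper, illustrative rather than taken from the course code:)

```python
import numpy as np

def obs_stats(obs, name="obs"):
    """Summarize an observation so all-zero frames are easy to spot."""
    a = np.asarray(obs, dtype=np.float32)
    return {
        "name": name,
        "shape": a.shape,
        "min": float(a.min()),
        "max": float(a.max()),
        "nonzero_frac": float(np.count_nonzero(a) / a.size),
    }

# A healthy Atari frame should have nonzero_frac well above 0
print(obs_stats(np.zeros((4, 84, 84)), "suspect"))
```

Calling this right after `env.reset()` and again after each wrapper makes it clear whether zeros come from the emulator itself or from the preprocessing.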

DBaller commented 2 years ago

Hey Phil,

I did some more experimenting and found that my observations are returning values of 0. It is not an issue for Pong, but it is for all the other Atari games I tried.

BreakoutObservationOutput BoxingObservationOutput PongObservationOutput

Any idea why this might be?
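(One place all-zero observations can originate, and this is only a guess without running the attached code, is a repeat-action/max-frame style wrapper whose internal frame buffer starts as zeros: if `reset()` never writes a real frame into the buffer, or `step()` indexes the buffer incorrectly, the pixel-wise max over zero frames stays zero. A toy sketch of that failure with hypothetical names, not the actual wrapper:)

```python
import numpy as np

class MaxFrameBuffer:
    """Toy stand-in for the two-slot frame buffer inside a repeat-and-max wrapper."""
    def __init__(self, shape=(84, 84)):
        self.buffer = np.zeros((2, *shape), dtype=np.float32)

    def reset(self, first_frame, seed_buffer=True):
        self.buffer[:] = 0
        if seed_buffer:  # forgetting this step leaves the buffer all zeros
            self.buffer[0] = first_frame
            self.buffer[1] = first_frame
        return self.buffer.max(axis=0)

frame = np.full((84, 84), 0.5, dtype=np.float32)
buggy = MaxFrameBuffer().reset(frame, seed_buffer=False)  # all-zero observation
fixed = MaxFrameBuffer().reset(frame, seed_buffer=True)   # matches the real frame
print(buggy.max(), fixed.max())
```

If the real wrapper looks like this, it's also worth double-checking the modulo indexing used to alternate buffer slots during the repeated steps.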

DBaller commented 2 years ago

The first two screenshots are of Breakout and Boxing; the last one (with the filled tensors) is from Pong.