muupan / dqn-in-the-caffe

An implementation of Deep Q-Network using Caffe
MIT License

no good results #6

Closed arashno closed 9 years ago

arashno commented 9 years ago

Hello, I am trying to train a network to play the pong ROM, but the results have not been good so far. I have trained 5 networks, and after 2 million iterations my best evaluation score is -12. I am using the default solver parameters. Another issue is that training does not stop after 2 million iterations (the max_iter param) and just keeps going. What is wrong with my experiments? Thanks in advance.

nakosung commented 9 years ago

Google released DeepMind's Atari-playing code, which is written in Lua/Torch.

arashno commented 9 years ago

Do you have a link for their code?

ghost commented 9 years ago

https://sites.google.com/a/deepmind.com/dqn/

muupan commented 9 years ago

@arashno Your results are so bad that I think there must be some problems. Which version of Caffe & ALE are you using? My Pong experiment used:

Unfortunately, training will not stop at the moment. You can easily modify dqn_main.cpp to make it stop by checking dqn.current_iteration(), though.
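
For reference, here is a minimal sketch of such a check. Only dqn.current_iteration() is taken from the comment above; the episode loop, PlayOneEpisode and CalculateEpsilon are placeholders for whatever dqn_main.cpp actually does, not the repo's exact code.

```cpp
// Hypothetical sketch: stop the training loop once the DQN's internal
// solver iteration reaches a chosen limit instead of looping forever.
const int kMaxIter = 2000000;  // e.g. the solver's max_iter

for (int episode = 0; ; ++episode) {
  const double epsilon = CalculateEpsilon(dqn.current_iteration());  // placeholder
  PlayOneEpisode(ale, dqn, epsilon, /*update=*/true);                // placeholder
  if (dqn.current_iteration() >= kMaxIter) {
    break;  // terminate training here
  }
}
```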

arashno commented 9 years ago

@muupan I am using your recommended version of Caffe and ALE 0.4.4. There seems to be a problem with ALE randomization: ALE reports that it is using a TIME-based random seed, but every time I evaluate my trained networks ALE plays the exact same game. How can I fix this? Thanks

arashno commented 9 years ago

@muupan I still have problems with the results. Could you please help me, and also provide your own results? Thanks

muupan commented 9 years ago

Sorry for the late reply. Could you give it a try with this trained model? https://www.dropbox.com/s/vrvpu69d1cr4a7d/ec2_pong_5m.caffemodel?dl=0

./dqn -gui -evaluate -model ec2_pong_5m.caffemodel -rom pong.bin

It's not the one used in the demo video, but it can also successfully play pong on my PC. If it works in your environment too, the problem is just in training.

arashno commented 9 years ago

@muupan Thanks for your reply. I tried your trained model. The score was +17 (21 to 4) for DQN, so it works. However, I ran the model several times and it played the exact same game every time; I expected some variation between runs. Is this normal for the code, or is something wrong with my environment? My second question is: what could be wrong with my training? My last question is about parameters: the Lua code (see the link in the comments above) uses slightly different training parameters. What parameters should I use to get good results? (Judging from the name of your trained model file, you used 5 million iterations, which differs from the default parameters. Are you using a different parameter set?) Thanks

muupan commented 9 years ago

There are at least two random number generators in the program: the one used in ALE and the one in DQN. The ALE random generator might not affect the results at all, because pong is probably a deterministic game. The seed of the DQN random generator is set to zero in the constructor of the DQN class, so it will choose actions in the same way between different runs if the network parameters are the same. You can change that behavior by modifying the code to use a different seed value.
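
For illustration, a minimal sketch of that change, assuming the DQN class keeps a std::mt19937 member seeded with a constant; the member name random_engine_ is an assumption, not necessarily the identifier used in the repo.

```cpp
#include <random>

// Sketch only: replace the constant seed with a non-deterministic one so
// that action selection varies between evaluation runs.
class DQN {
 public:
  DQN() {
    // Original behaviour (same actions on every run with the same weights):
    //   random_engine_.seed(0);
    // Non-deterministic alternative:
    std::random_device rd;
    random_engine_.seed(rd());
  }

 private:
  std::mt19937 random_engine_;
};
```

ALE's own time-based seed, mentioned earlier, likely makes no difference here precisely because pong is deterministic.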

I don't have any clear idea of what could be wrong with your training. If your five trained nets are completely identical, try other seed values; it's possible that you were just too unlucky.

My uploaded model used slightly different parameters:

net: "dqn.prototxt"
solver_type: ADADELTA
momentum: 0.95
base_lr: 0.2
lr_policy: "fixed"
stepsize: 10000000
max_iter: 10000000
display: 100
snapshot: 50000
snapshot_prefix: "dqn"

This solver doesn't decay the learning rate, and training lasts 10 million iterations. In my observation, using it eventually gives better results than the default solver.

There are many differences between their Lua code and mine, not only in parameter values but also in algorithmic details. For example, they use RMSProp for optimization, while mine uses AdaDelta.
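
For reference, the two update rules differ roughly like this (plain C++ renditions of the textbook formulas, not the actual Caffe solver code; the rho and eps defaults are just example values):

```cpp
#include <cmath>

// Per-parameter optimizer state.
struct State {
  double avg_sq_grad = 0.0;   // running E[g^2], used by both methods
  double avg_sq_delta = 0.0;  // running E[dx^2], used only by AdaDelta
};

// RMSProp: scale the gradient by a running RMS of past gradients.
// Requires an explicit learning rate lr.
double RmspropStep(State& s, double grad, double lr,
                   double rho = 0.95, double eps = 1e-8) {
  s.avg_sq_grad = rho * s.avg_sq_grad + (1.0 - rho) * grad * grad;
  return -lr * grad / std::sqrt(s.avg_sq_grad + eps);
}

// AdaDelta: like RMSProp, but the step size is derived from a running RMS
// of past updates, so there is no separate learning rate.
double AdadeltaStep(State& s, double grad,
                    double rho = 0.95, double eps = 1e-8) {
  s.avg_sq_grad = rho * s.avg_sq_grad + (1.0 - rho) * grad * grad;
  const double delta = -std::sqrt(s.avg_sq_delta + eps) /
                        std::sqrt(s.avg_sq_grad + eps) * grad;
  s.avg_sq_delta = rho * s.avg_sq_delta + (1.0 - rho) * delta * delta;
  return delta;
}
```

Vanilla AdaDelta has no learning rate of its own, so the base_lr: 0.2 in the solver above presumably acts as an extra scaling factor in Caffe's AdaDelta PR.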

nakosung commented 9 years ago

In my experience, RMSProp performs better.

muupan commented 9 years ago

@nakosung You're probably right given that the DeepMind people are still using RMSProp in their Nature paper. I chose AdaDelta only because it was available as a PR at that time.

nakosung commented 9 years ago

@muupan My RMSProp implementation is available at nakosung/caffe@1509647963e. It is a little bit weird because of the 'fluent pattern' I introduced.

arashno commented 9 years ago

@nakosung @muupan Thanks for your comments. Are there any other major differences between the Lua code and this code? I tried to use RMSPROP: I downloaded nakosung/caffe@1509647 (I mean the whole branch) and built it. The source files seem OK, but when I try to run it I get this error:

[libprotobuf ERROR google/protobuf/text_format.cc:245] Error parsing text-format caffe.SolverParameter: 3:1: Unknown enumeration value of "RMSPROP" for field "solver_type".

What's wrong with it?

arashno commented 9 years ago

Also, the proto file seems OK because it contains RMSPROP = 4; on line 150. I have cleaned and rebuilt both caffe and dqn-in-caffe, but the problem still exists.

nakosung commented 9 years ago

@arashno Maybe that changelist does not produce a good build. Could you try a newer changelist? Sorry for the inconvenience. (Or maybe your repo contains two different versions of the generated proto files.)

arashno commented 9 years ago

@nakosung Sorry, there was a problem with my rebuild. Now, when I use RMSPROP, the Q-values become very large! Thanks

nakosung commented 9 years ago

@arashno An exploding network is a common problem for DQN because the training is iterative. You can try various techniques to avoid it (like dropout).

arashno commented 9 years ago

@nakosung But I am just trying to replicate the original paper. The authors used RMSPROP and they didn't use dropout; they also didn't mention any other technique to avoid exploding values.

muupan commented 9 years ago

@arashno You need to select the learning rate and discount factor carefully for Q-learning; otherwise the Q-values can diverge.
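
To make that concrete: DQN's targets bootstrap on the network's own estimates, so the discount factor directly bounds how large healthy Q-values can get. Below is a rough sketch of the target computation, illustrative only and not the repo's API; the DQN paper clips rewards to [-1, 1], and with clipped rewards the optimal values are bounded by 1/(1-gamma).

```cpp
#include <algorithm>

// One-step TD target used by Q-learning / DQN (illustrative sketch).
// max_next_q stands for max_a' Q(s', a') estimated by the network.
double TdTarget(double reward, bool terminal, double max_next_q, double gamma) {
  // With rewards clipped to [-1, 1], |Q*| <= 1 / (1 - gamma),
  // e.g. 20 for gamma = 0.95. Targets growing far past this bound
  // usually indicate divergence, typically from a too-large learning rate.
  const double clipped = std::max(-1.0, std::min(1.0, reward));
  return terminal ? clipped : clipped + gamma * max_next_q;
}
```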

nakosung commented 9 years ago

@arashno My RMSProp implementation requires a tiny learning rate.

In my experience, training a DQN isn't as straightforward as other well-known problems. Because the process is iterative, a tiny difference can lead to divergence. If you want to reproduce the DeepMind paper's results, I would recommend trying their implementation.

muupan commented 9 years ago

@mohammad63 I have no experience with @nakosung's RMSProp implementation, so please don't ask me how to compile it.

arashno commented 9 years ago

@nakosung Did you try your RMSProp implementation with this DQN code? What parameters did you use? How good were the results? Does RMSProp work better for you than ADADELTA? Thanks

nakosung commented 9 years ago

@arashno I haven't tried it with muupan's DQN; I'm running my experiments with my own implementation. The parameters I used were roughly lr = 0.001 and momentum ≈ 0.6, but I don't remember what value I used for the RMSProp decay factor. In my experience, ADADELTA doesn't seem to be as good as RMSProp at keeping the network healthy (it is more sensitive to glitches).

arashno commented 9 years ago

@nakosung Is your DQN implementation publicly available?

nakosung commented 9 years ago

@arashno Unfortunately, no.

arashno commented 9 years ago

It seems that I was just too unlucky; the results are acceptable now.

toweln commented 8 years ago

I ran the above ec2_pong_5m.caffemodel and I always get -21. What could be the problem?