maxpumperla / deep_learning_and_the_game_of_go

Code and other material for the book "Deep Learning and the Game of Go"
https://www.manning.com/books/deep-learning-and-the-game-of-go

Chapter 5: can't reproduce "often end up with more than 95% accuracy in less than 10 epochs" #17

Open jeffhgs opened 5 years ago

jeffhgs commented 5 years ago

Chapter 5 reads:

But it’s noteworthy to observe that you often end up with more than 95% accuracy in less than 10 epochs.

However, I don't share this experience.

I have captured some scenarios in a branch.

https://github.com/jeffhgs/deep_learning_and_the_game_of_go/blob/test_nn_chapter_5/code/dlgo/nn/test_nn.py

You can run as:

(cd code/dlgo/nn && python test_nn.py)

The program downsamples the MNIST train and test data, trains, repeats, and then computes summary statistics.

The program uses the unittest framework for regression testing, and it also emits JSON documents between scenarios for offline analysis.

The program depends on the incomplete branch I made for #12.
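For reference, the harness structure described above can be sketched roughly as follows. This is a hypothetical sketch, not the actual test_nn.py; the function and parameter names here are my own invention:

```python
import json
import random

def summarize(accuracies):
    """Summary statistics over repeated training runs."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    var = sum((a - mean) ** 2 for a in accuracies) / n
    return {"runs": n, "mean": mean, "stddev": var ** 0.5}

def run_scenario(train_fn, data, fraction=0.1, repeats=3, seed=0):
    """Downsample the data, train, repeat, then emit a JSON summary."""
    rng = random.Random(seed)
    accs = []
    for _ in range(repeats):
        # Downsample to a random subset of the training data.
        sample = rng.sample(data, int(len(data) * fraction))
        accs.append(train_fn(sample))
    stats = summarize(accs)
    print(json.dumps(stats))  # JSON document for offline analysis
    return stats
```

The real script drives the book's network training in place of `train_fn` and wraps scenarios in unittest cases.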

The typical fitting accuracy I get for the full MNIST dataset is around 57%.

To make reproduction easier, I am holding back all inessential changes, including the performance fix related to #11.

I tried to run all the configurations listed in the test, but unfortunately my job ran past the 4 h timeout I assigned it, so I'll have to raise it and try again. I will attach the four instance-hours' worth of data I have.

This is my configuration:

host: AWS t2.large
OS: ubuntu
Python: 3.6.8
jeffhgs commented 5 years ago

test_nn_jeffhgs_2019-02-20_try1b.txt

jeffhgs commented 5 years ago

Data from a longer run. Note that this one also timed out, so it doesn't end cleanly, but you can clearly see the fit measurements. Each full-dataset run took just under 2 h for 10 epochs.

test_nn_jeffhgs_2019-02-20_try3.txt

maxpumperla commented 5 years ago

That's pretty strange. Last time I evaluated this, I certainly ended up with 95%-plus most of the time. Not that it really matters, since the lesson is that the algorithm learns, but I get your frustration. Sorry for the inconvenience.

My suspicion is that I used a different learning rate altogether for my experiments.
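For what it's worth, the sensitivity to the learning rate is easy to demonstrate in isolation. Below is a minimal sketch in plain NumPy (not the book's dlgo.nn code) that trains the same kind of sigmoid/MSE network on a toy two-blob problem with several learning rates; the architecture and numbers are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lr, epochs=200, seed=0):
    """Train a one-hidden-layer sigmoid net with plain SGD on two
    Gaussian blobs; return final training accuracy."""
    rng = np.random.default_rng(seed)
    X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
    y = np.array([0] * 100 + [1] * 100).reshape(-1, 1)
    W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)            # forward pass
        p = sigmoid(h @ W2 + b2)
        dp = (p - y) * p * (1 - p)          # backprop of MSE loss
        dh = (dp @ W2.T) * h * (1 - h)
        W2 -= lr * (h.T @ dp) / len(X); b2 -= lr * dp.mean(0)
        W1 -= lr * (X.T @ dh) / len(X); b1 -= lr * dh.mean(0)
    preds = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5
    return (preds == y).mean()

for lr in (0.01, 0.1, 1.0):
    print(f"lr={lr}: final accuracy={train(lr):.2f}")
```

With everything else held fixed, the final accuracy varies a lot across learning rates, which is consistent with the suspicion above.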

arisliang commented 3 years ago

For me, the network doesn't train most of the time.

[screenshot attached]

arisliang commented 3 years ago

A shallower, smaller network architecture, with the dense layers' initial weights scaled by 0.1, worked better for me.
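The effect of scaling the initial dense weights is easy to see in isolation: with 784 MNIST-sized inputs and standard-normal weights, the pre-activations are large, the sigmoid saturates near 0 or 1, and its gradient sigmoid(z) * (1 - sigmoid(z)) vanishes. A small NumPy sketch with illustrative numbers (not the book's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
x = rng.random(784)  # one fake MNIST-sized input in [0, 1)

for scale in (1.0, 0.1):
    # 1000 hidden units with weights drawn from N(0, scale^2).
    W = scale * rng.normal(size=(1000, 784))
    z = W @ x
    grads = sigmoid(z) * (1 - sigmoid(z))   # per-unit sigmoid gradient
    print(f"scale={scale}: mean |z|={np.abs(z).mean():.1f}, "
          f"mean sigmoid'(z)={grads.mean():.4f}")
```

With scale 1.0 the mean gradient is tiny (most units saturated), while scaling by 0.1 keeps the pre-activations small and the gradients usable, which would explain why the scaled initialization trains where the default does not.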

[screenshot attached]

Rock-New commented 1 week ago

I ran into the same problem. What can I do to solve it?