jeffhgs opened this issue 5 years ago
Data from a longer run. Note that this one also timed out, so it doesn't end cleanly, but you can clearly see the fit measurements. Each full-dataset run took just under 2 h for 10 epochs.
That's pretty strange. Last time I evaluated this, I ended up with 95%+ accuracy most of the time. Not that it matters much; the lesson is that the algorithm learns. But I understand your frustration, sorry for the inconvenience.
My suspicion is that I used a different learning rate altogether for my experiments.
For me, the network doesn't train most of the time.
A shallower and smaller network architecture, with the dense layers' initial weights scaled by 0.1, worked better for me.
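For what it's worth, here is a minimal sketch of the kind of initialization I mean, assuming a NumPy setup like the book's code; the function name `init_dense` and the layer shapes are my own illustration, not the repo's API:

```python
import numpy as np

def init_dense(n_in, n_out, scale=0.1, seed=0):
    # Standard-normal weights scaled down by `scale` (0.1 here).
    # Smaller initial weights keep early activations and gradients
    # modest, which in my runs made training start reliably.
    rng = np.random.default_rng(seed)
    W = scale * rng.standard_normal((n_in, n_out))
    b = np.zeros(n_out)
    return W, b

# Example: a single hidden layer for flattened 28x28 MNIST images.
W, b = init_dense(784, 64)
```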
I ran into the same problem. What can I do to solve it?
Chapter 5 reads:
However, I don't share this experience.
I have captured some scenarios in a branch.
You can run as:
The program downsamples the MNIST train and test data, trains, repeats, and then computes summary statistics.
The program uses the unittest framework for regression testing, and it also emits json documents between scenarios for offline analysis.
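The shape of that harness is roughly the following; the scenario name, accuracy value, and helper `emit_scenario` are illustrative stand-ins, not the branch's real identifiers:

```python
import json
import unittest

def emit_scenario(name, accuracy):
    # Emit one JSON document per scenario so results can be
    # collected from the log and analyzed offline.
    doc = json.dumps({"scenario": name, "accuracy": accuracy})
    print(doc)
    return doc

class TestFit(unittest.TestCase):
    def test_reaches_baseline(self):
        acc = 0.57  # stand-in for a real training run's accuracy
        emit_scenario("full_mnist", acc)
        self.assertGreater(acc, 0.5)

# Run the regression tests without killing the interpreter.
result = unittest.main(module=__name__, exit=False, argv=["prog"]).result
```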
The program depends on the incomplete branch I made for #12.
The typical fitting accuracy I get for the full MNIST dataset is around 57%.
To help with reproduction, I am holding back all inessential changes, including the fix for the performance-related #11.
I tried to run all the configurations listed in the test, but unfortunately my job ran past the 4 h timeout I assigned it, so I'll have to raise the limit and try again. I will attach the four instance-hours' worth of data I have.
This is my configuration: