intfloat closed this pull request 8 years ago
Sorry, my pull request still contains a bug: numpy.mean can't be applied directly if the last batch has a different batch_size.
I'll close this pull request, but the original issue remains.
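To make the numpy.mean problem concrete, here is a minimal sketch with made-up numbers (not PDNN's actual code): taking numpy.mean over per-batch means gives every batch equal weight, so the result is wrong whenever the last batch is smaller than the others.

```python
import numpy as np

# Hypothetical per-example losses: 10 values, batch_size 4
# -> batch sizes 4, 4, 2 (the last batch is smaller).
losses = np.arange(10, dtype=float)
batch_size = 4
batches = [losses[i:i + batch_size] for i in range(0, len(losses), batch_size)]

# Naive: numpy.mean over per-batch means over-weights the short last batch.
naive = np.mean([b.mean() for b in batches])  # 5.1666..., not the true mean

# Fix: weight each batch mean by its actual size.
sizes = np.array([len(b) for b in batches])
weighted = np.sum([b.mean() * len(b) for b in batches]) / sizes.sum()
assert weighted == losses.mean()  # 4.5, the correct overall mean
```

The weighted form is equivalent to one mean over all examples, so it stays correct no matter how the data splits into batches.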
Reducing the learning rate sometimes helps. For this I use an old version of PDNN: I change the learning rate in the file 'training_state.tmp' and continue. But it does not always help.
I didn't say anything about the learning rate.
I mean that this line of code contains a bug:
https://github.com/yajiemiao/pdnn/blob/master/learning/sgd.py#L71
which might lead to nan during training; see my comment https://github.com/yajiemiao/pdnn/issues/28#issuecomment-163991276
Yes, I know. I was just describing what I did as a workaround.
Please see my comment on issue #28; the boundary condition is not handled correctly.
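For what it's worth, a common shape for this kind of boundary bug (an illustrative guess, not the actual sgd.py code) is an off-by-one in the batch count: when the data divides evenly, an extra empty batch is produced, and the mean of an empty slice is nan, which then poisons the reported training error.

```python
import numpy as np

data = np.ones(8)
batch_size = 4

# Buggy count: unconditionally adding one batch gives 3 batches for
# 8 items, so the final slice is empty.
n_batches = len(data) // batch_size + 1
last = data[(n_batches - 1) * batch_size:]
# np.mean(last) would be nan here (mean of an empty slice).

# Guard: ceiling division yields the exact count for any length.
n_batches_ok = -(-len(data) // batch_size)  # 2 batches
```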
For issue #31, @VeliBaba didn't give any description of the data, so I doubt it is the same issue.