PacktPublishing / Hands-On-Deep-Learning-for-Games

Chapter_1_3.py output is different from the book #1

Open Cappinator opened 5 years ago

Cappinator commented 5 years ago

The output is:

epoch=0, lrate=0.100, error=5.000
epoch=1, lrate=0.100, error=5.000
epoch=2, lrate=0.100, error=5.000
epoch=3, lrate=0.100, error=5.000
epoch=4, lrate=0.100, error=5.000
epoch=5, lrate=0.100, error=5.000
epoch=6, lrate=0.100, error=5.000
epoch=7, lrate=0.100, error=5.000
epoch=8, lrate=0.100, error=5.000
epoch=9, lrate=0.100, error=5.000
[-4.999999999999998, -14.900000000000002, -15.100000000000007]

It does not converge at all, contrary to what the book describes, so there must be an error in the code.

vskabelkin commented 4 years ago

Same here. There are typos in the activation functions. For the first simple perceptron it should be: return 1.0 if activation >= 0.0 else 0.0. For ReLU it should be: return activation if activation > 0.0 else 0.0.
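
To make the fix concrete, here is a minimal sketch of both corrected activation functions (the names step and relu are illustrative; they may not match the names used in Chapter_1_3.py):

def step(activation):
    # Unit step for the simple perceptron: output 1.0 when the weighted
    # sum is non-negative, otherwise 0.0.
    return 1.0 if activation >= 0.0 else 0.0

def relu(activation):
    # ReLU: pass positive activations through unchanged, clamp the rest to 0.0.
    return activation if activation > 0.0 else 0.0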

Also, the third training input, [1.0,11.0,1.0], looks like either a typo or a deliberately planted outlier.

ondkeso commented 2 years ago

I also ran into this while doing the exercise today. Another way of implementing ReLU follows:

def ReLU(activation):
    # Clamp negative activations to zero; equivalent to the conditional form above.
    return max(activation, 0.0)
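
A quick sanity check (with arbitrary sample values, not taken from the chapter) that this max() form agrees with the conditional form from the earlier comment:

for x in (-2.0, 0.0, 3.5):
    assert ReLU(x) == (x if x > 0.0 else 0.0)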