karpathy / nn-zero-to-hero

Neural Networks: Zero to Hero
MIT License

Bug on `lectures/micrograd/micrograd_lecture_second_half_roughly.ipynb` #52

Open younes-io opened 4 months ago

younes-io commented 4 months ago

First of all, thank you @karpathy for this amazing repo.

I think the `exp` function in the second half of the first micrograd lecture has a bug:

def exp(self):
    x = self.data
    out = Value(math.exp(x), (self, ), 'exp')

    def _backward():
      self.grad += out.data * out.grad # NOTE: in the video I incorrectly used = instead of +=. Fixed here.
    out._backward = _backward

    return out

The gradient should be calculated like this: `self.grad += self.data * out.grad`

What do you think?

haduoken commented 3 months ago

The local gradient of `exp(x)` is `exp(x)` itself, which is exactly the value already stored in `out.data`. So `out.data` is correct here, not `self.data` @younes-io
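
A quick numerical check of this point (a standalone sketch, not the micrograd `Value` class): for `y = exp(x)` the derivative `dy/dx = exp(x) = y`, so the forward output itself is the local gradient that `_backward` should multiply by `out.grad`.

```python
import math

# For y = exp(x), the analytic local gradient is exp(x), i.e. the
# forward result itself -- this is what out.data holds in _backward.
x = 2.0
analytic = math.exp(x)

# Central-difference numerical derivative of exp at x
h = 1e-6
numeric = (math.exp(x + h) - math.exp(x - h)) / (2 * h)

# The two agree to high precision, confirming out.data is correct
print(abs(analytic - numeric) < 1e-4)
```

Using `self.data` instead would compute `x * out.grad`, which is the local gradient of `x**2 / 2`, not of `exp(x)`.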