karpathy / micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API

Adjusting parameters by sign and magnitude of gradient #65

Open kippsoftware opened 7 months ago

kippsoftware commented 7 months ago

https://github.com/karpathy/micrograd/blame/c911406e5ace8742e5841a7e0df113ecb5d54685/demo.ipynb#L271C13-L271C45

I really appreciate your videos! Such a gift to all of us.

When adjusting parameters after computing the loss, the example multiplies the step size by the gradient itself, i.e. by both its sign and its magnitude. With a steep gradient near a local minimum, the large gradient value will jump the parameter far past the desired solution. With a shallow gradient, the parameter will struggle to reach its local minimum within the given number of iterations.

Thus, I think the adjustment should be the step size times only the sign of the gradient.
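
For concreteness, here is a minimal sketch of the two update rules on a toy quadratic (a hypothetical illustration, not code from the demo; `step`, `w_mag`, and `w_sign` are made-up names):

```python
def sign(x):
    # -1, 0, or +1 depending on the sign of x
    return (x > 0) - (x < 0)

# Toy objective f(w) = w**2 with gradient 2*w and minimum at w = 0.
step = 1.5            # step size that is too large for this curvature
w_mag = w_sign = 1.0
for _ in range(10):
    w_mag -= step * 2 * w_mag          # demo-style: step * gradient
    w_sign -= step * sign(2 * w_sign)  # proposed: step * sign(gradient)

print(w_mag)   # 1024.0 -- the magnitude-scaled update overshoots and diverges
print(w_sign)  # 1.0    -- the sign-only update stays bounded within one step
```

Note that with a fixed step the sign-only update never settles exactly at the minimum either; it oscillates within one step of it.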

What are your thoughts?

tawej commented 7 months ago

I think the learning rate decay should help with this.
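
A minimal sketch of that idea, assuming micrograd's `MLP` and a linear decay schedule of the shape used in the demo notebook (the data and constants here are illustrative):

```python
from micrograd.nn import MLP

model = MLP(2, [4, 1])          # tiny hypothetical network
xs = [[0.5, -1.0], [1.5, 0.3]]  # made-up inputs
ys = [1.0, -1.0]                # made-up targets

for k in range(100):
    # forward pass and squared-error loss
    ypred = [model(x) for x in xs]
    loss = sum((yp - y) ** 2 for yp, y in zip(ypred, ys))

    # backward pass
    model.zero_grad()
    loss.backward()

    # linearly decaying learning rate: large early steps for fast progress,
    # small late steps so the parameters can settle near a minimum
    learning_rate = 1.0 - 0.9 * k / 100
    for p in model.parameters():
        p.data -= learning_rate * p.grad
```

As the step shrinks over iterations, even a parameter in a steep region takes progressively smaller jumps, which mitigates the overshoot concern raised above.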