karpathy / micrograd

A tiny scalar-valued autograd engine and a neural net library on top of it with PyTorch-like API
MIT License

Update __pow__() and __rpow__() methods #35

Closed · arijit-hub closed this 1 year ago

arijit-hub commented 1 year ago

What

I have added __pow__(self, other) and __rpow__(self, other) methods so that both the base and the exponent can be Value objects. Gradient also flows through the other operand, but only if other is itself a Value; if other is an int or float, the gradient flow through other is ignored.


Why

This makes __pow__(self, other) consistent with the other basic methods such as __add__(self, other) and __mul__(self, other), which accept either two Value objects, or one Value and any plain numeric type.


How

The implementation is straightforward and relies on the standard __pow__ and __rpow__ magic methods.
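
For illustration, here is a minimal sketch of how such __pow__ / __rpow__ methods could look inside micrograd's Value class; this is an assumption about the general shape of the change, not necessarily the exact code in this PR.

import math

# inside class Value (micrograd/engine.py)
def __pow__(self, other):
    # Wrap a plain number so the forward pass is uniform; gradient only
    # flows back into `other` when the caller actually passed a Value.
    other_is_value = isinstance(other, Value)
    other = other if other_is_value else Value(other)
    out = Value(self.data ** other.data, (self, other), '**')

    def _backward():
        # d(a**b)/da = b * a**(b - 1)
        self.grad += (other.data * self.data ** (other.data - 1)) * out.grad
        if other_is_value:
            # d(a**b)/db = a**b * ln(a); only meaningful for a > 0
            other.grad += (self.data ** other.data) * math.log(self.data) * out.grad
    out._backward = _backward

    return out

def __rpow__(self, other):
    # Handles `number ** Value`: wrap the numeric base and reuse __pow__.
    return Value(other) ** self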


Testing

I tested the following three scenarios; the code and the corresponding outputs are shown below.

First scenario

a = Value(2.0)
b = Value(3.0)
c = a ** b
c.backward()

print(f'In a ** b :')
print(f'a : {a}')
print(f'b : {b}')
print(f'c : {c}')

Output

In a ** b :
a : Value(data=2.0, grad=12.0)
b : Value(data=3.0, grad=5.5452)
c : Value(data=8.0, grad=1)
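
These match the analytic gradients: with c = a ** b, dc/da = b * a**(b - 1) = 3 * 2**2 = 12.0 and dc/db = (a**b) * ln(a) = 8 * ln(2) ≈ 5.5452, while the grad of c itself is 1 because backward() was called on it.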

Second scenario

a = 2.0
b = Value(3.0)
c = a ** b
c.backward()

print(f'In a ** b :')
print(f'a : {a}')
print(f'b : {b}')
print(f'c : {c}')

Output

In a ** b :
a : 2.0
b : Value(data=3.0, grad=5.5452)
c : Value(data=8.0, grad=1)
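
This mixed case goes through the __rpow__ path (numeric base, Value exponent): b receives the same gradient 8 * ln(2) ≈ 5.5452, while the plain float a is left untouched.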

Third scenario

a = Value(2.0)
b = 3.0
c = a ** b
c.backward()

print(f'In a ** b :')
print(f'a : {a}')
print(f'b : {b}')
print(f'c : {c}')

Output

In a ** b :
a : Value(data=2.0, grad=12.0)
b : 3.0
c : Value(data=8.0, grad=1)