karpathy / micrograd
A tiny scalar-valued autograd engine and a neural net library on top of it with a PyTorch-like API
MIT License · 10.5k stars · 1.52k forks
Issues (newest first)
#82 Seems to be an issue with return statements in `__radd__` and `__rmul__` inside engine.py (reachpalaniv, opened 15 hours ago, 0 comments)
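For context on #82 (and the related #76 and #39 below): micrograd implements the reflected operators by delegating to the forward ones, so a missing `return` there would make mixed-type arithmetic silently yield `None`. A quick check against the library itself:

```python
from micrograd.engine import Value

# engine.py implements the reflected ops roughly as:
#
#     def __radd__(self, other):  # other + self
#         return self + other
#
#     def __rmul__(self, other):  # other * self
#         return self * other
#
# If either `return` were missing, the expressions below would evaluate
# to None instead of a Value, which is the failure mode #82 describes.
a = Value(2.0)
print(2 + a)   # __radd__ path -> a Value with data=4.0, not None
print(3 * a)   # __rmul__ path -> a Value with data=6.0, not None
```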
#81 Another way of backpropagating gradients (ickma2311, opened 1 month ago, 2 comments)
#80 Create SECURITY.md (AmruthNavaneeth, opened 2 months ago, 0 comments)
#79 Readme à la cmacros (tomaklutfu, closed 3 months ago, 1 comment)
#78 Question/Idea: Automatic gradient clearing (AhmedThahir, closed 1 month ago, 2 comments)
#77 Is `y = x * x` an edge case? (Anindyadeep, closed 4 months ago, 0 comments)
#76 radd (ankannn10, opened 4 months ago, 1 comment)
#74 Added tanh nonlinearity function to engine.py (Naren219, opened 4 months ago, 1 comment)
#73 Readable text representation (erenkotar, opened 5 months ago, 0 comments)
#72 Incrementing the grad makes sense for addition, but I can't make sense of incrementing it for multiplication too; potential bug? (srik-git, opened 5 months ago, 1 comment)
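On #72 (and the closely related #77 above): the `+=` is not specific to addition or multiplication. Grads must accumulate because the same Value can appear as an input more than once, and `y = x * x` is exactly that case:

```python
from micrograd.engine import Value

x = Value(3.0)
y = x * x      # x is used twice, so x receives two gradient contributions
y.backward()

# dy/dx = 2x = 6: each side of the multiply contributes x.data * y.grad = 3.0,
# and the two contributions accumulate via `+=` rather than overwrite.
print(x.grad)  # 6.0
```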
#71 Fix inheritance bug (dhern023, opened 5 months ago, 0 comments)
#70 Edit `__pow__()` to calculate the derivative of both exponent and base (Yara97Mansour, opened 6 months ago, 0 comments)
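For #70 (see also #41, #36, and #35 further down): engine.py's `__pow__` only accepts int/float exponents and differentiates with respect to the base. Extending it to a full `Value ** Value` means applying both partials of f(a, b) = a**b. A hypothetical sketch, where `value_pow` is an illustrative name and the log term assumes a positive base:

```python
import math
from micrograd.engine import Value

def value_pow(self, other):
    # hypothetical extension: allow Value exponents, not just int/float
    other = other if isinstance(other, Value) else Value(other)
    out = Value(self.data ** other.data, (self, other), '**')

    def _backward():
        # d(a^b)/da = b * a^(b-1)
        self.grad += other.data * self.data ** (other.data - 1) * out.grad
        # d(a^b)/db = a^b * ln(a); only defined for a > 0
        other.grad += out.data * math.log(self.data) * out.grad
    out._backward = _backward
    return out

Value.__pow__ = value_pow  # sketch only; not how the shipped engine.py behaves
```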
#69 Type annotations lacking; maybe also add docstrings (BartDenOuden, opened 6 months ago, 0 comments)
#68 Backward member implementation question (strisys, opened 6 months ago, 1 comment)
#67 Topological sort bug (gordicaleksa, opened 7 months ago, 4 comments)
#66 insert something (jawad1git, closed 6 months ago, 2 comments)
#65 Adjusting parameters by sign and magnitude of gradient (kippsoftware, opened 7 months ago, 1 comment)
#64 Simplified backpropagation removing toposort (jcarlosroldan, closed 7 months ago, 2 comments)
#63 Fixing tiny issue (bit-soham, opened 8 months ago, 2 comments)
#62 Demonstrate how to add JIT using MLIR to micrograd (fzakaria, opened 8 months ago, 0 comments)
#61 Update README.md to add notebook paths (biranyucel, opened 8 months ago, 0 comments)
#60 Resetting the grad of weights and biases is not enough (zurtal, closed 10 months ago, 1 comment)
#59 Video lecture companion notebook (sanjeev3, closed 10 months ago, 0 comments)
#58 Fix: sub operation backward method (Vamsi995, closed 10 months ago, 0 comments)
#57 Regarding the gradient update of the `__sub__` operation (Vamsi995, closed 10 months ago, 1 comment)
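Context for #58 and #57: micrograd gives `__sub__` no backward method of its own; subtraction is composed from addition and negation, so its gradient falls out of `__add__` and `__mul__`. A quick demonstration:

```python
from micrograd.engine import Value

# engine.py composes subtraction roughly as:
#
#     def __neg__(self):          # -self
#         return self * -1
#
#     def __sub__(self, other):   # self - other
#         return self + (-other)
#
# so the subtrahend's -1 factor falls out of __mul__'s backward.
a, b = Value(5.0), Value(3.0)
c = a - b
c.backward()
print(a.grad, b.grad)  # a.grad = 1, b.grad = -1
```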
#56 Fix `__rsub__` and `__rtruediv__` (adubovkin, opened 11 months ago, 1 comment)
#55 Big speedup (panzi, opened 1 year ago, 1 comment)
#54 Ensure backward() idempotence (eggsyntax, closed 1 year ago, 1 comment)
#53 Ensure backward() is idempotent (eggsyntax, closed 1 year ago, 4 comments)
#52 Another MiniGrad with the RAdam optimizer (haixu-qin, opened 1 year ago, 0 comments)
#51 Add CI (tekknolagi, closed 1 year ago, 1 comment)
#50 Vectorized modification with GPU support (rohit-krish, opened 1 year ago, 0 comments)
#49 Add missing import in README (tekknolagi, opened 1 year ago, 0 comments)
#48 Improve readability (preveen-stack, opened 1 year ago, 0 comments)
#47 Transformer nn methods (MihailoMilenkovic, closed 1 year ago, 1 comment)
#46 (nit) Correct function indentation (dmitris, closed 1 year ago, 1 comment)
#45 Added functional, simple math (jcallaham, closed 1 year ago, 1 comment)
#44 Vectorized implementation with PyTorch flavor (conscell, opened 1 year ago, 0 comments)
#43 housekeeping/gitignore (NickDrew, opened 1 year ago, 0 comments)
#42 Fixed negative ** Value (Weikang01, closed 1 year ago, 0 comments)
#41 Add support for Value ** Value (Weikang01, opened 1 year ago, 4 comments)
#40 Rename engine.py to value.py (PopovMP, opened 1 year ago, 0 comments)
#39 Avoid unnecessary `__radd__` and `__rmul__` calls (conscell, opened 1 year ago, 0 comments)
#38 gitignore pycache (NickDrew, closed 1 year ago, 0 comments)
#37 A tensor version of micrograd inspired by this work (hkxIron, opened 1 year ago, 0 comments)
#36 Update pow and rpow (arijit-hub, opened 1 year ago, 0 comments)
#35 Update `__pow__()` and `__rpow__()` methods (arijit-hub, closed 1 year ago, 0 comments)
#34 Simplify backpropagation (remove unnecessary sequence traversal) (tf318, opened 1 year ago, 1 comment)
#33 Dev (abahnasy, closed 1 year ago, 0 comments)
#32 Zero_grad only zeros the weight and bias nodes, not the nodes for addition and multiplication (Tamulur, closed 1 year ago, 1 comment)