warner-benjamin / optimi
Fast, Modern, Memory Efficient, and Low Precision PyTorch Optimizers
https://optimi.benjaminwarner.dev
MIT License · 53 stars · 2 forks
Issues
#7 Casting Existing FP32/FP16 Model Weights to BF16 + Kahan Summation — opened by zaptrem, 1 month ago (0 comments)
#6 [Feature Request] gradient-descent-the-ultimate-optimizer — opened by sdbds, 2 months ago (0 comments)
#5 Increasing VRAM Consumption When Using Accelerator with the Gradient Release Hook — opened by sdbds, 2 months ago (2 comments)
#4 Documentation Error — by sdbds, closed 2 months ago (1 comment)
#3 Add Optimizer Accumulation — by warner-benjamin, closed 6 months ago (0 comments)
#2 Add Support for Gradient Release — by warner-benjamin, closed 6 months ago (0 comments)
#1 Version 0.1.0 Is Gone from PyPI — by mensfeld, closed 9 months ago (2 comments)