lessw2020 / Ranger21
Ranger deep learning optimizer rewrite to use newest components
Apache License 2.0 · 323 stars · 46 forks

Issues
#51 · Enable / Disable showing settings · okbalefthanded · opened 9 months ago · 0 comments
#50 · link to https://github.com/kozistr/pytorch_optimizer · wassname · opened 1 year ago · 0 comments
#49 · Error when using DDP · smartbarbarian · opened 1 year ago · 1 comment
#48 · learning rate scheduler · shyhyawJou · opened 1 year ago · 0 comments
#47 · Recommended settings for transformers? · OhadRubin · opened 2 years ago · 1 comment
#46 · hi,please help me · wudizuixiaosa · closed 2 years ago · 0 comments
#45 · Nice name of your project) · Ranger21 · opened 2 years ago · 0 comments
#44 · Use collections.abc.Callable · JackKelly · closed 2 years ago · 1 comment
#43 · AttributeError: module 'collections' has no attribute 'Callable' · JackKelly · closed 2 years ago · 0 comments
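Issues #44 and #43 both stem from the removal of the `collections.Callable` alias in Python 3.10: code must import the ABC from `collections.abc` instead. A minimal standalone sketch of the compatible pattern, not taken from the Ranger21 source:

```python
# `collections.Callable` was removed in Python 3.10;
# importing from `collections.abc` works on both older and newer interpreters.
from collections.abc import Callable

def apply_fn(fn: Callable[[float], float], x: float) -> float:
    # isinstance checks should likewise target collections.abc.Callable,
    # not collections.Callable.
    if not isinstance(fn, Callable):
        raise TypeError("fn must be callable")
    return fn(x)

print(apply_fn(lambda v: v * 2, 3.0))  # prints 6.0
```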
#42 · What it the best hyper-parameter setting? · NoOneUST · opened 2 years ago · 2 comments
#41 · Gradient normalization lowers the maximum learning rate that can converge. · Handagot · opened 2 years ago · 0 comments
#40 · Can ranger be used for NLP transformers? · LifeIsStrange · closed 2 years ago · 2 comments
#39 · Not support pytorch _1.3.1 · huangnengCSU · opened 2 years ago · 0 comments
#38 · Require an documentation · huangnengCSU · opened 2 years ago · 2 comments
#36 · decouple the lr scheduler and optimizer? · hiyyg · opened 3 years ago · 5 comments
#35 · sample usage in fastai · nikky4D · opened 3 years ago · 1 comment
#34 · local variable 'neg_grad_ma' referenced before assignment when momentum_type is not "pnm" · lechmazur · closed 3 years ago · 0 comments
#33 · removed torch.linalg as LA mention · pranshurastogi29 · closed 3 years ago · 0 comments
#32 · allow parallel, division by zero error · ryancinsight · closed 1 year ago · 0 comments
#31 · Allow parallel patch based training · ryancinsight · closed 3 years ago · 3 comments
#30 · RuntimeError: hit nan for variance_normalized · gcp · opened 3 years ago · 7 comments
#29 · About gradient normalization · julightzhong10 · opened 3 years ago · 0 comments
#28 · error in warmdown - lr below min lr. current lr = 2.999999999999997e-0518 [07:50<00:04, 4.66s/it] auto handling but please report issue! · neuronflow · opened 3 years ago · 2 comments
#27 · Performance of ResNet50 on ImageNet · juntang-zhuang · opened 3 years ago · 1 comment
#26 · Update README.md · nestordemeure · closed 3 years ago · 1 comment
#25 · SAM paper · ryanstout · opened 3 years ago · 0 comments
#24 · File "/home/.../site-packages/ranger21/ranger21.py", line 680, in step raise RuntimeError("hit nan for variance_normalized") · neuronflow · closed 3 years ago · 1 comment
#23 · Activate/deactivate softplus for MADGRAD & choosing beta softplus · TheZothen · closed 3 years ago · 1 comment
#22 · Some fixes when using MADGRAD (Softplus and Stable weight decay) · TheZothen · closed 3 years ago · 2 comments
#21 · comparing ranger21 to SAM optimizer · nikky4D · closed 3 years ago · 2 comments
#20 · error when training with batch_size = 1 · neuronflow · opened 3 years ago · 0 comments
#19 · hit nan for variance_normalized · jimmiebtlr · closed 3 years ago · 7 comments
#18 · resuming training with ranger21? · neuronflow · opened 3 years ago · 3 comments
#17 · Multi GPU problem · zsgj-Xxx · opened 3 years ago · 4 comments
#16 · lr below min_lr check too aggressive · kai-tub · opened 3 years ago · 2 comments
#13 · added reference to Ranger21 paper · nestordemeure · closed 3 years ago · 1 comment
#12 · optimizer = Ranger21(params=model.parameters(), lr=learning_rate) File "/mnt/Drive1/florian/msblob/Ranger21/ranger21/ranger21.py", line 179, in __init__ self.total_iterations = num_epochs * num_batches_per_epoch TypeError: unsupported operand type(s) for *: 'NoneType' and 'NoneType' · neuronflow · opened 3 years ago · 6 comments
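The TypeError in #12 occurs because Ranger21 derives its warmup/warmdown schedule from the total number of iterations, which the constructor computes as `num_epochs * num_batches_per_epoch`; leaving both at their defaults yields the `NoneType * NoneType` error in the traceback. A minimal sketch of an instantiation that supplies both values, assuming the package exports Ranger21 at the top level (consistent with the site-packages path in #24); the toy model, loader, and learning rate are placeholders, not recommended settings:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from ranger21 import Ranger21

# toy model and data, stand-ins for a real training setup
model = nn.Linear(10, 2)
dataset = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(dataset, batch_size=16)

num_epochs = 5
optimizer = Ranger21(
    model.parameters(),
    lr=1e-3,                            # illustrative value only
    num_epochs=num_epochs,              # required so the schedule can be sized
    num_batches_per_epoch=len(loader),  # total_iterations = num_epochs * num_batches_per_epoch
)
```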
#11 · adaptive gclipping: unable to process len of 5 - currently must be <= 4 · HungYu-Wu · closed 3 years ago · 6 comments
#10 · Adaptive Gradient Clipping · benihime91 · closed 3 years ago · 3 comments
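Issues #11 and #10 above (and #1 below) concern adaptive gradient clipping (AGC), which rescales a gradient wherever its unit-wise norm exceeds a fixed fraction of the matching parameter norm; the dimensionality branch also shows why #11 reports "unable to process len of 5", since tensors with more than four dimensions fall outside the handled cases. A minimal standalone sketch of the general technique, with function names and default values chosen for illustration rather than copied from Ranger21:

```python
import torch

def unitwise_norm(x: torch.Tensor) -> torch.Tensor:
    """L2 norm per output unit: whole tensor for 0-D/1-D params,
    per-row for 2-D weights, per-filter for 3-D/4-D conv weights."""
    if x.ndim <= 1:
        return x.norm()
    if x.ndim <= 4:
        return x.norm(dim=tuple(range(1, x.ndim)), keepdim=True)
    # 5-D and larger tensors are not handled, the situation reported in #11
    raise ValueError(f"unable to process len of {x.ndim} - currently must be <= 4")

def clip_gradient(param: torch.Tensor, grad: torch.Tensor,
                  clipping: float = 1e-2, eps: float = 1e-3) -> torch.Tensor:
    """Rescale grad wherever its unit-wise norm exceeds clipping * ||param||."""
    p_norm = unitwise_norm(param).clamp(min=eps)   # floor tiny parameter norms
    g_norm = unitwise_norm(grad)
    max_norm = p_norm * clipping
    scaled = grad * (max_norm / g_norm.clamp(min=1e-6))
    return torch.where(g_norm > max_norm, scaled, grad)

# usage: clip a layer's gradient before the optimizer step
w = torch.randn(8, 4, requires_grad=True)
w.grad = torch.randn(8, 4) * 10
w.grad = clip_gradient(w.detach(), w.grad)
```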
#9 · Changes in lr · zsgj-Xxx · closed 3 years ago · 7 comments
#8 · torch.grad removed in PyTorch 1.8.1? · jszym · closed 3 years ago · 2 comments
#5 · Augmentation requests · LifeIsStrange · closed 3 years ago · 2 comments
#3 · Example · johnyquest7 · closed 3 years ago · 1 comment
#2 · Package and repo reorg · BrianPugh · closed 3 years ago · 1 comment
#1 · Adaptive Gradient Clipping · kayuksel · closed 3 years ago · 2 comments