facebookresearch/schedule_free: Schedule-Free Optimization in PyTorch
Apache License 2.0 · 1.9k stars · 64 forks
Issues
#51 · Is there any "adamw" style wrap_schedulefree? · And233 · opened 4 days ago · 0 comments
#49 · Modernization · adefazio · closed 2 weeks ago · 0 comments
#48 · How to place the optimizer in eval mode when storing checkpoints? · lyndonlauder · closed 3 weeks ago · 1 comment
#47 · LR and weight decay joint tuning · EIFY · closed 2 weeks ago · 4 comments
#46 · [FeatureRequest]Add AdEMAMixScheduleFree · sdbds · opened 1 month ago · 3 comments
#45 · Decoupled Weight Decay? · madman404 · closed 2 months ago · 1 comment
#44 · batchnorm update · gattia · closed 2 months ago · 1 comment
#43 · Potentially breaking DDP training · irowberryFS · closed 2 months ago · 4 comments
#42 · Update tensor device on train and eval function · zhulinchng · closed 3 months ago · 1 comment
#41 · Update tensor device on train and eval function · zhulinchng · closed 3 months ago · 2 comments
#40 · 8bit optimizers · winglian · closed 2 months ago · 1 comment
#39 · Warmup when restarting · Fr0do · closed 4 months ago · 1 comment
#38 · Where should beta2 bias correction be incorporated? · jondeuce · closed 2 weeks ago · 2 comments
#37 · How to combine this with Mechanic? · Yura52 · closed 2 weeks ago · 2 comments
#36 · Wrapper Version · adefazio · closed 4 months ago · 7 comments
#35 · Add type hints · stecklin · closed 4 months ago · 1 comment
#34 · Missing type hints (for CLI usage) · stecklin · closed 4 months ago · 2 comments
#33 · How do I retrive current LR from the optimizer now? · pkpro · closed 4 months ago · 1 comment
#32 · When saving/loading optimizer using state_dict(), a mismatch between state["z"] and param (self.state[p]) will occur. · poutyface · closed 5 months ago · 4 comments
#31 · Guidelines for Learning Rates · MarkWijkhuizen · closed 4 months ago · 2 comments
#30 · Doesnot work with torch.compile · yuanzhi-zhu · closed 2 weeks ago · 2 comments
#29 · Reference AdamW? · neosr-project · closed 5 months ago · 1 comment
#28 · Foreach detection · adefazio · closed 5 months ago · 0 comments
#27 · Regular AdamW scores just as well on MNIST? + Feature request... · drscotthawley · closed 5 months ago · 2 comments
#26 · AttributeError: module 'torch' has no attribute '_foreach_lerp_' · patriksimurka · closed 5 months ago · 1 comment
#25 · Paper release · adefazio · closed 5 months ago · 0 comments
#24 · Compatibility with MuP (Maximal Update Parametrization) · simonguozirui · closed 3 months ago · 2 comments
#23 · Typo in paper? · ad8e · closed 5 months ago · 3 comments
#22 · Weights before saving are different from loaded weights while training with AdamWScheduleFree · talrub · closed 6 months ago · 4 comments
#21 · Can this be tested in kohya_ss · rafstahelin · closed 5 months ago · 3 comments
#20 · Fixing typos here and there · konstmish · closed 7 months ago · 2 comments
#19 · Foreach fix · adefazio · closed 7 months ago · 0 comments
#18 · bug going from 1.2.1 to 1.2.2 · MarcoForte · closed 7 months ago · 2 comments
#17 · Fix typo in SGDScheduleFree docstring · boisgera · closed 7 months ago · 2 comments
#16 · Retrieve LR · bhack · closed 7 months ago · 3 comments
#15 · Does saved parameters need adjusted? · FykAikawa · closed 7 months ago · 4 comments
#14 · ERROR: Could not build wheels for schedulefree · talrub · closed 2 months ago · 1 comment
#13 · Delete schedulefree/__pycache__ directory · bryant1410 · closed 7 months ago · 1 comment
#12 · Non-closure Prodigy? · madman404 · opened 7 months ago · 4 comments
#11 · batchnorm and intermittent evals · sniklaus · closed 7 months ago · 4 comments
#10 · Add foreach to optimizers · drhead · closed 7 months ago · 5 comments
#9 · Minimal harness for running CIFAR10 experiments · nelaturuharsha · opened 7 months ago · 4 comments
#8 · Usage with Torch's Autocast and GradScaler · fhahlbohm · closed 6 months ago · 7 comments
#7 · Update README.md · eltociear · closed 7 months ago · 1 comment
#6 · Tracking effective learning rate · drhead · closed 7 months ago · 2 comments
#5 · [BUG]ZeroDivisionError: float division by zero · sdbds · closed 7 months ago · 6 comments
#4 · fix small typo, .val() -> .eval() · mrdbourke · closed 7 months ago · 3 comments
#3 · DeepSpeed support? · catid · closed 7 months ago · 2 comments
#2 · nit: grammar · BasedLukas · closed 7 months ago · 1 comment
#1 · Add `examples` folder, `mnist` example · tfaod · closed 7 months ago · 0 comments
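Several of the issues above (#48, #42, #41, #22, #11, #4) concern switching the optimizer between train and eval mode around evaluation and checkpointing. Below is a minimal sketch of that pattern, assuming the `schedulefree` package's `AdamWScheduleFree` optimizer; the model, data, and hyperparameters are purely illustrative.

```python
import torch
import schedulefree

# Illustrative model and dummy data standing in for a real dataloader.
model = torch.nn.Linear(10, 2)
optimizer = schedulefree.AdamWScheduleFree(model.parameters(), lr=1e-3)
x = torch.randn(32, 10)
target = torch.randint(0, 2, (32,))

# Training: the optimizer must be put in train mode before taking steps.
model.train()
optimizer.train()
for _ in range(10):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), target)
    loss.backward()
    optimizer.step()

# Before evaluating or saving a checkpoint, switch the optimizer to eval mode
# so the model parameters hold the averaged weights used for inference.
model.eval()
optimizer.eval()
torch.save({"model": model.state_dict(), "optimizer": optimizer.state_dict()}, "ckpt.pt")

# Switch back to train mode before resuming training.
model.train()
optimizer.train()
```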