-
Version: TensorFlow 1.12 (stable).
When I run the code, the following error appears:
> x and y must have the same dtype, got tf.float32 != tf.resource
in "AdaBound.py", line 132, in _resource_apply_dense
…
-
```
# Applies bounds on actual learning rate
# lr_scheduler cannot affect final_lr, this is a workaround to apply lr decay
final_lr = group['final_lr'] * group['lr'] / base_lr
```
However, lr_sch…
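For context, a minimal sketch of how that rescaled `final_lr` typically feeds into the AdaBound-style bound clipping (the names `gamma`, `step`, `exp_avg`, and `denom` are illustrative, not taken from the snippet above):

```
import torch

def bounded_step(lr, base_lr, final_lr_setting, gamma, step, exp_avg, denom):
    # Rescale final_lr by the same factor the scheduler applied to lr,
    # so the clipping bounds decay together with the learning rate.
    final_lr = final_lr_setting * lr / base_lr
    lower_bound = final_lr * (1 - 1 / (gamma * step + 1))
    upper_bound = final_lr * (1 + 1 / (gamma * step))
    # Clamp the per-parameter step size into [lower_bound, upper_bound],
    # then scale by the first-moment estimate.
    step_size = torch.full_like(denom, lr)
    return step_size.div_(denom).clamp_(lower_bound, upper_bound).mul_(exp_avg)
```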
-
Just a small-ish roadmap of the different optimizers and losses we can look at adding:
Optimizers:
- [x] Adam
- [x] Adagrad
- [x] SGD
- [x] RMSprop
- [x] AdaDelta
- [x] ~~Riemann SGD~~ Removed b…
-
Many thanks for your work!
![image](https://user-images.githubusercontent.com/16469472/53220243-752b1080-369e-11e9-93dc-ca3ee0018cbb.png)
The orange line is the baseline using Adam as the opt…
-
Looks promising https://arxiv.org/abs/2208.06677
-
I will post the concrete results here, if I don't forget.
-
The provided new optimizer is sensitive to tiny batch sizes (
-
Could you please implement a Keras version?
-
I don't see any reason why this code would not run on a lower version of Python.
Could you explain why there is such a requirement?
-
## 🚀 Feature
I would like to suggest new stochastic optimizer additions to PyTorch.
### For non-convex loss functions
It is known that adaptive stochastic optimizers like Adam, Adagrad, RMSprop c…
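For anyone picking this up, here is a minimal sketch of the interface such an addition would plug into (the class name `ProposedOptimizer` and the plain SGD-style update inside are placeholders, not a specific method from this request):

```
import torch
from torch.optim import Optimizer

class ProposedOptimizer(Optimizer):
    # Skeleton of the torch.optim.Optimizer interface a new optimizer implements.
    def __init__(self, params, lr=1e-3):
        defaults = dict(lr=lr)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            for p in group['params']:
                if p.grad is None:
                    continue
                # Placeholder update rule; a real proposal would keep its
                # per-parameter statistics (moments, bounds, etc.) in self.state[p].
                p.add_(p.grad, alpha=-group['lr'])
        return loss
```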