-
### Description
Investigate the reason for changing the gradient color from RGB to RGBA, because opacity is no longer working.
### Step-by-step reproduction instructions
_No response_
### Screenshot…
-
Great work! While taking a look at your code and your example, I saw no mention of mixed precision. Does the current implementation of SAM and GSAM support `torch.cuda.amp` training?
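For context, a rough sketch of how an amp-style loop interacts with a two-pass optimizer. The `first_step`/`second_step` names are assumptions borrowed from common SAM implementations, not from this repository; the point is that `GradScaler.step()` expects a single `optimizer.step()`, so a SAM two-pass update needs manual unscaling:

```python
import torch
from torch import nn

# Minimal sketch, assuming a SAM-style optimizer that exposes
# first_step()/second_step() (hypothetical names, as in common SAM ports).
model = nn.Linear(4, 2)
base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)  # no-op on CPU

x = torch.randn(8, 4)
y = torch.randint(0, 2, (8,))
loss_fn = nn.CrossEntropyLoss()

device_type = "cuda" if use_cuda else "cpu"
with torch.autocast(device_type=device_type, enabled=use_cuda):
    loss = loss_fn(model(x), y)

# First pass: scaled backward, then unscale before any SAM perturbation,
# since the perturbation magnitude must be computed on unscaled gradients.
scaler.scale(loss).backward()
scaler.unscale_(base_opt)
# optimizer.first_step(zero_grad=True)   # hypothetical SAM call
# ... second forward/backward at the perturbed point ...
# optimizer.second_step(zero_grad=True)  # hypothetical SAM call
scaler.step(base_opt)  # plain step here; a real SAM loop needs custom handling
scaler.update()
```

This only illustrates the scaler mechanics; whether SAM/GSAM as implemented here is numerically safe under fp16 is a separate question.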
-
## 🚀 Feature
A callback hook for `on_after_optimizer_step`.
### Motivation
There's a callback hook for `on_before_optimizer_step`, but not for `on_after_optimizer_step`.
That would be useful…
-
## 🐛 Bug
Hi,
While working on some CTC extensions, I noticed that torch's CTCLoss was computing an incorrect gradient, at least on CPU (I have not tested on GPU yet). I observed this problem …
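One way to make a report like this reproducible is a finite-difference check of the analytic gradient. The shapes and `reduction="sum"` choice below are illustrative, not taken from the original report:

```python
import torch

# Probe CTCLoss's gradient against finite differences with gradcheck.
# Double precision is required for gradcheck to be meaningful.
T, N, C, S = 8, 2, 5, 3  # input length, batch, classes, target length
log_probs = (
    torch.randn(T, N, C, dtype=torch.double)
    .log_softmax(2)
    .detach()
    .requires_grad_()
)
targets = torch.randint(1, C, (N, S), dtype=torch.long)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

ctc = torch.nn.CTCLoss(reduction="sum")
# raise_exception=False so the result is a plain bool report
ok = torch.autograd.gradcheck(
    lambda lp: ctc(lp, targets, input_lengths, target_lengths),
    (log_probs,),
    raise_exception=False,
)
print(ok)
```

A `False` result here (on the configuration where the problem appears) would be a compact reproduction for the issue.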
-
### Use case
Currently, one can implement animated (monochrome/colored) icons only via variable fonts, using something like:
1. Download https://github.com/Typogram/Anicons/raw/master/webfonts/Anic…
-
A fantastic repository, thank you.
I'm just getting started with it, but I thought I'd reach out and ask whether you'd accept a pull request that trains for human perceptual quality rather than MAE …
-
# 🐛 Bug
After updating to version 1.5.0, I noticed large differences between results obtained by running the same code under v. 1.4.1 and under v. 1.5.0. This happens only in the exact multit…
-
# Description
For my thesis project, I'm applying a novel Polyak-averaging approach to various reinforcement learning algorithms; the approach uses natural-gradient descent in order to estimate the…
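For readers unfamiliar with the averaging step, a minimal sketch of a generic Polyak (exponential moving) average of parameters; the `tau` value and flat-list parameter layout are illustrative, not the thesis code:

```python
# Polyak averaging: avg <- (1 - tau) * avg + tau * current,
# applied element-wise over a flat list of parameters.
def polyak_update(avg_params, params, tau=0.01):
    """In-place exponential moving average of params into avg_params."""
    for i, (a, p) in enumerate(zip(avg_params, params)):
        avg_params[i] = (1.0 - tau) * a + tau * p
    return avg_params

avg = [0.0, 0.0]
for step in range(3):
    current = [1.0, 2.0]  # stand-in for the current learned parameters
    polyak_update(avg, current, tau=0.5)
print(avg)  # -> [0.875, 1.75], converging toward [1.0, 2.0]
```

In RL the averaged copy typically serves as a slowly moving target network; the natural-gradient estimate described above would replace `current` with a preconditioned update.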
-
GLODAPv2.2022 (https://www.glodap.info/index.php/merged-and-adjusted-data-product-v2-2022/ or gridded product: https://odv.awi.de/data/ocean/glodap-gridded-data/) for comparison against model years 19…
-
Despite the preconditioner matrix, KSGFS converges more slowly than SGLD.
The optimizer is in ksgfs/optim.py; perhaps you could also take a look at the update?
Thank you very much :)
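For comparison against the preconditioned update, a vanilla SGLD step on a toy quadratic loss; the learning rate, loss, and step count are illustrative and this is not the code in ksgfs/optim.py:

```python
import math
import random

# Vanilla SGLD: theta <- theta - (lr/2) * grad + N(0, lr) noise.
def sgld_step(theta, grad, lr, rng):
    return theta - 0.5 * lr * grad + rng.gauss(0.0, math.sqrt(lr))

rng = random.Random(0)
theta = 5.0  # start far from the mode at 0
for _ in range(1000):
    grad = 2.0 * theta  # gradient of the toy loss theta**2
    theta = sgld_step(theta, grad, lr=0.01, rng=rng)
# theta should now fluctuate near 0 rather than converge to a point
```

A preconditioned variant rescales both the gradient term and the injected noise by the preconditioner; if KSGFS scales only one of the two, that mismatch alone can slow mixing, which may be worth checking in the update.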