-
Run NVP_4 for 90 epochs with an lsdim of 500.
-
We should have most or all of the widely used optimizers in the ML literature:
- [ ] [Adadelta](https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html#torch.optim.Adadelta)
- [ ] [Ada…
-
### Feature request
I would like to propose the addition of a new learning rate scheduler that combines MultiStepLR with a warmup phase. Currently, the Transformers library does not include a sched…
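Until such a scheduler lands, the combination can already be sketched with PyTorch's `SequentialLR`, chaining a `LinearLR` warmup into `MultiStepLR` (the warmup length and milestones below are illustrative, not a proposed default):

```python
import torch
from torch.optim.lr_scheduler import LinearLR, MultiStepLR, SequentialLR

model = torch.nn.Linear(10, 2)  # placeholder model, just for illustration
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Warm up linearly from 1% of the base LR over the first 500 steps,
# then hand over to MultiStepLR with (illustrative) decay milestones.
warmup = LinearLR(optimizer, start_factor=0.01, total_iters=500)
decay = MultiStepLR(optimizer, milestones=[30_000, 60_000], gamma=0.1)
scheduler = SequentialLR(optimizer, schedulers=[warmup, decay], milestones=[500])

for step in range(1_000):  # training-loop stub
    optimizer.step()       # backward pass omitted for brevity
    scheduler.step()
```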
-
### 🚀 Feature
Hello all,
I would like to add AdaBound to the list of existing optimizers in the torch.optim module.
Here is the link to the paper - https://openreview.net/pdf?id=Bkg3g2R9FX
…
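For reference, here is a minimal single-tensor sketch of the AdaBound update rule from the paper: Adam-style moment estimates, but with the per-element step size clamped between bounds that converge toward a final SGD-like learning rate. This is an illustration of the rule, not the authors' implementation, and the hyperparameter defaults are only the paper's examples:

```python
import math
import torch

def adabound_step(p, grad, state, lr=1e-3, final_lr=0.1,
                  betas=(0.9, 0.999), gamma=1e-3, eps=1e-8):
    """One AdaBound update for a single tensor (illustrative sketch)."""
    beta1, beta2 = betas
    if not state:
        state['step'] = 0
        state['exp_avg'] = torch.zeros_like(p)      # first moment
        state['exp_avg_sq'] = torch.zeros_like(p)   # second moment
    state['step'] += 1
    t = state['step']

    exp_avg, exp_avg_sq = state['exp_avg'], state['exp_avg_sq']
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

    bias_c1 = 1 - beta1 ** t
    bias_c2 = 1 - beta2 ** t
    denom = exp_avg_sq.sqrt().add_(eps)
    step_size = lr * math.sqrt(bias_c2) / bias_c1

    # Dynamic bounds from the paper: both converge to final_lr as t grows,
    # so the optimizer transitions smoothly from Adam-like to SGD-like steps.
    lower = final_lr * (1 - 1 / (gamma * t + 1))
    upper = final_lr * (1 + 1 / (gamma * t))

    eta = torch.full_like(denom, step_size).div_(denom).clamp_(lower, upper)
    p.add_(-eta * exp_avg)

# Usage on a plain tensor with stand-in gradients:
p = torch.randn(3)
state = {}
for _ in range(5):
    adabound_step(p, torch.randn(3), state)
```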
-
The metrics from testing the 59th epoch of the [iPhone](https://www.dropbox.com/sh/jalr5860ukesgvl/AAC2ozt00TgzWzSgFTRG-z8ma?dl=0) training run were: avg. LPIPS 0.2057, PSNR 21.9625, and SSIM 0.71…
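For context, these three metrics can be computed with, e.g., `torchmetrics` (a sketch assuming image tensors in [0, 1]; the exact evaluation code used above isn't shown):

```python
import torch
from torchmetrics.image import (
    LearnedPerceptualImagePatchSimilarity,
    PeakSignalNoiseRatio,
    StructuralSimilarityIndexMeasure,
)

pred = torch.rand(4, 3, 256, 256)    # rendered frames, values in [0, 1]
target = torch.rand(4, 3, 256, 256)  # ground-truth frames, values in [0, 1]

psnr = PeakSignalNoiseRatio(data_range=1.0)
ssim = StructuralSimilarityIndexMeasure(data_range=1.0)
lpips = LearnedPerceptualImagePatchSimilarity(net_type="alex", normalize=True)

print(f"PSNR:  {psnr(pred, target).item():.4f}")
print(f"SSIM:  {ssim(pred, target).item():.4f}")
print(f"LPIPS: {lpips(pred, target).item():.4f}")  # lower is better
```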
-
### Feature request
We would like to propose adding a new, widely adopted scheduler strategy for language-model pretraining to the Transformers repository. Upon reviewing the current schedulers …
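For context, the scheduler strategies Transformers currently ships can be enumerated from its `SchedulerType` enum:

```python
from transformers import SchedulerType, get_scheduler

# List the scheduler strategies currently registered in Transformers.
for s in SchedulerType:
    print(s.value)

# Any of them can be built by name through the shared factory, e.g.:
# get_scheduler("cosine", optimizer, num_warmup_steps=1_000, num_training_steps=100_000)
```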
-
Thanks for your great work!
I am currently working on a project that involves DiffuLoss, and I am curious about the convergence behavior of the training loss. Specifically, I would like to know …
-
Hi, many thanks for converting the TensorFlow implementation of DCRNN to PyTorch; it helped me a lot.
I noticed that when resuming a model, the learning-rate schedule is reset. Any idea wh…
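In case it helps: that reset typically happens when the checkpoint stores only the model and optimizer states. A minimal sketch of also persisting and restoring the scheduler state (the objects and checkpoint keys here are illustrative, not DCRNN's actual code):

```python
import torch

model = torch.nn.Linear(10, 1)  # stand-ins for the real model/optimizer/scheduler
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 30], gamma=0.1)

# Saving: persist the scheduler state together with model and optimizer.
torch.save({
    "epoch": 10,
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "scheduler": scheduler.state_dict(),  # without this, the LR schedule restarts
}, "checkpoint.pt")

# Resuming: restore all three so the schedule continues where it left off.
ckpt = torch.load("checkpoint.pt")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
scheduler.load_state_dict(ckpt["scheduler"])
start_epoch = ckpt["epoch"] + 1
```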
-
@cocktailpeanut as mentioned in another thread
--optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False"
--lr_scheduler constant_with_warmup
**THIS SETTING IS ABSOLUTE C…
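For what it's worth, those `optimizer_args` map onto keyword arguments of the `Adafactor` constructor in `transformers`; a sketch of the equivalent direct instantiation (the model and the fixed learning rate are placeholders):

```python
import torch
from transformers.optimization import Adafactor

model = torch.nn.Linear(10, 2)  # placeholder for the network being trained

# Equivalent of:
#   --optimizer_args "relative_step=False" "scale_parameter=False" "warmup_init=False"
optimizer = Adafactor(
    model.parameters(),
    lr=1e-4,                # illustrative; a fixed lr is needed when relative_step=False
    relative_step=False,    # disable Adafactor's internal time-dependent LR
    scale_parameter=False,  # disable parameter-scale-relative LR scaling
    warmup_init=False,      # no internal warmup; handled by the external scheduler
)
```

With `relative_step=False` a fixed `lr` has to be supplied, which is why this setup is paired with an external schedule such as `constant_with_warmup`.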
-
I have a series of photos:
https://drive.google.com/drive/folders/1ZZgZUrFrnP47rx8bN5K6yvYnSC50a-9G?usp=drive_link
which were taken with an iPhone 13 Pro Max.
I have used this dataset with Instan…