-
In the original Insight PyTorch code:
```
self.optimizer_fr = optim.SGD([
{'params': paras_wo_bn + [self.head.kernel], 'weight_decay': 5e-4},
…
-
# 🚀 Feature Request
Support sweeping parameters in pairs (triplets, etc.) rather than taking the full combination of them
## Motivation
**Is your feature request related to a problem? Please describe.*…
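The difference between the two sweep semantics can be sketched in plain Python; the parameter names below are illustrative, not taken from any particular sweep tool:

```python
from itertools import product

# Hypothetical sweep lists (illustrative names and values).
lrs = [1e-3, 1e-4, 1e-5]
batch_sizes = [32, 64, 128]

# Full combination: every lr is crossed with every batch size -> 9 runs.
full_grid = list(product(lrs, batch_sizes))

# Paired sweep: the i-th lr goes with the i-th batch size -> 3 runs.
paired = list(zip(lrs, batch_sizes))

print(len(full_grid), len(paired))  # 9 3
```

The request is essentially for a `zip`-style mode alongside the existing `product`-style grid.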
-
I have seen multiple complaints (both in the survey and offline) that the optimize solver "is not robust". The likely cause is that the default tolerance is quite high, and small floating …
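The effect of a loose default tolerance is easy to demonstrate with a toy solver (a sketch, not the library's actual implementation):

```python
def minimize_quadratic(x0, tol, lr=0.1, max_iter=10_000):
    """Toy gradient descent on f(x) = (x - 2)**2 that stops once the
    update step drops below `tol`; a loose tolerance halts far from
    the true minimum at x = 2."""
    x = x0
    for _ in range(max_iter):
        step = lr * 2 * (x - 2)  # gradient of (x - 2)**2
        x -= step
        if abs(step) < tol:
            break
    return x

loose = minimize_quadratic(10.0, tol=1e-1)   # stops early
tight = minimize_quadratic(10.0, tol=1e-8)   # converges much closer to 2
print(abs(loose - 2), abs(tight - 2))
```

With `tol=1e-1` the returned point is off by roughly 0.35, so two runs that differ only in floating-point noise can easily land on visibly different answers; tightening the default tolerance would make the solver appear far more robust.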
-
It might be related to the vanishing/exploding gradient.
https://github.com/zixia/concise-chit-chat/blob/f837d352fbc70dbe83f08fd7876b0cac7f4ec65e/src/train.py#L143-L154
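If exploding gradients are the cause, the standard remedy is global-norm gradient clipping (in PyTorch, `torch.nn.utils.clip_grad_norm_`). A dependency-free sketch of what that clipping does:

```python
import math

def clip_grad_norm(grads, max_norm):
    """Rescale a flat list of gradient values so their global L2 norm
    is at most `max_norm` (mirrors the behaviour of
    torch.nn.utils.clip_grad_norm_ on a flattened gradient)."""
    total_norm = math.sqrt(sum(g * g for g in grads))
    if total_norm > max_norm:
        scale = max_norm / total_norm
        grads = [g * scale for g in grads]
    return grads

exploded = [30.0, 40.0]                     # L2 norm = 50
clipped = clip_grad_norm(exploded, max_norm=5.0)
print(clipped)                              # norm is now 5
```

In the actual training loop this would be applied between the backward pass and the optimizer step.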
```
$ make train
PYTHON…
-
It seems that with vista_phase2_stage1.yaml, all the parameters are trainable.
I changed the code in Vista/vwm/models/diffusion.py as follows to test, and all the parameters are indeed trainable.
```
def …
-
### Bug description
When resuming training from an end-of-epoch checkpoint, the global_step counter is incremented an additional time before training continues, suggesting that an additional traini…
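The expected invariant can be shown with a minimal step-counting sketch (illustrative only, not the Lightning implementation): resuming from an end-of-epoch checkpoint should restore the counter as-is, so a run of N + M batches and a run of N batches, checkpoint, then M batches end at the same `global_step`.

```python
class Trainer:
    """Minimal sketch of global_step accounting across a resume."""

    def __init__(self, global_step=0):
        self.global_step = global_step

    def train_epoch(self, num_batches):
        for _ in range(num_batches):
            self.global_step += 1  # exactly one increment per optimizer step

    def state_dict(self):
        return {"global_step": self.global_step}

    @classmethod
    def from_checkpoint(cls, ckpt):
        # Restore the counter unchanged; resuming must NOT consume a step.
        return cls(global_step=ckpt["global_step"])

t = Trainer()
t.train_epoch(5)
ckpt = t.state_dict()               # end-of-epoch checkpoint at step 5
resumed = Trainer.from_checkpoint(ckpt)
resumed.train_epoch(5)
print(resumed.global_step)          # 10, not 11
```

The bug described above is the equivalent of `from_checkpoint` adding 1 before training continues.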
-
Tested on Python 3.11
For the sake of your sanity, use [Busybox for Windows](https://frippery.org/files/busybox/busybox64u.exe) so that you have a normal, native shell environment instead of `Power…
-
```
What steps will reproduce the problem?
1. Wrap some code that contains WRITE statements with f2py
2. Try to capture this output on Python stdout / stderr descriptors
3. Watch yourself fail
What i…
-
### What should we add?
I would like to discuss improving the way bounds are handled and settle on/implement such an improvement:
For variational algorithms, like `VQE`, that use an optimizer, while…
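One common way to honor bounds even with optimizers that only work in unconstrained space is a sigmoid reparameterization; this is a generic sketch of the idea, not Qiskit's current behavior:

```python
import math

def to_bounded(theta, lo, hi):
    """Map an unbounded optimizer variable into (lo, hi) via a sigmoid,
    so the circuit only ever sees in-bounds parameter values."""
    return lo + (hi - lo) / (1 + math.exp(-theta))

def to_unbounded(x, lo, hi):
    """Inverse map, e.g. for seeding the optimizer from an initial point."""
    return math.log((x - lo) / (hi - x))

# The optimizer works on theta freely; the circuit sees only values in (-pi, pi).
x = to_bounded(0.0, -math.pi, math.pi)
print(x)  # 0.0 -- the midpoint of the interval
```

Simple clipping to the bounds is the other common option; the reparameterization has the advantage of keeping the objective smooth at the boundary.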