-
Vulnerable Library - numpy-1.21.6-cp37-cp37m-manylinux_2_12_x86_64.manylinux2010_x86_64.whl
NumPy is the fundamental package for array computing with Python.
Library home page: https://files.pythonh…
-
![image](https://user-images.githubusercontent.com/51308183/178815193-7d039761-0518-481f-9290-27599d0908a5.png)
So, after yesterday I was reminded of some experiments I had done with kat a few mont…
-
After this PR https://github.com/pytorch/benchmark/pull/1261 change https://github.com/pytorch/benchmark/commit/94078d9288ff99ef50557a3bd87970badbef0458#diff-fcf1c48cd82709b9bd6dc7881e2a425d15502a7592…
-
Is there a suggested way of generating multiple image outputs given a text input, like the 9-10 outputs dalle2 produces?
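I am not certain of this repo's exact sampling API, but one generic pattern for getting N outputs from one prompt is to repeat the conditioning along the batch dimension and draw fresh noise per row, so a single batched call yields N distinct images. A minimal numpy sketch with a hypothetical `decoder_fn` standing in for the real decoder:

```python
import numpy as np

def sample_batch(decoder_fn, text_embed, num_samples=9, seed=0):
    """Repeat one conditioning vector so a single batched call
    yields `num_samples` different images (each row gets its own noise)."""
    rng = np.random.default_rng(seed)
    # (dim,) -> (num_samples, dim): identical conditioning for every row
    cond = np.repeat(text_embed[None, :], num_samples, axis=0)
    # fresh noise per row is what makes the outputs differ
    noise = rng.standard_normal((num_samples, 3, 64, 64))
    return decoder_fn(cond, noise)

# toy "decoder": just mixes the conditioning mean into the noise
toy_decoder = lambda cond, noise: noise + cond.mean(axis=1)[:, None, None, None]

images = sample_batch(toy_decoder, np.ones(512), num_samples=9)
print(images.shape)  # (9, 3, 64, 64)
```

The same idea in PyTorch would use `tensor.repeat_interleave(num_samples, dim=0)` before calling the sampler.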
-
### 🐛 Describe the bug
Hi, I am running a small sample of dot-product attention (Scale-Mask-Softmax-Dropout) with TorchDynamo + AoTAutograd enabled.
Here's a code snippet to reproduce the behavior…
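For context, the Scale-Mask-Softmax-Dropout pattern named above is plain scaled dot-product attention. The actual repro in this issue uses PyTorch with TorchDynamo + AoTAutograd; below is only an illustrative numpy sketch of the four stages, not the issue's code:

```python
import numpy as np

def attention(q, k, v, mask=None, dropout_p=0.0, rng=None):
    """Scaled dot-product attention: Scale -> Mask -> Softmax -> Dropout."""
    d_k = q.shape[-1]
    scores = q @ k.swapaxes(-2, -1) / np.sqrt(d_k)   # scale
    if mask is not None:
        scores = np.where(mask, scores, -1e9)        # mask out disallowed positions
    scores -= scores.max(axis=-1, keepdims=True)     # numerically stable softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    if dropout_p > 0.0:                              # inverted dropout on the weights
        rng = rng or np.random.default_rng(0)
        keep = rng.random(weights.shape) >= dropout_p
        weights = weights * keep / (1.0 - dropout_p)
    return weights @ v

q = k = v = np.eye(4)[None]                      # batch of 1, 4 tokens, dim 4
causal = np.tril(np.ones((4, 4), dtype=bool))    # lower-triangular causal mask
out = attention(q, k, v, mask=causal)
print(out.shape)  # (1, 4, 4)
```

With the causal mask, token 0 can only attend to itself, so its output row equals `v[0]`.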
-
When using on-the-fly image embedding generation, embedding quality suffers if preprocessing has cropped the image to smaller than the CLIP input size. As an improvement to the decoder…
-
With small models there seems to be a large chance of the decoder loss becoming `nan` early in training. Using `autograd.detect_anomaly()` torch outputs the error `Function 'ExpBackward0' returned nan…
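One frequent source of `ExpBackward` NaNs is exponentiating large logits so that `exp` overflows to `inf` and the gradient becomes NaN; the standard mitigation is the log-sum-exp shift. A minimal numpy illustration of the failure and the fix (not the repo's actual loss code):

```python
import numpy as np

def log_softmax_naive(x):
    # overflows for large x: exp(1000) -> inf, inf/inf -> nan
    return np.log(np.exp(x) / np.exp(x).sum())

def log_softmax_stable(x):
    # subtracting the max leaves the result unchanged but keeps exp bounded
    shifted = x - x.max()
    return shifted - np.log(np.exp(shifted).sum())

x = np.array([1000.0, 999.0, 0.0])
print(log_softmax_naive(x))   # contains nan/inf
print(log_softmax_stable(x))  # all finite
```

In PyTorch the equivalent is to prefer fused stable ops such as `torch.log_softmax` or `torch.logsumexp` over a hand-rolled `log(exp(...))`, and `torch.autograd.set_detect_anomaly(True)` (as used above) to locate the offending op.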
-
Why do we clamp only during sampling and not during training? Shouldn't they be matching? Please enlighten me.
https://github.com/lucidrains/DALLE2-pytorch/blob/main/dalle2_pytorch/dalle2_pytorch.p…
-
Hey! If we want to utilize pre-computed text/img embeddings and also text condition on the original caption, how do we go about that?
---
Currently in the code I see:
https://github.com/lucid…
-
Hi Phil,
This morning I tried to run the decoder training part. I decided to use `DecoderTrainer` but found one issue with the EMA update.
After doing sampling with the decoder_trainer, the next tra…