-
DiffusionPrior is configured by default to predict_x_start. As a result, x_recon is not clamped to [-1, 1], which I think is good, because we don't know the output range of CLIP image embeddings…
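The reasoning above can be sketched minimally: clamping is appropriate when the model predicts pixel values in [-1, 1], but not when it predicts CLIP image embeddings, whose range is unbounded. The helper `maybe_clamp` below is hypothetical, purely for illustration; it is not the repo's API.

```python
import torch

def maybe_clamp(x_recon: torch.Tensor, clamp: bool) -> torch.Tensor:
    # Pixel-space diffusion models usually clamp predictions to [-1, 1];
    # for CLIP image embeddings the valid range is unknown, so we skip it.
    return x_recon.clamp(-1.0, 1.0) if clamp else x_recon

emb = torch.randn(4, 512) * 3.0   # embedding-like values well outside [-1, 1]
clamped = maybe_clamp(emb, clamp=True)     # information is destroyed here
unclamped = maybe_clamp(emb, clamp=False)  # embedding magnitudes preserved
```

Skipping the clamp for embedding targets avoids silently truncating any component whose magnitude exceeds 1.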
-
Greetings.
I found this repository, and since I'm doing some AI art work, I'm looking for Colab notebooks and pretrained models for this repo. I'd be thankful if there are any and you could give a…
prp-e updated
2 years ago
-
AssertionError: Torch not compiled with CUDA enabled
87 # do above for many steps
89 dalle2 = DALLE2(
90 prior = diffusion_prior,
91 decoder = decoder
92 )
---…
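"Torch not compiled with CUDA enabled" typically means tensors or modules were moved to CUDA on a CPU-only PyTorch build. A common workaround is to select the device dynamically; the `Linear` model below is just a stand-in for the `DALLE2` objects in the snippet above.

```python
import torch

# Use CUDA only if this PyTorch build actually supports it; otherwise
# fall back to CPU, avoiding "Torch not compiled with CUDA enabled".
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(8, 8).to(device)   # stand-in for dalle2 / diffusion_prior
x = torch.randn(2, 8, device=device)
y = model(x)
```

With this pattern the same script runs unchanged on CPU-only installs and on CUDA machines.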
-
When I run the code in README.md, it eventually fails with "AttributeError: 'XClipAdapter' object has no attribute 'max_text_len'".
Any hint?
etali updated
2 years ago
-
Hi Phil,
while reading the `DiffusionPriorNetwork` forward pass, I noticed the concatenated tokens fed into the CausalTransformer are composed as below:
https://github.com/lucidrains/DALLE2-pytorch/bl…
-
So, I've noticed a potential bug related to the exponential moving averaged priors (I haven't tested the other models).
Essentially, the EMA model seemingly refuses to "learn". I have to believe th…
nousr updated
2 years ago
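An EMA model that "refuses to learn" usually means its weights are never blended with the online model's (or the decay is effectively 1.0). Below is a minimal sketch of a standard EMA update, independent of this repo's implementation, showing how the shadow weights should track the online ones.

```python
import torch

@torch.no_grad()
def ema_update(ema_model, online_model, decay: float = 0.9):
    # Standard EMA: ema <- decay * ema + (1 - decay) * online.
    # If this is never called, or decay ~ 1.0, the EMA appears frozen.
    for p_ema, p in zip(ema_model.parameters(), online_model.parameters()):
        p_ema.mul_(decay).add_(p, alpha=1.0 - decay)

online = torch.nn.Linear(4, 4)
ema = torch.nn.Linear(4, 4)
ema.load_state_dict(online.state_dict())   # start from identical weights

with torch.no_grad():
    online.weight.add_(1.0)                # simulate a training step
ema_update(ema, online, decay=0.9)         # ema moves 10% toward online
```

After one update the EMA weights have moved a fraction (1 - decay) of the way toward the online weights, so any bug that skips this call leaves them identical to their initialization.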
-
Hi, I am running the following code:
```
import torch
from dalle2_pytorch import DALLE2, DiffusionPriorNetwork, DiffusionPrior, Unet, Decoder, OpenAIClipAdapter
# openai pretrained clip - defa…
```
-
Hi, me again (lol)
Just curious why the initialized `text_encodings` length is set to 0
https://github.com/lucidrains/DALLE2-pytorch/blob/8b054686530c90ecd8e8db62eb9c648d189accf9/dalle2_pytorch/dalle2_pytor…
-
Hi all, I ran into something weird when using your code to train my model. When I use the sampler to generate samples of train and test data like those shown in https://wandb.ai/veldrovive/dalle2_train_d…
-
There seems to be a problem with the recently updated EMA code:
**RuntimeError: Subtraction, the `-` operator, with two bool tensors is not supported. Use the `^` or `logical_xor()` operato…
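This RuntimeError is raised whenever the `-` operator is applied to two bool tensors; as the message says, the supported element-wise equivalent is XOR. A minimal reproduction and fix:

```python
import torch

a = torch.tensor([True, False, True])
b = torch.tensor([True, True, False])

# `a - b` raises: "Subtraction, the `-` operator, with two bool tensors
# is not supported." The supported element-wise difference is XOR:
diff = a ^ b                      # equivalent to torch.logical_xor(a, b)
# diff is tensor([False, True, True])
```

If arithmetic subtraction is genuinely intended, the tensors should be cast to a numeric dtype first (e.g. `a.float() - b.float()`).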