-
E.g. https://huggingface.co/laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-laion5B-s13B-b90k/tree/main
This would need more config work, adapting the weights, and also changing the model at https://github.com/huggingfa…
-
## 🚀 Feature
It would be great to be able to pass a StreamingDataLoader to map. When experimenting with CLIP embeddings, I've found that I needed to use StreamingDataLoader to be able to fully utili…
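As a rough illustration of the motivation (this is not the `map` API itself, and `embed_batch` is a hypothetical stand-in for a batched CLIP forward pass), the point of a DataLoader-style wrapper is to group items into batches before inference so the GPU stays busy:

```python
from itertools import islice

def batched(iterable, batch_size):
    """Yield lists of up to batch_size items from any iterable."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, batch_size))
        if not chunk:
            return
        yield chunk

def embed_batch(items):
    # Hypothetical stand-in for a batched CLIP forward pass; a real
    # implementation would run the model once per batch rather than
    # once per item, which is what keeps the GPU fully utilized.
    return [len(x) for x in items]

embeddings = []
for batch in batched(["a", "bb", "ccc", "dddd", "eeeee"], batch_size=2):
    embeddings.extend(embed_batch(batch))
print(embeddings)  # [1, 2, 3, 4, 5]
```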
-
Hello, thank you very much for your open-source work.
I have a question regarding the visualization of the loss generated while running train_sketch.py. Below is the visualization of the validation l…
-
Thanks for the great work!
The BLIP-2 paper mentions that the model is pre-trained on a combination of datasets, including COCO, Visual Genome, CC, SBU, and LAION.
Looking at the provided [conf…
-
When I ran the example code in README.md, I ran into a strange problem.
```python
import scripts.control_utils as cu
import torch
from PIL import Image
path_to_config = 'configs/inference/sdxl/sdx…
-
Hello,
I noticed in your code that n_candidate_per_text is set to 3 by default. I am wondering whether that setting was used during the evaluation, as it is not mentioned in the paper?
Additionally…
-
### Describe the bug
When running retrieve.py to produce class-prior photos using the LAION dataset, I ran into the errors below.
The URL in the code below looks bad... could that be the root cause?
```
client =…
-
Are there smaller image encoder for IP Adapter?
The official IP Adapter repository uses laion/CLIP-ViT-H-14-laion2B-s32B-b79K, which is very large. Are there smaller and faster image encoder th…
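As a rough back-of-the-envelope comparison (an estimate from the architecture, not exact counts from the checkpoints), transformer parameter counts scale roughly with width² × depth, so a ViT-B/32 vision tower (e.g. openai/clip-vit-base-patch32) is several times smaller than the ViT-H/14 tower. Note that simply swapping encoders would also require a matching projection or retraining the adapter:

```python
def vit_params_estimate(d, layers, patch):
    """Rough parameter count for a ViT image encoder (ignores
    embeddings, layer norms, and biases, which are comparatively small)."""
    # Per transformer block: QKV + output projections (4*d^2)
    # plus a 4x-expansion MLP (8*d^2), i.e. ~12*d^2 weights.
    per_block = 12 * d * d
    patch_embed = 3 * patch * patch * d  # conv patch projection (RGB input)
    return layers * per_block + patch_embed

vit_h14 = vit_params_estimate(d=1280, layers=32, patch=14)  # ~ CLIP-ViT-H/14 tower
vit_b32 = vit_params_estimate(d=768, layers=12, patch=32)   # ~ CLIP-ViT-B/32 tower
print(f"ViT-H/14 ~ {vit_h14 / 1e6:.0f}M params, ViT-B/32 ~ {vit_b32 / 1e6:.0f}M params")
# ViT-H/14 ~ 630M params, ViT-B/32 ~ 87M params
```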
-
Interesting work! When will you release the DALL-E 2 code? Or could you give some implementation hints to help readers apply RIATIG to DALL-E 2?