-
## ❓ Questions and Help
I am training on a TPU VM (v2-8), launched using `xmp.spawn(main, args=(args,), start_method="fork")`:
```
>>> import torch
>>> import torch_xla
>>> torch.__version__
'…
```
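For context, a minimal sketch of that launch pattern on a TPU VM; only `main`, `args`, and the `xmp.spawn` call come from the snippet above, while the training body and the `--lr` flag are hypothetical:

```python
import argparse

import torch_xla.core.xla_model as xm
import torch_xla.distributed.xla_multiprocessing as xmp

def main(index, args):
    # Each spawned process owns one XLA device (one per TPU core on a v2-8).
    device = xm.xla_device()
    print(f"process {index} on {device}, lr={args.lr}")
    # ... build model/dataloader and train on `device` ...

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--lr", type=float, default=1e-3)  # hypothetical flag
    args = parser.parse_args()
    # "fork" matches the launch line above; torch_xla also supports "spawn".
    xmp.spawn(main, args=(args,), start_method="fork")
```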
-
Which version was the ViT-H-16 used in Owl2 initialized from?
Could you share the initial weights for this ViT?
-
Very nice job! If I want to use it with a custom SD model, do I need to train it on the LAION-2B dataset mentioned in your paper?
-
I'm going to go ahead and open an issue for SDXL; it will be an obvious request here in the next couple of weeks.
https://github.com/pharmapsychotic/clip-interrogator/blob/2cf03aaf6e704197fd0dae7c7f96aa59cf1…
-
Discussed in #1015.
- [x] https://github.com/MaartenGr/BERTopic
- [ ] https://github.com/pytorch/fairseq
- [ ] https://github.com/NVIDIA/NeMo
- [ ] https://github.com/pyannote/pyannote-audio
- …
-
E.g. https://huggingface.co/laion/CLIP-ViT-H-14-frozen-xlm-roberta-large-laion5B-s13B-b90k/tree/main
This needs more config, adapting the weights, and also changing the model at https://github.com/huggingfa…
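For reference, a minimal sketch of loading that checkpoint through open_clip; the model name and pretrained tag are my best guess at the matching open_clip identifiers, so double-check them against `open_clip.list_pretrained()`:

```python
import open_clip

# Assumed open_clip tags for the HF checkpoint linked above.
model, _, preprocess = open_clip.create_model_and_transforms(
    "xlm-roberta-large-ViT-H-14",
    pretrained="frozen_laion5b_s13b_b90k",
)
tokenizer = open_clip.get_tokenizer("xlm-roberta-large-ViT-H-14")
```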
-
I encountered the following error when finetuning OpenCLIP on my own data:
```
File "./src/training/data.py", line 281, in group_by_keys_nothrow
fname, value = filesample["fname"], filesample["…
```
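For context, a simplified sketch of what a `group_by_keys_nothrow`-style helper does in a webdataset pipeline (the real one in `src/training/data.py` adds suffix-collision and validity handling; the field names here follow the traceback): consecutive tar members that share a basename are folded into one sample dict, and a `KeyError` on `"fname"`/`"data"` usually means a shard member arrived malformed.

```python
import os

def group_by_keys_nothrow(data, keys=os.path.splitext):
    """Group tar members {fname, data} into one sample per basename (sketch)."""
    current_sample = None
    for filesample in data:
        # A KeyError here (as in the traceback above) typically means a
        # member dict is missing "fname"/"data", e.g. a corrupted shard entry.
        fname, value = filesample["fname"], filesample["data"]
        prefix, suffix = keys(fname)
        if current_sample is None or prefix != current_sample["__key__"]:
            if current_sample is not None:
                yield current_sample
            current_sample = {"__key__": prefix}
        current_sample[suffix.lstrip(".")] = value
    if current_sample is not None:
        yield current_sample
```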
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Simple as that, 1.5 embeddings have zero effect on XL mod…
-
Hi! First of all, thanks for releasing such a great model and accompanying paper. Could you clarify a few design choices in SDXL?
1. Why do you use both the previous CLIP-L and the new OpenCLIP ViT-bigG?… (see the toy sketch below)
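For what it's worth, my understanding of that first design choice, as a toy sketch: the hidden dimensions come from the SDXL paper, while the tensor names are illustrative. The penultimate hidden states of both text encoders are concatenated along the channel dimension, so the UNet cross-attends to a 2048-dim context.

```python
import torch

# Illustrative shapes: batch 1, 77 tokens.
clip_l_hidden = torch.randn(1, 77, 768)      # CLIP ViT-L/14 penultimate states
clip_bigg_hidden = torch.randn(1, 77, 1280)  # OpenCLIP ViT-bigG/14 penultimate states
context = torch.cat([clip_l_hidden, clip_bigg_hidden], dim=-1)
print(context.shape)  # torch.Size([1, 77, 2048]) -> UNet cross-attention context
```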
-
Stable Diffusion 2.0 uses a new text encoder, so the PyTorch mapping for that model and any future models won't work any more. It's beyond my expertise, but can we write into clip_encoder.py the abili…
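Not the author, but a hedged sketch of what such a check could look like: SD 1.x checkpoints typically store the OpenAI CLIP text model under `cond_stage_model.transformer.`, while SD 2.x stores an open_clip model under `cond_stage_model.model.`. These key prefixes are my assumption based on common `.ckpt` layouts, not anything taken from this repo's `clip_encoder.py`.

```python
import torch

def detect_text_encoder(ckpt_path: str) -> str:
    """Guess which text encoder a Stable Diffusion .ckpt expects (heuristic)."""
    sd = torch.load(ckpt_path, map_location="cpu")
    sd = sd.get("state_dict", sd)  # some checkpoints nest weights here
    if any(k.startswith("cond_stage_model.model.") for k in sd):
        return "open_clip text tower (SD 2.x)"
    if any(k.startswith("cond_stage_model.transformer.") for k in sd):
        return "OpenAI CLIP ViT-L text encoder (SD 1.x)"
    return "unknown"
```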