-
If you could help solve my problem, thanks a lot!
-
Can I use my own model in .bin format? Thanks.
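A `.bin` checkpoint is usually just a state dict written with `torch.save`, so it can be loaded back with `torch.load` plus `load_state_dict`. A minimal sketch (the tiny `Linear` model and file name are placeholders, not open_clip's actual loading API):

```python
import os
import tempfile

import torch

# Save a state dict as a .bin file, then load it back into a model of the
# same architecture. The model and path here are illustrative placeholders.
model = torch.nn.Linear(4, 2)
path = os.path.join(tempfile.mkdtemp(), "my_model.bin")
torch.save(model.state_dict(), path)

state_dict = torch.load(path, map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict)
print(sorted(state_dict.keys()))  # ['bias', 'weight']
```

If the checkpoint's architecture matches the model, `missing` and `unexpected` come back empty; otherwise they list the mismatched parameter names.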
-
Hey, I was trying the most recent Stable Diffusion v2, and found that only the changes below make it run well.
**Describe alternatives you've considered**
In `sd.py`, from:
# 1. Load the auto…
-
Small discrepancy noticed between the 2 tokenizers:
- https://github.com/openai/CLIP uses special tokens in the form ``
- this repository uses special tokens of the form ``
Not a huge difference,…
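The exact token strings are truncated above; as a hypothetical illustration, a small alias map can bridge two spellings of the same special token when comparing vocabularies (the token forms below are assumptions, not necessarily the actual ones used by either repository):

```python
# Hypothetical example: normalize one repo's special-token spelling to the
# other's before comparing tokenizer vocabularies. Token strings are placeholders.
ALIASES = {
    "<start_of_text>": "<|startoftext|>",
    "<end_of_text>": "<|endoftext|>",
}

def normalize_token(tok: str) -> str:
    """Return the canonical spelling for a special token; pass others through."""
    return ALIASES.get(tok, tok)

print(normalize_token("<end_of_text>"))  # <|endoftext|>
print(normalize_token("hello"))         # hello
```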
-
Can we adapt OpenCLIP to be able to train text/text contrastive models?
And beyond that, maybe text/text/image models?
Use case:
* train pure contrastive text models either for multilingual pa…
-
I don't know what is in models/image_encoder. Can you clarify?
-
Hi, thank you for releasing the code of CLIPSelf, a very nice work!
Here is an error I ran into:
main.py: error: argument --dataset-type: invalid choice: 'grid_distill' (choose from 'webdataset', 'csv', …
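This error comes from argparse rejecting a value that is not in the argument's `choices` list; the fix is presumably to add `'grid_distill'` to that list in main.py. A hedged illustration (the choices below mirror the error message, but this is not CLIPSelf's actual code):

```python
import argparse

# Sketch: argparse raises "invalid choice" for values missing from `choices`,
# so extending the list is what makes the flag accept the new dataset type.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--dataset-type",
    choices=["webdataset", "csv", "grid_distill"],
    default="webdataset",
)
args = parser.parse_args(["--dataset-type", "grid_distill"])
print(args.dataset_type)  # grid_distill
```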
-
Hi,
In Table 9 (Evaluation of frozen features on instance-level recognition), the reported performance for OpenCLIP-G/14 is 50.7 on Oxford-M and 19.7 on Oxford-H. However, we only ge…
-
### What happened?
Hugging Face is blocked. The only way I can get an OpenCLIP model is from [https://www.modelscope.cn/models/AI-ModelScope/CLIP-ViT-H-14-laion2B-s32B-b79K/files]. Its embedding dimension is 1024.
…
-
I'm preparing to reproduce the ChineseCLIP paper, initializing the image encoder from CLIP-ViT-B/16, downloaded from https://huggingface.co/openai/clip-vit-base-patch16/tree/main. However, when loading the model parameters, the image encoder weights fail to load. Printing the checkpoint, I found the corresponding parameter names start with vision_model.encoder.layers.…
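A common workaround for this kind of naming mismatch is to strip the extra prefix from the checkpoint keys before calling `load_state_dict`. A sketch, assuming the prefix reported in the issue (your model's expected parameter names may differ):

```python
def strip_prefix(state_dict, prefix="vision_model."):
    """Drop a leading prefix from checkpoint keys so they match the target
    model's parameter names. Keys without the prefix are kept unchanged."""
    return {
        (k[len(prefix):] if k.startswith(prefix) else k): v
        for k, v in state_dict.items()
    }

# Toy checkpoint illustrating the renaming; values stand in for tensors.
ckpt = {"vision_model.encoder.layers.0.weight": 1, "logit_scale": 2}
print(strip_prefix(ckpt))
# {'encoder.layers.0.weight': 1, 'logit_scale': 2}
```

After renaming, `model.load_state_dict(strip_prefix(ckpt), strict=False)` reports any remaining mismatches instead of silently skipping them.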