-
When I run `python gradio_canny2image.py`, a `KeyError: no clip_txt_model` appears. How should I download this?
-
Thanks for the amazing work. I am now trying to run the _train_adv_img_trans.py code and hit an unexpected segmentation fault. After locating the error, I believe it comes from the `clip_model.e…
-
Dear author,
Thanks so much for the great contribution to the community. Recent SD benchmark models often measure subject fidelity using CLIP-I and DINO; for prompt fidelity they used …
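For context, CLIP-I is conventionally the cosine similarity between CLIP image embeddings of a generated image and a reference image (DINO scores are computed the same way over DINO features). A minimal sketch of just the scoring step, assuming the embeddings have already been extracted, e.g. with `CLIPModel.get_image_features`:

```python
import math

def clip_i_score(gen_emb, ref_emb):
    """CLIP-I: cosine similarity between two image embeddings.

    gen_emb / ref_emb are flat lists of floats. Assumption: benchmarks
    typically average this score over many (generated, reference) pairs;
    extracting the embeddings themselves requires a CLIP (or DINO) model.
    """
    dot = sum(g * r for g, r in zip(gen_emb, ref_emb))
    norm_gen = math.sqrt(sum(g * g for g in gen_emb))
    norm_ref = math.sqrt(sum(r * r for r in ref_emb))
    return dot / (norm_gen * norm_ref)
```

CLIP-T (prompt fidelity) is analogous, but compares an image embedding against the text embedding of the prompt.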
-
I tried two GGUF conversions on an M2 Ultra (Metal) but had no luck. I converted them myself and still get the same error.
Here is the first model I tried:
https://huggingface.co/guinmoon/MobileVLM-1.7B-GGUF…
-
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do?
In both the A1111 and Forge UIs, in **Opti…
-
Hello again!
Would it be possible to modify the GMP fine-tune script to train a LoRA with PEFT for the CLIP ViT-G model, and then merge the LoRA into the model to get a new CLIP-G model?
Chat-GPT se…
-
### Model description
[jinaai/jina-clip-v1](https://huggingface.co/jinaai/jina-clip-v1/tree/main/onnx)
### Prerequisites
- [X] The model is supported in Transformers (i.e., listed [here](https://hu…
-
I currently have this issue and don't know what is causing it.
ComfyUI: 2324[537f35](2024-07-02)
Manager: V2.44.1
```
Traceback (most recent call last):
  File "H:\ComfyUI\nodes.py", line 1906, in load…
```
-
Thank you for your work. I have some questions and hope you can answer them despite your busy schedule.
What is the CLIP upper bound? How did you get that model?
We consider three main groups of baseline…
-
```
import torch
import copy
import numpy as np
from transformers import CLIPProcessor, CLIPModel
from diffusers import StableDiffusionPipeline
from scipy.cluster.vq import vq, kmeans2
# Konf…