-
AWQ config
```yaml
base:
    seed: &seed 42
model:
    type: Qwen2
    path: /models/Qwen2-7B-Instruct
    tokenizer_mode: slow
    torch_dtype: auto
calib:
    name: pileval
    download: Fals…
```
-
I have been doing some tests with the Flux model. I noticed there is an issue when the CLIP and VAE are assigned to the CPU. Is there a chance we can get the tiled decode and encode nodes to suppor…
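For context, the idea behind tiled decode/encode can be illustrated independently of the actual nodes: split the latent into overlapping tiles, decode each tile separately, and average the overlapping regions to hide seams. A toy sketch of that blending scheme, where `decode` is a hypothetical stand-in for the VAE (a real decoder is a neural network), not the node implementation:

```python
import numpy as np

def decode(tile):
    # Hypothetical stand-in for a VAE decoder: plain 2x nearest-neighbour
    # upsampling, so the sketch stays small and runnable.
    return tile.repeat(2, axis=0).repeat(2, axis=1)

def tiled_decode(latent, tile=8, overlap=2):
    """Decode `latent` in overlapping tiles, averaging the overlaps."""
    scale = 2  # upscale factor of the stand-in decoder
    H, W = latent.shape
    out = np.zeros((H * scale, W * scale))
    weight = np.zeros_like(out)
    step = tile - overlap
    for y in range(0, H, step):
        for x in range(0, W, step):
            dec = decode(latent[y:y + tile, x:x + tile])
            h, w = dec.shape
            out[y * scale:y * scale + h, x * scale:x * scale + w] += dec
            weight[y * scale:y * scale + h, x * scale:x * scale + w] += 1
    return out / weight  # weighted average blends the tile seams

latent = np.arange(16.0).reshape(4, 4)
full = decode(latent)
tiled = tiled_decode(latent, tile=2, overlap=1)
```

Because the stand-in decoder is purely per-pixel, the tiled result matches the full decode exactly; with a real VAE the overlap-averaging only approximates it, which is why the nodes expose tile size and overlap as parameters.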
-
Hello author, when I tried to use your weight file directly for inference verification, I found that the weight file you provided is inconsistent with the model. Why is this? The error is as follows: …
-
Hi,
I am trying to load the pre-trained models for inference:
CT_CLIP_zeroshot.pt,
CT_LiPro.pt
I run:
run_zero_shot.py with these parameters:
clip.load(r"C:\Users\pretrained_models\CT_CLIP_zeroshot…
-
I am trying to fine-tune the CLIP model (clip-ViT-B-32-multilingual-v1). Is there an example of training it with layers frozen? Also, can I train only the text encoder without modifying the image enco…
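The usual pattern for this is to set `requires_grad = False` on the image tower before training. A minimal sketch with Hugging Face `transformers`, using a randomly initialized `CLIPModel` so it runs without a download (note this is an assumption: clip-ViT-B-32-multilingual-v1 is a sentence-transformers checkpoint whose module names differ, so this only shows the pattern, not that exact model):

```python
from transformers import CLIPConfig, CLIPModel

# Randomly initialized CLIP for illustration; in practice you would use
# CLIPModel.from_pretrained(...) on your checkpoint instead.
model = CLIPModel(CLIPConfig())

# Freeze the image side: the vision transformer plus its projection head.
for p in model.vision_model.parameters():
    p.requires_grad = False
model.visual_projection.weight.requires_grad = False

# Everything still trainable is on the text side (plus logit_scale),
# so an optimizer built from these parameters only updates the text encoder.
trainable = sorted(n for n, p in model.named_parameters() if p.requires_grad)
```

Passing only the trainable parameters to the optimizer (e.g. `torch.optim.AdamW(p for p in model.parameters() if p.requires_grad)`) then trains the text encoder while leaving the image encoder untouched.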
-
I tried to run the example notebook but got this error for the CLIP model.
-
### Guidelines
- [X] I checked for duplicate bug reports
- [X] I tried to find a way to reproduce the bug
### Version
Main (Production)
### What happened? What did you expect to happen?
…
-
### Expected Behavior
Using the Long CLIP text encoder does not function as intended, since ClipTextEncoderFlux does not assign a key to clip_l, so it cannot be extracted, unlike the SD3 Clip T…
-
Related to #68. Decided to bake the model weights into the image by downloading the weights at build time:
```py
import modal
import os
import requests
import io
import subprocess
app = m…
```
-
How can OpenAI's CLIP model, which was trained on English, also support Chinese at the same time?