zer0int / CLIP-fine-tune
Fine-tuning code for CLIP models
MIT License · 166 stars · 8 forks
Issues
#17 · Can't convert to HF with convert_clip_original_pytorch_to_hf.py (by betterftr, opened 2 weeks ago, 0 comments)
#16 · I want to fine-tune a complete text encoder model, but it seems that the model trained by ft-B-train-OpenAI-CLIP-ViT-L-14.py is a visual encoder model. (by vxiaobai, opened 1 month ago, 8 comments)
#15 · After fine-tuning, how to correctly save the text encoder for use with StableDiffusionXLPipeline.from_pretrained? (by minienglish1, opened 1 month ago, 11 comments)
#14 · Maybe it could have a license (by LOLSALT, closed 1 month ago, 1 comment)
#13 · Is param_group set wrong? (by Johnson-yue, closed 1 month ago, 1 comment)
#12 · Prompt-Tuning for text-to-image diffusion models (especially the CLIP text encoder) (by AHHHZ975, closed 2 months ago, 2 comments)
#11 · Loading weights with `clip.load()` throws `EOFError: Ran out of input` (by SkyLull, closed 2 months ago, 2 comments)
#10 · Computed cosine similarity is different from openai_clip, why? (by Johnson-yue, closed 2 months ago, 0 comments)
#9 · Matrix multiplication is computed twice! (by Johnson-yue, closed 2 months ago, 0 comments)
#8 · Why do you not use norm when evaluating zero-shot? (by Johnson-yue, closed 2 months ago, 2 comments)
#6 · Question about Geometric Parametrization (by mat10599, closed 2 months ago, 4 comments)
#5 · Instructions on how to use with huggingface/diffusers? (by voodoohop, closed 5 months ago, 2 comments)
#4 · PEFT fine-tune CLIP ViT-G? (by bash-j, opened 5 months ago, 3 comments)
#3 · Train a CLIP that was saved into an SDXL model? (by bash-j, closed 5 months ago, 4 comments)
#2 · Error report (by T8mars, closed 5 months ago, 1 comment)
#1 · CLIP-G Training? (by bash-j, closed 6 months ago, 8 comments)