-
I want to initialize with TinyCLIP model weights instead of the CLIP model weights. How should I change the network architecture?
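For context, something along these lines is what I have in mind: load the TinyCLIP checkpoint and keep only the tensors whose names and shapes match the target model (a minimal sketch; `build_clip_model` and the checkpoint filename are placeholders, and it assumes the checkpoint stores a plain `state_dict`):
```
import torch

def load_matching_weights(model: torch.nn.Module, ckpt_path: str) -> None:
    """Initialize `model` from a TinyCLIP checkpoint, keeping only tensors
    whose names and shapes match the target architecture."""
    ckpt = torch.load(ckpt_path, map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)  # some checkpoints wrap the weights
    target = model.state_dict()
    kept = {k: v for k, v in state_dict.items()
            if k in target and v.shape == target[k].shape}
    skipped = len(target) - len(kept)
    model.load_state_dict(kept, strict=False)
    print(f"loaded {len(kept)} tensors, {skipped} left at their default init")

# Hypothetical usage: the builder and checkpoint name are placeholders.
# model = build_clip_model(embed_dim=..., vision_width=..., text_width=...)
# load_matching_weights(model, "TinyCLIP-ViT-39M-16-Text-19M.pt")
```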
-
Hello! I'm very interested in your great work! I have two questions about pretraining.
Does the generalization ability of UMT come from CLIP? With this in mind, regardless of what kind of pre-traini…
-
I set the model path to a custom path by changing the extra_model_paths.yaml
```
comfyui:
    base_path: /custom/path/
    checkpoints: models/checkpoints/
    clip: models/clip/
    clip_vi…
-
```
import torch
import sys
sys.path.append('./')
from videollama2.conversation import conv_templates, SeparatorStyle
from videollama2.constants import DEFAULT_MMODAL_TOKEN, MMODAL_TOKEN_INDEX
from…
```
-
Hi Zheng,
```
python prepare_sketch.py
UserWarning: Mapping deprecated model name vit_huge_patch14_224_clip_laion2b to current vit_huge_patch14_clip_224.laion2b.
RuntimeError: Hugging Face hub mo…
```
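For reference, this is the isolated timm call I expected to work (a minimal sketch, assuming the script only needs the timm backbone and that `huggingface_hub` is installed with network access; using the current model name avoids the deprecation warning):
```
import timm
import torch

# Current (non-deprecated) timm name; weights are fetched from the HF hub.
model = timm.create_model("vit_huge_patch14_clip_224.laion2b", pretrained=True)
model.eval()

# Quick smoke test with a dummy batch.
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)
```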
-
Hello. Thank you for making this project, it is very interesting.
However, I'm having a bit of trouble getting started with it.
I cloned the repo into custom_nodes and I was able to run pa…
-
I've been experiencing a consistent issue with super-resolution results where the boundaries between separate tiles are clearly visible. Despite following various preprocessing steps and using differe…
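To make the question concrete, the seam handling I expected is roughly standard overlap-and-feather blending, sketched below (not this project's actual code; the tile layout, overlap size, and array shapes are assumptions):
```
import numpy as np

def blend_tiles(tiles, positions, out_shape, overlap=32):
    """Accumulate HxWx3 tiles with a linear ramp in the overlap region so
    neighbouring tiles cross-fade instead of leaving a hard seam."""
    out = np.zeros(out_shape, dtype=np.float32)
    weight = np.zeros(out_shape[:2], dtype=np.float32)
    ramp = np.arange(1, overlap + 1, dtype=np.float32) / overlap
    for tile, (y, x) in zip(tiles, positions):
        h, w = tile.shape[:2]
        mask = np.ones((h, w), dtype=np.float32)
        mask[:overlap, :] *= ramp[:, None]          # fade in at the top edge
        mask[-overlap:, :] *= ramp[::-1, None]      # fade out at the bottom edge
        mask[:, :overlap] *= ramp[None, :]          # left edge
        mask[:, -overlap:] *= ramp[::-1][None, :]   # right edge
        out[y:y + h, x:x + w] += tile * mask[..., None]
        weight[y:y + h, x:x + w] += mask
    return out / np.maximum(weight[..., None], 1e-8)
```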
-
I ran into a problem at step 6 (inference):
```
(clip)XXX@junqian-Tower-X:~/projects/python/CLIP-TPU$ python3 embeddings_bmcv.py --img_dir ./datasets/imagenet_val_1k --image_model ./models/BM1684X/clip_image_vitb32_bm1684x_f1…
-
Hi, I would like to continue your pre-training.
Is there a readme that describes how to load your pre-trained Long-CLIP model checkpoint? I'm looking for a checkpoint based on "ViT-L/14@336px".
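In case it helps clarify what I'm after, this is roughly how I assumed the checkpoint would be loaded (hypothetical: I'm guessing the API mirrors OpenAI CLIP's `clip.load`, and the package layout and checkpoint filename are placeholders):
```
import torch
from model import longclip  # assuming this is the repo's package layout

device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical: load a Long-CLIP checkpoint the same way OpenAI CLIP is loaded.
model, preprocess = longclip.load("./checkpoints/longclip-L.pt", device=device)
model.eval()

text = longclip.tokenize(["a photo of a cat sleeping on a windowsill"]).to(device)
with torch.no_grad():
    text_features = model.encode_text(text)
print(text_features.shape)
```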
Than…
-
Nice work! I have a question: why should we concatenate these two features (LLM and CLIP) instead of just using the LLM's features, as some other works have done (e.g. https://github.com/Kwai-Kolors/Kolors)?
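Just to make sure I understand the design being discussed, here is a toy sketch of the concatenation I mean: project both text encoders to a shared width and concatenate along the sequence axis (made-up dimensions and layers, not the repo's actual code):
```
import torch
import torch.nn as nn

class TextConditioner(nn.Module):
    """Fuse LLM and CLIP text features by projecting both to a shared width
    and concatenating along the sequence axis (one common fusion choice)."""
    def __init__(self, llm_dim=4096, clip_dim=1024, cond_dim=2048):
        super().__init__()
        self.llm_proj = nn.Linear(llm_dim, cond_dim)
        self.clip_proj = nn.Linear(clip_dim, cond_dim)

    def forward(self, llm_feats, clip_feats):
        # llm_feats: (B, L1, llm_dim), clip_feats: (B, L2, clip_dim)
        fused = torch.cat([self.llm_proj(llm_feats),
                           self.clip_proj(clip_feats)], dim=1)
        return fused  # (B, L1 + L2, cond_dim)

cond = TextConditioner()
out = cond(torch.randn(2, 77, 4096), torch.randn(2, 77, 1024))
print(out.shape)  # torch.Size([2, 154, 2048])
```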