-
I was trying to train IDM-VTON on the VITON-HD dataset and ran into this huge error (I followed the instructions in README.md to set up the IP-Adapter):
```
➜ sh ./train_xl.sh
The following values were not p…
-
Hi Zheng,
python prepare_sketch.py
UserWarning: Mapping deprecated model name vit_huge_patch14_224_clip_laion2b to current vit_huge_patch14_clip_224.laion2b.
RuntimeError: Hugging Face hub mo…
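For context, the deprecation warning only maps the legacy timm name to the current one, so the current identifier can be requested directly; a minimal sketch, assuming the machine can reach the Hugging Face Hub (the truncated RuntimeError above may be a separate connectivity or auth issue):
```python
import timm

# current name for the model the script requests under its legacy alias
model = timm.create_model("vit_huge_patch14_clip_224.laion2b", pretrained=True)
```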
-
Hello, I previously wrote some code that trains CLIP for prediction on a multimodal binary-classification dataset. I load the model with these lines:
clip_model = CLIPModel.from_pretrained("/hy-tmp/clip-vit-base-patch32/")
processor = CLIPProcessor.from_pretrained("/hy-tmp/clip-vit-base-patch32…
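For reference, a minimal sketch of how this local loading is commonly extended with a binary classifier on top of CLIP; the classification head and the forward pass below are assumptions for illustration, not the poster's actual training code:
```python
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

local_dir = "/hy-tmp/clip-vit-base-patch32/"  # local snapshot of openai/clip-vit-base-patch32
clip_model = CLIPModel.from_pretrained(local_dir)
processor = CLIPProcessor.from_pretrained(local_dir)

class CLIPBinaryClassifier(nn.Module):
    """Hypothetical head: concatenate CLIP image/text embeddings, predict 2 classes."""
    def __init__(self, clip):
        super().__init__()
        self.clip = clip
        self.head = nn.Linear(2 * clip.config.projection_dim, 2)

    def forward(self, pixel_values, input_ids, attention_mask):
        img = self.clip.get_image_features(pixel_values=pixel_values)
        txt = self.clip.get_text_features(input_ids=input_ids, attention_mask=attention_mask)
        return self.head(torch.cat([img, txt], dim=-1))
```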
-
Hello, this paper's work is excellent and I'm interested in the experimental part, so I would like to know what environment configuration you used to run the experiments. If you can tell me, I will th…
-
Hi, I encountered the following error message when running SAM. I'm not quite sure if it is because I put the model checkpoint sam_vit_h_4b8939.pth into the segment-anything folder. Hope to hear some …
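For what it's worth, the checkpoint's folder usually only matters through the path passed to the model registry; a minimal sketch, assuming the standard segment-anything API and that the file sits in the segment-anything folder:
```python
from segment_anything import sam_model_registry, SamPredictor

# adjust this path to wherever sam_vit_h_4b8939.pth actually lives
checkpoint_path = "segment-anything/sam_vit_h_4b8939.pth"
sam = sam_model_registry["vit_h"](checkpoint=checkpoint_path)
predictor = SamPredictor(sam)
```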
-
PyTorch 2.0 introduced `torch.compile` for accelerating training and inference. I have tried it on top of flash attention, but unfortunately `torch` seems unable to compile flash attention:
`…
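As a point of reference, here is a minimal sketch that uses PyTorch's built-in `scaled_dot_product_attention` (which dispatches to a fused flash kernel) instead of the external flash-attn package; whether this matches the setup above is an assumption, and the shapes, dtype, and device are illustrative:
```python
import torch
import torch.nn.functional as F

class SDPABlock(torch.nn.Module):
    def forward(self, q, k, v):
        # uses the fused flash-attention kernel when shapes/dtypes allow
        return F.scaled_dot_product_attention(q, k, v)

block = torch.compile(SDPABlock())
q = k = v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
out = block(q, k, v)
```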
-
I found the statement **3. Better support for vision transformers.** at https://nvidia.github.io/TensorRT-Model-Optimizer/guides/_onnx_quantization.html.
I'm working on quantizing ViT n…
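In case it helps frame the question, a hedged sketch of the usual step before that guide's ONNX quantization flow, namely exporting a ViT to ONNX; the timm model name and opset are assumptions, and this does not use the Model-Optimizer API itself:
```python
import timm
import torch

# export a ViT to ONNX so it can be fed to the ONNX PTQ flow described in the guide
model = timm.create_model("vit_base_patch16_224", pretrained=True).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "vit_base_patch16_224.onnx", opset_version=17)
```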
-
> C:\Users\Qatrol\Downloads\game\open-oasis\generate.py:17: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implic…
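As a hedged sketch (not the repo's actual code), the warning goes away when the load opts into the future default; the file and variable names below are illustrative:
```python
import torch

# weights_only=True restricts unpickling to plain tensors/containers,
# which is what the FutureWarning is recommending
state_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)
```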
-
### Branch
main branch (mmpretrain version)
### Describe the bug
I am following https://mmpretrain.readthedocs.io/en/dev/papers/replknet.html
```
import torch
from mmpretrain import get_model
…
-
(clip4str) root@Lab-PC:/workspace/Project/OCR/CLIP4STR# bash scripts/vl4str_base.sh
abs_root: /home/shuai
model:
_convert_: all
img_size:
- 224
- 224
max_label_length: 25
charset_t…