-
```
(clip4str) root@Lab-PC:/workspace/Project/OCR/CLIP4STR# bash scripts/vl4str_base.sh
abs_root: /home/shuai
model:
  _convert_: all
  img_size:
  - 224
  - 224
  max_label_length: 25
  charset_t…
```
-
First, modify requirements.txt as described by [MatthewK78](https://github.com/MatthewK78) in [#11](https://github.com/Fanghua-Yu/SUPIR/issues/11), then update CKPT_PTH.py and SUPIR_v0.yaml to match the path where the models are stored. For example, I want to keep the models in a newly created models folder under the project root:
Modify CKPT…
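A hedged sketch of what that CKPT_PTH.py edit might look like under those assumptions; the variable names below follow the repo's CKPT_PTH.py, but verify them against your checkout:

```python
# CKPT_PTH.py: point every checkpoint path at ./models under the project root.
# (Variable names are taken from the repo's CKPT_PTH.py; confirm locally.)
LLAVA_CLIP_PATH = './models/clip-vit-large-patch14-336'
LLAVA_MODEL_PATH = './models/llava-v1.5-13b'
SDXL_CLIP1_PATH = './models/clip-vit-large-patch14'
SDXL_CLIP2_CKPT_PTH = './models/clip-vit-bigG-14/open_clip_pytorch_model.bin'
```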
-
When I follow the instructions in readme.md and run the command python demo/image_demo.py .//images//xixi_33.tif configs/rsprompter/samseg-maskrcnn-whu.py --weights ./checkpoint\sam-vit-base/pytorch_model.bin --out-dir ./output
the console reports an error…
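A hedged first thing to check, assuming a Linux host: the --weights path mixes / and \, and on Linux the backslash is a literal character in the name rather than a separator, so the checkpoint may simply not be found:

```python
from pathlib import Path

# Hypothetical check: on Linux, "./checkpoint\sam-vit-base/pytorch_model.bin"
# refers to a directory literally named "checkpoint\sam-vit-base". Build the
# intended path with pathlib instead and confirm the file exists.
weights = Path("checkpoint") / "sam-vit-base" / "pytorch_model.bin"
print(weights, weights.exists())
```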
-
## 🐛 Bug
We're trying to privately fine-tune a ViT B/16 model ([link](https://github.com/mlfoundations/open_clip/tree/main)) with CIFAR-10 data. The non-private version uses `MultiHeadAttention` wh…
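For context, a hedged sketch of the usual Opacus-side workaround, assuming the open_clip ViT uses torch.nn.MultiheadAttention internally (the model tag and pretrained weights below are illustrative, not the exact setup from this report):

```python
import open_clip
from opacus.validators import ModuleValidator

# Opacus cannot compute per-sample gradients through nn.MultiheadAttention,
# so the validator swaps unsupported modules for DP-compatible equivalents
# (e.g. DPMultiheadAttention) before the PrivacyEngine is attached.
model, _, _ = open_clip.create_model_and_transforms("ViT-B-16", pretrained="openai")
print(ModuleValidator.validate(model, strict=False))  # list offending modules
model = ModuleValidator.fix(model)                    # replace them
print(ModuleValidator.validate(model, strict=False))  # should now be empty
```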
-
Thank you for your efforts, but I have a question about the MAE code.
https://github.com/lucidrains/vit-pytorch/blob/dc57c75478c98241fd232a64a7bb4c23c5861730/vit_pytorch/mae.py#L91
MSE loss was ca…
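For reference, a hedged sketch of what that line computes, with made-up shapes: the decoder's predictions for the masked patches are compared via MSE against the ground-truth pixel values of those same patches.

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only: batch of 2, 38 masked patches, 16*16*3 = 768 pixels each.
pred_pixel_values = torch.randn(2, 38, 768)  # decoder output for the masked patches
masked_patches = torch.randn(2, 38, 768)     # ground-truth pixels of those patches
recon_loss = F.mse_loss(pred_pixel_values, masked_patches)
```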
-
Hi, I have noticed that the model format of vit-victim-b16-s650m in your [Google Drive](https://drive.google.com/drive/folders/1-bGX-NQOh6MuRPoXJgYHb9-jWRJvviSg) is not a bin.gz file. I would like to …
-
**code:**
```python
query = 'What does the picture show?'
image_paths = ['/home/downloads/test.jpg']
huatuogpt_vision_model_path = "/home/llm_models/HuatuoGPT-Vision-7B"
from cli import HuatuoChatbot
b…
```
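The snippet is cut off at `b…`; a hedged reconstruction of the remaining lines, following the usage pattern shown in the HuatuoGPT-Vision README (constructor and method names are assumptions taken from that README):

```python
bot = HuatuoChatbot(huatuogpt_vision_model_path)  # load the model (assumed constructor)
output = bot.inference(query, image_paths)        # query the image (assumed API)
print(output)
```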
-
Script for the ViT-H/14 model: single-machine A6000 training and deployment, after changing the parameters
## Training script
#!/usr/bin/env bash
# Guide:
# This script supports distributed training on multi-gpu workers (as well as single-worker training).
# Please set the options …
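For orientation, a hedged sketch (not taken from the script itself) of the per-process setup that a torchrun-launched training entry point typically performs on a single multi-GPU machine:

```python
import os

import torch
import torch.distributed as dist

def setup_distributed() -> int:
    """Bind this process to one GPU using the env vars torchrun exports."""
    dist.init_process_group(backend="nccl", init_method="env://")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    return local_rank
```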
-
### Branch
main branch (mmpretrain version)
### Describe the bug
Running the DINOv2 example from https://github.com/open-mmlab/mmpretrain/tree/main/configs/dinov2
```python3
import torch
from mmpretrain import get_model
mode…
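# Hedged continuation: the snippet above is truncated; the DINOv2 README example
# presumably proceeds roughly as below (the model name here is an assumption;
# substitute the config that actually fails for you).
model = get_model('vit-small-p14_dinov2-pre_3rdparty', pretrained=True)
inputs = torch.rand(1, 3, 224, 224)
out = model(inputs)
```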
-
2:4 sparsity is only supported on Ampere+. We've only run benchmarks with A100s, but Phil (@philipbutler) has access to consumer GPUs that could also take advantage of sparse acceleration.
…
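As a hedged illustration of that acceleration path (PyTorch >= 2.1 torch.sparse API, Ampere-or-newer GPU assumed; the mask and sizes are made up):

```python
import torch
from torch.sparse import to_sparse_semi_structured

# Prune a linear layer's weight to the 2:4 pattern (keep 2 of every 4 values)
# and swap in a semi-structured sparse tensor so matmuls can hit the sparse
# tensor cores on Ampere+ hardware.
linear = torch.nn.Linear(128, 128).half().cuda()
mask = torch.tensor([1, 1, 0, 0], dtype=torch.bool, device="cuda").tile(128, 32)
linear.weight = torch.nn.Parameter(to_sparse_semi_structured(linear.weight.detach() * mask))
x = torch.rand(64, 128).half().cuda()
y = linear(x)  # dispatched to the 2:4 sparse kernel
```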