-
In the recently released [segmentation notebook](https://github.com/facebookresearch/dinov2/blob/main/notebooks/semantic_segmentation.ipynb), a trained Mask2Former segmenter is loaded. In its structur…
-
## Bug Description
## To Reproduce
Minimal reproducible code:
```python
import torch
import torch_tensorrt
from transformers import ViTForImageClassification
model = ViTForImageClass…
-
How can I obtain the code for HM-ViT?
-
Any idea how to solve this? I am clueless. Thanks!
(midas-py310) C:\Midas\MiDaS>python run.py --model_type dpt_next_vit_large_384 --input_path "C:\Midas\MiDaS\input" --output_path "C:\Midas\MiDaS\o…
-
Are you considering adding vits support?
-
## Problem
For the same input image, I get different outputs for the visual embedding, which could make the results slightly worse than the original model's.
### env
tensorrt-llm 0.9.0, GPU: A10
mo…
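One way to tell numerical (e.g. FP16/TensorRT) noise apart from a real bug is to compare the two embeddings directly. A minimal sketch with NumPy, using synthetic stand-ins for the original and TensorRT-LLM embeddings (the arrays and tolerances here are illustrative assumptions, not values from this issue):

```python
import numpy as np

# Hypothetical embeddings: `ref` stands in for the original model's visual
# embedding, `trt` for the TensorRT-LLM output with small numerical noise.
ref = np.random.default_rng(0).standard_normal(768).astype(np.float32)
trt = ref + np.random.default_rng(1).standard_normal(768).astype(np.float32) * 1e-3

# Cosine similarity close to 1.0 and a small max absolute difference
# suggest precision-related drift rather than a wiring/preprocessing bug.
cos = float(ref @ trt / (np.linalg.norm(ref) * np.linalg.norm(trt)))
max_abs = float(np.max(np.abs(ref - trt)))
print(cos > 0.999, max_abs < 1e-2)
```

If the cosine similarity is far from 1.0, the mismatch is likely not just precision noise and is worth comparing layer by layer.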
-
Hi, thank you for your nice paper.
What is the original paper for ViT(T) and ViT(S)? In my opinion, this is unclear in both the code and your paper.
Is it from [1]?
Thank you.
[1] Hugo T…
-
Hi, I can see there are various encoder class configs including a ViT-mini. Will you be releasing one in the future?
-
I get this error: OSError: Can't load tokenizer for 'openai/clip-vit-large-patch14'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name…
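As the error message hints, `from_pretrained` resolves a relative local path before falling back to the Hub, so a folder in the working directory named like the model id can shadow the download. A quick check (the path here is just the model id from the error, used hypothetically):

```python
import os

# If a directory with the same name as the hub id exists locally,
# the tokenizer loader will try to read from it instead of the Hub.
model_id = "openai/clip-vit-large-patch14"
shadowed = os.path.isdir(model_id)
print(shadowed)  # True would explain the OSError: rename or remove the folder
```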
-
Running the command gives an error:
`python inference.py --cfg configs/my_infer.yaml `
```
Traceback (most recent call last):
File "/root/UniAnimate/utils/registry.py", line 67, in build_from_config
return req_ty…