-
Hello,
I am very new to Hugging Face and machine learning in general. I understand that the BLIP model is not supported for conversion to Core ML. Can this be added to this repo? If not, is there a …
-
The following models are not yet supported, in addition to the `NOT_SUPPORTED_BY_BB_MODELS` mentioned in #29, for various reasons.
```python
SUPPORTED_BUT_FAILED_BY_WB_MODELS = {
# "convformer": "Cann…
-
./lib/train/data/loader.py:87: UserWarning: An output with one or more elements was resized since it had shape [1572864], which does not match the required output shape [1, 96, 128, 128]. This behavio…
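This warning typically means a pre-allocated `out=` tensor was passed with a shape that does not match the operation's result, so PyTorch silently resized it. A minimal sketch of the same situation (the shapes mirror the ones in the warning; the op itself is hypothetical, since the real call sits inside `loader.py`):

```python
import warnings

import torch

src = torch.randn(1, 96, 128, 128)

# A flat buffer with the right element count but the wrong shape:
# PyTorch resizes it on the fly and emits the UserWarning from the log.
flat = torch.empty(1572864)
with warnings.catch_warnings():
    warnings.simplefilter("always")
    torch.mul(src, 1.0, out=flat)

# Fix: pre-allocate the output with the exact required shape.
out = torch.empty(1, 96, 128, 128)
torch.mul(src, 1.0, out=out)
print(out.shape)  # torch.Size([1, 96, 128, 128])
```

The fix is usually to allocate the destination tensor with the result's full shape (or drop `out=` entirely and let PyTorch allocate it).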
-
Code: model = AutoModelForSequenceClassification.from_pretrained(checkpoint)
Running it raises an error about a missing configuration file:
ValueError: Unrecognized model in XXX. **Should have a `model_type` key in its config.json**, or contain one of t…
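This `ValueError` means the path (or repo id) passed to `from_pretrained` has no usable `config.json`, or the file lacks the `model_type` key that the Auto classes dispatch on. A stdlib-only sketch of what a minimal, valid config looks like (the directory name and values are hypothetical):

```python
import json
import pathlib

# Hypothetical local checkpoint directory.
ckpt = pathlib.Path("my-checkpoint")
ckpt.mkdir(exist_ok=True)

# AutoModelForSequenceClassification reads config.json and dispatches
# on "model_type"; without that key it raises the ValueError above.
(ckpt / "config.json").write_text(
    json.dumps({"model_type": "bert", "num_labels": 2}, indent=2)
)

cfg = json.loads((ckpt / "config.json").read_text())
print("model_type" in cfg)  # True
```

If the checkpoint was saved without `save_pretrained`, re-saving the model that way will write a complete `config.json` alongside the weights.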
-
Hi,
I trained your code on ImageNet-1k from scratch with your config file (mobilevit-small), with only one change: a new batch size of 32/GPU, for an effective batch size of 32*4. I get top-1 accurac…
-
I got this error, and I tried to solve it by installing vision-transformer, but it still shows the same error:
`pip install vision-transformer-pytorch`
-
Hi, I am working on using vision transformers, not only the vanilla ViT but also different models, on the UMDAA2 dataset. This dataset has an image resolution of 128*128; would it be better to transform the im…
-
Hello,
I'm trying to create a ViT-tiny model, which is mentioned [here](https://github.com/apple/ml-cvnets/blob/main/docs/source/en/general/README-model-zoo.md).
My approach is:
- I downloaded this r…
-
When running the tests corresponding to the [pytorch feature extract test](https://github.com/pytorch/vision/blob/main/test/test_backbone_utils.py), some models failed. They are listed below; the ones marked with ** do not need any handling for now.
| Reason | File name | Notes |
| ----------- | ------…
-
My server cannot connect to the Hugging Face website, so I manually downloaded the pretrained model used in the code and placed it in the `img2img-turbo-main` folder. After executing the command `pyth…
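When the machine cannot reach the Hub, two things usually help: forcing offline mode via environment variables, and pointing `from_pretrained` at the local directory instead of a repo id. A sketch (the checkpoint path is hypothetical, and the commented-out call assumes `transformers` is installed):

```python
import os

# Tell huggingface_hub / transformers to never attempt a network call.
os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

# Then load from the local folder (hypothetical path) rather than a hub id:
# from transformers import AutoModel
# model = AutoModel.from_pretrained(
#     "img2img-turbo-main/checkpoints/my_model", local_files_only=True
# )
print(os.environ["HF_HUB_OFFLINE"])  # 1
```

Note that the local folder must contain the full set of files the loader expects (`config.json`, the weight file, and any tokenizer/processor files), not just the weights.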