-
Why does this error appear? Has anyone encountered this problem?
-
File "/home/jovyan/test_inference/llama3-main/inference_DDP_table_description.py", line 1, in
from transformers import AutoTokenizer, AutoModelForCausalLM, Accelerator
ImportError: cannot imp…
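The import line itself points at the likely cause: `Accelerator` is not exported by `transformers`; it ships in the separate `accelerate` package, so the fix is probably `from accelerate import Accelerator`. A small stdlib helper for diagnosing this kind of `cannot import name` error (the helper is illustrative, not part of either library):

```python
import importlib

def find_export(name, candidates):
    """Return the first module in `candidates` that actually exports
    `name`, or None -- handy for 'cannot import name' errors."""
    for mod_name in candidates:
        try:
            mod = importlib.import_module(mod_name)
        except ImportError:
            continue  # package not installed; try the next candidate
        if hasattr(mod, name):
            return mod_name
    return None

# e.g. find_export("Accelerator", ["transformers", "accelerate"])
# should report "accelerate" in an environment with both installed.
```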
-
**Description**
For profanity and NSFW detection, the code uses a transformers pipeline under the hood. However, the `device` argument should be passed so that the pipeline uses the best available hardware accelerator.
**Why is …
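A minimal sketch of the device selection being requested, following the `transformers.pipeline` convention of a CUDA device index vs. `-1` for CPU (the model name in the usage comment is a placeholder, not the library's actual model):

```python
def pick_device():
    """Best-effort accelerator choice for transformers.pipeline:
    CUDA device 0 if torch sees a GPU, else -1 (CPU)."""
    try:
        import torch
        if torch.cuda.is_available():
            return 0
    except ImportError:
        pass  # torch not installed; fall back to CPU
    return -1

# Hypothetical usage -- the model name is a placeholder:
# from transformers import pipeline
# clf = pipeline("text-classification",
#                model="<profanity-nsfw-model>",
#                device=pick_device())
```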
-
### Model description
I know the transformers library has not included object tracking models in the past, but this one can either plug into any object detection model or be an end-to-end open-world …
-
### Feature request
We wish to implement https://github.com/google-research/scenic/tree/main/scenic/projects/owl_vit
### Motivation
Google's version of SAM
-
Hi Ben,
I'm getting this running the Comfy notebook on an A6000.
I tried a fresh install and same thing.
Thanks for your help!
---
Total VRAM 48677 MB, total RAM 45140 MB
pytorch vers…
-
### Description
Supervision contains the function `from_transformers`, which takes the results of a Hugging Face transformer and converts them into `Detections`.
Up until now, we were recommendin…
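For reference, the shape of that conversion looks roughly like the following. This is a simplified stand-in, not supervision's actual implementation, and it assumes the post-processed Hugging Face result dict with `scores`, `labels`, and `boxes` keys:

```python
from dataclasses import dataclass

@dataclass
class Detections:
    """Simplified stand-in for supervision's Detections container."""
    xyxy: list        # [x_min, y_min, x_max, y_max] per detection
    confidence: list  # one score per detection
    class_id: list    # one integer label per detection

def from_transformers(results: dict) -> Detections:
    """Convert a post-processed Hugging Face object-detection result
    (dict with 'scores', 'labels', 'boxes') into Detections. Sketch only."""
    return Detections(
        xyxy=[list(box) for box in results["boxes"]],
        confidence=list(results["scores"]),
        class_id=list(results["labels"]),
    )
```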
-
I have been running the Swin_Transformer and VMamba models on the same A800 GPU, using the same batch sizes and the COCO2017 detection dataset.
However, I've observed that VMamba performs at least 5 tim…
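When comparing throughput across architectures like this, a simple wall-clock benchmark with warmup helps rule out measurement artifacts. A sketch (for GPU timing you would also call `torch.cuda.synchronize()` before reading the clock, otherwise asynchronous kernel launches skew the numbers):

```python
import time

def images_per_second(model_fn, batch, iters=50, warmup=5):
    """Rough throughput benchmark. `model_fn(batch)` is assumed to run
    one forward pass; warmup iterations are excluded from timing."""
    for _ in range(warmup):
        model_fn(batch)
    start = time.perf_counter()
    for _ in range(iters):
        model_fn(batch)
    elapsed = time.perf_counter() - start
    return iters * len(batch) / elapsed
```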
-
After waiting 10 minutes I get this message 🤷‍♂️
Due to a bug fix in https://github.com/huggingface/transformers/pull/28687 transcription using a multilingual Whisper will default to language detec…
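If the multilingual-Whisper behavior described in that message is the problem, the language can be pinned explicitly via `generate_kwargs` rather than relying on auto-detection (sketch; the model name in the comment is an example):

```python
# Pin the transcription language instead of letting a multilingual
# Whisper checkpoint auto-detect it.
generate_kwargs = {"language": "english", "task": "transcribe"}

# from transformers import pipeline
# asr = pipeline("automatic-speech-recognition",
#                model="openai/whisper-large-v3",
#                generate_kwargs=generate_kwargs)
```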
-
I downloaded the table-transformer-detection model locally through a huggingface_hub mirror, but when I call TableTransformerForObjectDetection.from_pretrained it does not pick up the local model. Is there something wrong with the table-transformer-detection model on huggingface_hub?
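If the download really is complete, pointing `from_pretrained` at the local snapshot directory with `local_files_only=True` avoids any round-trip to the Hub or mirror. A sketch with a small sanity check (the path is a placeholder):

```python
import os

def local_snapshot(path):
    """Return `path` if it looks like a local Hugging Face snapshot
    (i.e. contains config.json); otherwise raise. Sanity-check sketch."""
    if not os.path.isfile(os.path.join(path, "config.json")):
        raise FileNotFoundError(f"{path} does not contain config.json")
    return path

# from transformers import TableTransformerForObjectDetection
# model = TableTransformerForObjectDetection.from_pretrained(
#     local_snapshot("/path/to/table-transformer-detection"),
#     local_files_only=True,  # never fall back to the Hub/mirror
# )
```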