huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Error loading model with device_map="auto" for AutoModelForVisualQuestionAnswering in visual-question-answering pipeline #34681

Open chakravarthik27 opened 2 days ago

chakravarthik27 commented 2 days ago

Who can help?

@Rocketknight1

Reproduction

from transformers import pipeline

# Model id taken from the traceback below; fails with device_map="auto".
pipe = pipeline(
    "visual-question-answering",
    model="Salesforce/blip-vqa-base",
    device_map="auto",
)

Expected behavior

device_map="auto" should be supported by every model class usable with the visual-question-answering pipeline.
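
Until that is the case, a workaround sketch consistent with the error below is to skip sharding entirely and pin the pipeline to a single device via the standard device argument (the model id comes from the traceback):

from transformers import pipeline

# Workaround sketch: instead of device_map="auto", place the whole model
# on one device (GPU 0 here; use device=-1 or device="cpu" for CPU-only).
pipe = pipeline(
    "visual-question-answering",
    model="Salesforce/blip-vqa-base",
    device=0,
)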

chakravarthik27 commented 2 days ago
ValueError: Could not load model Salesforce/blip-vqa-base with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForVisualQuestionAnswering'>, <class 'transformers.models.blip.modeling_blip.BlipForQuestionAnswering'>). See the original errors:

while loading with AutoModelForVisualQuestionAnswering, an error is thrown:
Traceback (most recent call last):
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\pipelines\base.py", line 286, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\models\auto\auto_factory.py", line 564, in from_pretrained
    return model_class.from_pretrained(
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\modeling_utils.py", line 3875, in from_pretrained
    no_split_modules = model._get_no_split_modules(device_map)
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\modeling_utils.py", line 1979, in _get_no_split_modules
    raise ValueError(
ValueError: BlipForQuestionAnswering does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.

while loading with BlipForQuestionAnswering, an error is thrown:
Traceback (most recent call last):
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\pipelines\base.py", line 286, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\modeling_utils.py", line 3875, in from_pretrained
    no_split_modules = model._get_no_split_modules(device_map)
  File "c:\Users\KALYAN\OneDrive\Documents\JSL_Developements\langtest_2.5.0\langtest\.venv\lib\site-packages\transformers\modeling_utils.py", line 1979, in _get_no_split_modules
    raise ValueError(
ValueError: BlipForQuestionAnswering does not support `device_map='auto'`. To implement support, the model class needs to implement the `_no_split_modules` attribute.
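
The error message itself points at the fix: the BLIP pretrained-model base class needs a _no_split_modules attribute listing the submodules that Accelerate must keep intact on a single device when device_map="auto" shards the model. A minimal sketch of such a patch to modeling_blip.py follows; the module names (BlipEncoderLayer, BlipTextEmbeddings) are illustrative assumptions, not a confirmed upstream change, and would need to be checked against the actual BLIP architecture:

from transformers import BlipConfig, PreTrainedModel

class BlipPreTrainedModel(PreTrainedModel):
    config_class = BlipConfig
    base_model_prefix = "blip"
    supports_gradient_checkpointing = True
    # Submodules that must never be split across devices when sharding.
    # The names below are assumptions for illustration only.
    _no_split_modules = ["BlipEncoderLayer", "BlipTextEmbeddings"]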