huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

ValueError: Unrecognized configuration class <class 'transformers.models.idefics2.configuration_idefics2.Idefics2Config'> for this kind of AutoModel: AutoModelForCausalLM. #30452

KaifAhmad1 closed this issue 5 months ago

KaifAhmad1 commented 5 months ago

System Info

CUDA: 12.1
OS: Windows x64
pip: 24.0
Python: 3.10.10
transformers: 4.40.0
bitsandbytes: 0.43.1

Who can help?

Hey there @younesbelkada, @amyeroberts, I am getting this exception when quantizing IDEFICS-2 for custom fine-tuning.

Reproduction

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

model_name = 'HuggingFaceM4/idefics2-8b'

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
    low_cpu_mem_usage=True
)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-8-b3dcaae6e5bf> in <cell line: 1>()
----> 1 model = AutoModelForCausalLM.from_pretrained(
      2     model_name,
      3     quantization_config=bnb_config,
      4     device_map="auto",
      5     trust_remote_code=True,

/usr/local/lib/python3.10/dist-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    564                 pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    565             )
--> 566         raise ValueError(
    567             f"Unrecognized configuration class {config.__class__} for this kind of AutoModel: {cls.__name__}.\n"
    568             f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapping.keys())}."

ValueError: Unrecognized configuration class <class 'transformers.models.idefics2.configuration_idefics2.Idefics2Config'> for this kind of AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig, BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, LlamaConfig, CodeGenConfig, CohereConfig, CpmAntConfig, CTRLConfig, Data2VecTextConfig, DbrxConfig, ElectraConfig, ErnieConfig, FalconConfig, FuyuConfig, GemmaConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig, GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, JambaConfig, LlamaConfig, MambaConfig, MarianConfig, MBartConfig, MegaConfig, MegatronBertConfig, MistralConfig, MixtralConfig, MptConfig, MusicgenConfig, MusicgenMelodyConfig, MvpConfig, OlmoConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PersimmonConfig, PhiConfig, PLBartConfig, ProphetNetConfig, QDQBertConfig, Qwen2Config, Qwen2MoeConfig, RecurrentGemmaConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, RwkvConfig, Speech2Text2Config, StableLmConfig, Starcoder2Config, TransfoXLConfig, TrOCRConfig, WhisperConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig.

Expected behavior

The model should load without raising an error.

amyeroberts commented 5 months ago

Hi @KaifAhmad1, idefics2 can be loaded using AutoModelForVision2Seq.

KaifAhmad1 commented 5 months ago

Hi @amyeroberts, any tips for optimizing IDEFICS-2 on a Tesla T4? I tried the unsloth library, but it doesn't support multimodal LLMs. Any other ideas?

amyeroberts commented 5 months ago

I've only used the model on an A10G, so I don't know about the Tesla T4. This question is best placed in our forums; we try to reserve GitHub issues for feature requests and bug reports.

I'd suggest opening a feature request to support more modalities in unsloth - they do great work and it would surely be very impactful for all users!

I don't know too much about the ins and outs of what unsloth does. Some of the techniques for fine-tuning models, e.g. LoRA, are also available through Hugging Face's peft library.

If it's just for inference, Accelerate has a guide on using large models, and transformers has a guide on quantization.

KaifAhmad1 commented 5 months ago

Thanks! @amyeroberts