huggingface / optimum-neuron

Easy, fast and very cheap training and inference on AWS Trainium and Inferentia chips.
Apache License 2.0

LLaVA support #478

Open lifo9 opened 4 months ago

lifo9 commented 4 months ago

Feature request

LLaVA support is already present in huggingface/transformers.

But when I try to export a LLaVA model to the Neuron format, it fails with the following error:

optimum-cli export neuron --model liuhaotian/llava-v1.6-vicuna-7b --disable-validation /llava/
File "/usr/local/lib/python3.10/dist-packages/optimum/exporters/tasks.py", line 1140, in get_supported_tasks_for_model_type
    raise KeyError(
KeyError: "llava is not supported yet for transformers. Only ['audio-spectrogram-transformer', 'albert', 'bart', 'beit', 'bert', 'blenderbot', 'blenderbot-small', 'bloom', 'camembert', 'clip', 'clip-text-model', 'clip-text-with-projection', 'codegen', 'convbert', 'convnext', 'convnextv2', 'cvt', 'data2vec-text', 'data2vec-vision', 'data2vec-audio', 'deberta', 'deberta-v2', 'deit', 'detr', 'distilbert', 'donut', 'donut-swin', 'dpt', 'electra', 'encoder-decoder', 'esm', 'falcon', 'flaubert', 'glpn', 'gpt2', 'gpt-bigcode', 'gptj', 'gpt-neo', 'gpt-neox', 'groupvit', 'hubert', 'ibert', 'imagegpt', 'layoutlm', 'layoutlmv3', 'lilt', 'levit', 'longt5', 'marian', 'mbart', 'mistral', 'mobilebert', 'mobilevit', 'mobilenet-v1', 'mobilenet-v2', 'mpnet', 'mpt', 'mt5', 'm2m-100', 'nystromformer', 'owlvit', 'opt', 'llama', 'pegasus', 'perceiver', 'phi', 'pix2struct', 'poolformer', 'regnet', 'resnet', 'default-timm-config', 'roberta', 'roformer', 'sam', 'segformer', 'sentence-transformers-clip', 'sentence-transformers-transformer', 'sew', 'sew-d', 'speech-to-text', 'speecht5', 'splinter', 'squeezebert', 'swin', 'swin2sr', 't5', 'trocr', 'unet', 'unispeech', 'unispeech-sat', 'vae-encoder', 'vae-decoder', 'vision-encoder-decoder', 'vit', 'wavlm', 'wav2vec2', 'wav2vec2-conformer', 'whisper', 'xlm', 'xlm-roberta', 'yolos'] are supported. If you want to support llava please propose a PR or open up an issue."
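For context, the exporter dispatches on the config's `model_type` string, and `llava` is simply absent from the registry of supported types, which is what raises the `KeyError` above. A minimal illustrative sketch of that lookup (not optimum's actual code; the set below is an abridged subset of the list in the error message, and `check_supported` is a hypothetical helper):

```python
# Abridged subset of the supported model types listed in the error message.
SUPPORTED_NEURON_MODEL_TYPES = {"llama", "mistral", "gpt2", "opt", "clip", "vit"}

def check_supported(model_type: str) -> None:
    """Raise KeyError if the model type is not in the export registry,
    mirroring the failure mode seen in the traceback above."""
    if model_type not in SUPPORTED_NEURON_MODEL_TYPES:
        raise KeyError(f"{model_type} is not supported yet for transformers.")

check_supported("llama")  # a registered type passes silently

try:
    check_supported("llava")
except KeyError as exc:
    print(exc)  # -> 'llava is not supported yet for transformers.'
```

Adding LLaVA support would amount to registering a `llava` entry (with its export config) in this lookup, alongside the existing model types.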

Motivation

I'd like to run LLaVA on AWS Inferentia.

Your contribution

I can help test the eventual implementation.

lifo9 commented 3 months ago

Bump

cszhz commented 2 months ago

+1

swy-bys commented 1 month ago

It would be great to add a VLM (vision-language model) to the supported models.