huggingface / optimum

🚀 Accelerate training and inference of 🤗 Transformers and 🤗 Diffusers with easy to use hardware optimization tools
https://huggingface.co/docs/optimum/main/
Apache License 2.0

OPTIMUM Onnx Exporter for openai/clip-vit-large-patch14 model #1955

Open antje2233 opened 1 month ago

antje2233 commented 1 month ago

Feature request

I wonder if the task text-classification could be supported in the ONNX export for CLIP? I want to use the openai/clip-vit-large-patch14 model for zero-shot image classification (classifying images against given candidate labels, without pretraining), but I get the following error:

```
ValueError                                Traceback (most recent call last)
File /home/danne00a/ZablageBlazeG/ZeroShotClassification/zeroshotclassifier.py:2
      1 #%%
----> 2 ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)

File ~/mambaforge/envs/ZeroShot_Mamba_env/lib/python3.11/site-packages/optimum/onnxruntime/modeling_ort.py:669, in ORTModel.from_pretrained(cls, model_id, export, force_download, use_auth_token, cache_dir, subfolder, config, local_files_only, provider, session_options, provider_options, use_io_binding, kwargs)
    620 @classmethod
    621 @add_start_docstrings(FROM_PRETRAINED_START_DOCSTRING)
    622 def from_pretrained(
   (...)
    636     kwargs,
    637 ):
    638     """
    639     provider (str, defaults to "CPUExecutionProvider"):
    640         ONNX Runtime provider to use for loading the model. See https://onnxruntime.ai/docs/execution-providers/ for
   (...)
    667         ORTModel: The loaded ORTModel model.
    668     """
--> 669     return super().from_pretrained(
    670         model_id,
    671         export=export,
    672         force_download=force_download,
    673         use_auth_token=use_auth_token,
    674         cache_dir=cache_dir,
...
    274     )
    276 # TODO: Fix in Transformers so that SdpaAttention class can be exported to ONNX. attn_implementation is introduced in Transformers 4.36.
    277 if model_type in SDPA_ARCHS_ONNX_EXPORT_NOT_SUPPORTED and _transformers_version >= version.parse("4.35.99"):

ValueError: Asked to export a clip model for the task text-classification, but the Optimum ONNX exporter only supports the tasks feature-extraction, zero-shot-image-classification for clip. Please use a supported task. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the task text-classification to be supported in the ONNX export for clip.
```
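For reference, the set of supported export tasks can also be queried programmatically. A minimal sketch, assuming Optimum's `TasksManager` API:

```python
# Sketch: list the tasks the ONNX exporter supports for the "clip" model type.
# Assumes optimum.exporters.tasks.TasksManager, which maps tasks to export configs.
from optimum.exporters.tasks import TasksManager

print(list(TasksManager.get_supported_tasks_for_model_type("clip", "onnx").keys()))
# expected to include "feature-extraction" and "zero-shot-image-classification"
```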

Motivation

I'm struggling with the size of the openai/clip-vit-large-patch14 model, so I want to convert it to ONNX with Optimum!

Your contribution

No ideas so far...

fxmarty commented 1 month ago

Hi @antje2233, which command are you running? `optimum-cli export onnx --model openai/clip-vit-large-patch14 clip_onnx --task zero-shot-image-classification` works for me.
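The same export can also be done from Python instead of the CLI. A minimal sketch, assuming the `main_export` entry point in `optimum.exporters.onnx` (the output directory name is illustrative):

```python
# Sketch of the programmatic equivalent of the CLI command above.
# Assumes optimum.exporters.onnx.main_export; "clip_onnx" is the output directory.
from optimum.exporters.onnx import main_export

main_export(
    model_name_or_path="openai/clip-vit-large-patch14",
    output="clip_onnx",
    task="zero-shot-image-classification",
)
```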

antje2233 commented 1 month ago

Great, that works for me as well! Thanks a lot! I tried to export it inside the Python code: `ort_model = ORTModelForSequenceClassification.from_pretrained(model_checkpoint, export=True)`, which failed with the above-mentioned error message.

antje2233 commented 1 month ago

But unfortunately, if I want to apply the ONNX model, I get an error message that the required inputs are missing. This is my code:

```python
#%%
import onnxruntime
import numpy as np
from PIL import Image
import os

#%%
# load the ONNX model
onnx_model_path = "./clip_onnx/model.onnx"  # path to the saved ONNX model
ort_session = onnxruntime.InferenceSession(onnx_model_path)

#%%
# candidate labels
candidate_labels = ["turbine part", "blade with spallation", "cooled turbine blade", "cooled turbine blade with spallation"]

#%%
# directory containing the images
image_directory = "/home/danne00a/ZablageBlazeG/ZeroShotClassification/mixedparts_4_zeroShot_short"

#%%
# list for collecting the image paths
image_paths = []

# walk the directory and collect the image paths
for file_name in os.listdir(image_directory):
    if file_name.lower().endswith(".jpg") or file_name.lower().endswith(".jpeg") or file_name.lower().endswith(".png"):
        image_path = os.path.join(image_directory, file_name)
        image_paths.append(image_path)

#%%
# loop running zero-shot classification on every image
for image_path in image_paths:
    image = Image.open(image_path)

    # prepare the image for classification
    image = image.resize((224, 224))  # example size, adapted to the model
    image_np = np.array(image).astype(np.float32)  # type conversion
    image_np = np.transpose(image_np, (2, 0, 1))  # reorder the axes
    image_np = np.expand_dims(image_np, axis=0)  # add a batch dimension

    # run inference with the loaded ONNX model
    ort_inputs = {ort_session.get_inputs()[0].name: image_np}
    ort_outs = ort_session.run(None, ort_inputs)

    # process the output into classification results
    scores = ort_outs[0][0]
    result = [{"score": score, "label": label} for score, label in sorted(zip(scores, candidate_labels), key=lambda x: -x[0])]
    print(result)

#%%
```

And this is the error message:

```
File /home/danne00a/ZablageBlazeG/ZeroShotClassification/zeroshotclassifier_w_onnx.py:14
     12 # run inference with the loaded ONNX model
     13 ort_inputs = {ort_session.get_inputs()[0].name: image_np}
---> 14 ort_outs = ort_session.run(None, ort_inputs)
     16 # process the output into classification results
     17 scores = ort_outs[0][0]

File ~/mambaforge/envs/ZeroShot_Mamba_env/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:216, in Session.run(self, output_names, input_feed, run_options)
    202 def run(self, output_names, input_feed, run_options=None):
    203     """
    204     Compute the predictions.
   (...)
    214         sess.run([output_name], {input_name: x})
    215     """
--> 216     self._validate_input(list(input_feed.keys()))
    217     if not output_names:
    218         output_names = [output.name for output in self._outputs_meta]

File ~/mambaforge/envs/ZeroShot_Mamba_env/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:198, in Session._validate_input(self, feed_input_names)
    196     missing_input_names.append(input.name)
    197 if missing_input_names:
--> 198     raise ValueError(
    199         f"Required inputs ({missing_input_names}) are missing from input feed ({feed_input_names})."
    200     )

ValueError: Required inputs (['pixel_values', 'attention_mask']) are missing from input feed (['input_ids']).
```
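For reference, the exported CLIP graph takes the text and image inputs together (`input_ids`, `attention_mask`, `pixel_values`), since CLIP scores the candidate labels against each image; feeding only a single tensor triggers the error above. A minimal inference sketch that supplies all three, assuming the export above and using transformers' `CLIPProcessor` for preprocessing (the image path is a placeholder, and the `logits_per_image` output name follows the usual CLIP export layout):

```python
# Sketch: zero-shot classification with the exported CLIP ONNX model.
# Assumes the model was exported as above; "example.jpg" is a placeholder path.
import numpy as np
import onnxruntime
from PIL import Image
from transformers import CLIPProcessor

processor = CLIPProcessor.from_pretrained("openai/clip-vit-large-patch14")
ort_session = onnxruntime.InferenceSession("./clip_onnx/model.onnx")

candidate_labels = ["turbine part", "blade with spallation", "cooled turbine blade", "cooled turbine blade with spallation"]
image = Image.open("example.jpg")  # placeholder image path

# The processor tokenizes the labels and preprocesses the image in one call;
# return_tensors="np" yields NumPy arrays that ONNX Runtime accepts directly.
inputs = processor(text=candidate_labels, images=image, return_tensors="np", padding=True)
ort_inputs = {
    "input_ids": inputs["input_ids"].astype(np.int64),        # ONNX expects int64 ids
    "attention_mask": inputs["attention_mask"].astype(np.int64),
    "pixel_values": inputs["pixel_values"].astype(np.float32),
}
ort_outs = ort_session.run(None, ort_inputs)

# Pick the image-to-text similarity scores by output name rather than position;
# "logits_per_image" is the name CLIP exports typically use (an assumption here).
output_names = [o.name for o in ort_session.get_outputs()]
logits = ort_outs[output_names.index("logits_per_image")][0]

# Softmax over the candidate labels to get probabilities, highest first.
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()
result = [{"score": float(p), "label": l} for p, l in sorted(zip(probs, candidate_labels), key=lambda x: -x[0])]
print(result)
```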

antje2233 commented 1 month ago

And how do you apply the ONNX model afterwards? I get an error message that the required input values are missing...


fxmarty commented 1 month ago

@antje2233 Could you format your code as follows: https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax#quoting-code

antje2233 commented 1 month ago

But this is only formatting... I hope you can still read my code?


fxmarty commented 1 month ago

No: https://github.com/huggingface/optimum/issues/1955#issuecomment-2231171067