huggingface / optimum-habana

Easy and lightning fast training of 🤗 Transformers on Habana Gaudi processor (HPU)
Apache License 2.0

CLIP contrastive image-text inference error #1179

Closed: caijimin closed this issue 1 month ago

caijimin commented 1 month ago

System Info

optimum-habana 1.16.2
docker vault.habana.ai/gaudi-docker/1.16.2/ubuntu22.04/habanalabs/pytorch-installer-2.2.2:latest


Reproduction

cd examples/contrastive-image-text

python run_clip.py --output_dir ./clip-roberta-finetuned --model_name_or_path openai/clip-vit-large-patch14 --data_dir /DISK1/data/datasets/COCO --dataset_name ydshieh/coco_dataset_script --dataset_config_name=2017 --image_column image_path --caption_column caption --remove_unused_columns=False --do_eval --per_device_eval_batch_size="64" --overwrite_output_dir --use_habana --use_lazy_mode --use_hpu_graphs_for_inference --gaudi_config_name Habana/clip --bf16 --mediapipe_dataloader

MediaPipe device GAUDI2 device_type GAUDI2 device_id 0 pipe_name ClipMediaPipe:0
[INFO|trainer.py:1759] 2024-08-01 11:39:44,968 >> Using HPU graphs for inference.
[INFO|trainer.py:1779] 2024-08-01 11:39:44,968 >> Running Evaluation
[INFO|trainer.py:1781] 2024-08-01 11:39:44,968 >> Num examples = 25014
[INFO|trainer.py:1784] 2024-08-01 11:39:44,968 >> Batch size = 64
Traceback (most recent call last):
  File "/root/optimum-habana/examples/contrastive-image-text/run_clip.py", line 611, in <module>
    main()
  File "/root/optimum-habana/examples/contrastive-image-text/run_clip.py", line 586, in main
    metrics = trainer.evaluate()
  File "/usr/local/lib/python3.10/dist-packages/optimum/habana/transformers/trainer.py", line 1681, in evaluate
    output = eval_loop(
  File "/usr/local/lib/python3.10/dist-packages/optimum/habana/transformers/trainer.py", line 1831, in evaluation_loop
    loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys)
  File "/usr/local/lib/python3.10/dist-packages/optimum/habana/transformers/trainer.py", line 2009, in prediction_step
    raise error
  File "/usr/local/lib/python3.10/dist-packages/optimum/habana/transformers/trainer.py", line 1986, in prediction_step
    loss, outputs = self.compute_loss(model, inputs, return_outputs=True)
  File "/usr/local/lib/python3.10/dist-packages/transformers/trainer.py", line 3161, in compute_loss
    outputs = model(**inputs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1523, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 716, in forward
    return wrapped_hpugraph_forward(
  File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 594, in wrapped_hpugraph_forward
    outputs = orig_fwd(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 1115, in forward
    text_outputs = self.text_model(
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 697, in forward
    hidden_states = self.embeddings(input_ids=input_ids, position_ids=position_ids)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/transformers/models/clip/modeling_clip.py", line 225, in forward
    embeddings = inputs_embeds + position_embeddings
RuntimeError: Incompatible input shapes, broadcast not possible. Tensor1 Size: 768 128 64 Tensor2 Size: 768 77 1
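The broadcast failure points at the text sequence length: the captions are padded to 128 tokens (Tensor1 is 64 x 128 x 768), while the CLIP text encoder only provides 77 position embeddings (Tensor2 is 1 x 77 x 768). A minimal way to check that limit, assuming the standard transformers CLIP configuration API:

from transformers import CLIPConfig

# CLIP's text encoder is limited to 77 positions, so inputs tokenized to
# length 128 cannot be added to its position embeddings.
config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch32")
print(config.text_config.max_position_embeddings)  # 77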

Expected behavior

I tried both "--model_name_or_path openai/clip-vit-large-patch14" and "--model_name_or_path openai/clip-vit-base-patch32" and got the same error.

caijimin commented 1 month ago

Sorry, my own fault. I forgot to run save_pretrained("clip-roberta") first.
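For reference, this appears to mean the model-preparation step from the contrastive-image-text example's README, which builds a VisionTextDualEncoderModel (CLIP vision tower plus RoBERTa text tower) and saves it locally as "clip-roberta" before running run_clip.py; passing a raw CLIP checkpoint instead is what leads to the 128-vs-77 mismatch above. A minimal sketch of that step, assuming the README's model names (treat the exact identifiers as assumptions):

from transformers import (
    AutoImageProcessor,
    AutoTokenizer,
    VisionTextDualEncoderModel,
    VisionTextDualEncoderProcessor,
)

# Build the dual-encoder model and its processor, then save them locally so
# run_clip.py can load them via --model_name_or_path ./clip-roberta.
model = VisionTextDualEncoderModel.from_vision_text_pretrained(
    "openai/clip-vit-base-patch32", "roberta-base"
)
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
image_processor = AutoImageProcessor.from_pretrained("openai/clip-vit-base-patch32")
processor = VisionTextDualEncoderProcessor(image_processor, tokenizer)

model.save_pretrained("clip-roberta")
processor.save_pretrained("clip-roberta")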