microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Help needed to export in ONNX #21282

Closed ladanisavan closed 4 months ago

ladanisavan commented 4 months ago

Describe the issue

Hi there,

I'm seeking guidance on exporting a custom fine-tuned Phi-3 Vision model to ONNX. I've followed the ONNX build model guide from this link.

The build command I used was:

```shell
python3 -m onnxruntime_genai.models.builder -i ep_2_grad_32_lr_3e-5/ -o onnx_output/ -p int4 -e cuda --extra_options int4_block_size=32 int4_accuracy_level=4
```

The build process was successful and generated the following files:

However, the number of files generated doesn't match the file count in the official HF ONNX repo, microsoft/Phi-3-vision-128k-instruct-onnx-cuda.

Files highlighted in red below are missing:

[Screenshot (2024-07-05): side-by-side file listing with the missing files highlighted in red]

Additionally, while loading the model using ONNX Runtime, the following error occurs: `OrtException: Load model from onnx_output failed: Protobuf parsing failed.`
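One way to narrow down the mismatch is to diff the local builder output against the file listing of the reference repo. A minimal sketch using only the standard library; the expected file names below are illustrative placeholders, not the authoritative list -- compare against the actual files shown on the microsoft/Phi-3-vision-128k-instruct-onnx-cuda page:

```python
from pathlib import Path

def missing_files(local_dir: str, expected: list[str]) -> list[str]:
    """Return the expected file names that are absent from local_dir."""
    d = Path(local_dir)
    present = {p.name for p in d.iterdir()} if d.is_dir() else set()
    return [name for name in expected if name not in present]

# Illustrative names only -- replace with the real listing from the
# official Hugging Face repo before drawing conclusions.
EXPECTED = [
    "genai_config.json",
    "phi-3-v-128k-instruct-text.onnx",
    "phi-3-v-128k-instruct-text.onnx.data",
    "phi-3-v-128k-instruct-text-embedding.onnx",
    "phi-3-v-128k-instruct-vision.onnx",
    "tokenizer.json",
]

print(missing_files("onnx_output", EXPECTED))
```

Whatever this prints for your `onnx_output/` directory is a concrete list to include when filing the issue.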

I have also noticed that the "embedding" and "vision" sections are missing from the genai_config.json.
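For context, a multimodal GenAI config carries separate entries for the embedding and vision sub-models alongside the text decoder. A rough, abbreviated sketch of the shape being described (key names are illustrative; the authoritative layout is the genai_config.json in the official repo):

```json
{
  "model": {
    "embedding": { "filename": "...-text-embedding.onnx" },
    "vision":    { "filename": "...-vision.onnx" },
    "decoder":   { "filename": "...-text.onnx" }
  }
}
```

If the builder only emitted a decoder entry, the runtime has no way to find the vision and embedding graphs, which would be consistent with the load failure above.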

Can someone help me identify if I'm missing anything? Thanks

To reproduce

Follow the steps provided above.

Urgency

No response

Platform

Linux

OS Version

Ubuntu 24.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

0.3.0

ONNX Runtime API

Python

Architecture

X86

Execution Provider

CUDA

Execution Provider Library Version

CUDA 12.1

yuslepukhin commented 4 months ago

You should probably file this issue with the ONNX Runtime GenAI library.

kunal-vaishnavi commented 4 months ago

We can close this issue since it has already been filed in ONNX Runtime GenAI here.