microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

XGBoost converter output size shape warning and multiclass prediction error #20908

Open · danyi211 opened 6 months ago

danyi211 commented 6 months ago

Describe the issue

Issue 1: Converted an xgboost binary:logistic model into ONNX; running inference with an ONNX Runtime session produces a warning message:

Original XGBoost Model (load from pickle) Predictions: [0.9994362]
ONNX output names: ['label', 'probabilities']
2024-06-03 15:21:38.723289 [W:onnxruntime:, execution_frame.cc:879 VerifyOutputSizes] Expected shape from model of {-1,1} does not match actual shape of {1,2} for output probabilities
ONNX Model Predictions: [array([1], dtype=int64), array([[5.6380033e-04, 9.9943620e-01]], dtype=float32)]

The xgboost model prediction is a single value, the probability of class 1, while the ONNX Runtime prediction is a vector of two probabilities [score_class_0, score_class_1], where score_class_1 matches the xgboost prediction on the same input vector. The declared ONNX output shape clearly does not match the actual output, hence the warning.
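
To recover the single xgboost-style score from the ONNX output, the second column of the probabilities output can be sliced; a minimal sketch, using the variable names from the repro script below:

onnx_probs = onnx_predictions[1]   # the 'probabilities' output, shape (1, 2)
score_class_1 = onnx_probs[:, 1]   # matches the xgboost binary:logistic prediction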

Is there a way to suppress this warning, or to improve the ONNX output for xgboost binary classification?
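
For suppressing the warning itself, ONNX Runtime's SessionOptions exposes a log severity threshold; a minimal sketch, assuming the same onnx_model_path as in the repro script below (this only hides the message, it does not fix the declared output shape):

import onnxruntime as ort

sess_options = ort.SessionOptions()
sess_options.log_severity_level = 3  # 0=verbose, 1=info, 2=warning (default), 3=error, 4=fatal
ort_session = ort.InferenceSession(onnx_model_path, sess_options)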

Issue 2: Converted an xgboost multi:softmax model (multiclass classification with num_class=2) into ONNX and ran inference with an ONNX Runtime session.

The xgboost model prediction is a vector of two probabilities [score_class_0, score_class_1]. The ONNX Runtime prediction is also a vector of two probabilities, but the values are inconsistent with the xgboost model's on the same input vector.

With the same test procedure, I got the following output:

Original XGBoost Model (load from pickle) Predictions: [[3.5089892e-04 9.9964905e-01]]
ONNX output names: ['label', 'probabilities']
ONNX Model Predictions: [array([0], dtype=int64), array([[0.99868447, 0.0013155 ]], dtype=float32)]
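
For a side-by-side check of the discrepancy, a minimal sketch, assuming the pickled model is an XGBClassifier-style object (xgb_model is an illustrative name; sample_input and onnx_predictions come from the repro script below):

import numpy as np

xgb_probs = np.asarray(xgb_model.predict_proba(sample_input), dtype=np.float32)
onnx_probs = onnx_predictions[1]  # the 'probabilities' output
print("max abs diff:", np.abs(xgb_probs - onnx_probs).max())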

To reproduce

import numpy as np
import onnxmltools
from onnxmltools.convert.common.data_types import FloatTensorType
import onnx
import onnxruntime as ort

# model_name and sample_input are assumed to be defined earlier
# (the training/conversion code is not shown in this report)

# Load the ONNX model
onnx_model_path = f"{model_name}.onnx"
onnx_model = onnx.load(onnx_model_path)

# Create an ONNX Runtime session
ort_session = ort.InferenceSession(onnx_model_path)

# Inspect the output names
output_names = [output.name for output in ort_session.get_outputs()]
print("ONNX output names:", output_names)

# Prepare the sample input for ONNX (the dtype must match the model's float32 input)
sample_input_onnx = sample_input.astype(np.float32)
input_name = ort_session.get_inputs()[0].name
print("Input Name:", input_name)
print("Sample Input Shape:", sample_input_onnx.shape)
print("Sample Input:", sample_input_onnx)

# Predict using the ONNX model
onnx_predictions = ort_session.run(None, {input_name: sample_input_onnx})
print("ONNX Model Predictions:", onnx_predictions)

Urgency

No response

Platform

Mac

OS Version

14.1.1

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.0

ONNX Runtime API

Python

Architecture

ARM64

Execution Provider

Default CPU

Execution Provider Library Version

No response

danyi211 commented 6 months ago

Updating onnxmltools and onnxconverter-common to HEAD on GitHub fixes the warning in the first issue.

github-actions[bot] commented 5 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.

MiuNa-Yang commented 1 week ago

same warning

JD557 commented 1 week ago

I believe the issue is that these tools haven't had a release in a long time.

As mentioned above, installing from HEAD fixed it for me: pip install git+https://github.com/onnx/onnxmltools git+https://github.com/microsoft/onnxconverter-common

danyi211 commented 1 week ago

Hi, I think Issue 2 is not solved yet... can anyone help look into this? Thanks