triton-inference-server / server

The Triton Inference Server provides an optimized cloud and edge inferencing solution.
https://docs.nvidia.com/deeplearning/triton-inference-server/user-guide/docs/index.html

Unexpected reshaping of output #7189

Open · lemousehunter opened this issue 5 months ago

lemousehunter commented 5 months ago

Description

I have specified [-1, 1024] as the output dimensions for my ensemble model, but the output is still reshaped to [1024].

Triton Information

NVIDIA Release 24.03 (build 86102629), Triton Server Version 2.44.0

Are you using the Triton container or did you build it yourself?

I am using the NGC Triton container.

To Reproduce

  1. Do not enable dynamic batching for the ensemble model
  2. Enable dynamic batching for the last model before the output
  3. Set the output dims of the final model in the ensemble to [1024]
  4. Set the output dims of the ensemble model to [-1, 1024] (see the config sketch after this list)
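
For illustration, here is a minimal sketch of the two configs these steps describe, with the ensemble collapsed to its final step for brevity. All model and tensor names (ensemble_model, bge-m3, TEXT, EMBEDDING, input_ids, dense_vecs, tokenized) are placeholders, not taken from the attached zip:

```
# Hypothetical ensemble config.pbtxt
name: "ensemble_model"
platform: "ensemble"
max_batch_size: 0    # step 1: no dynamic batching at the ensemble level
input [
  {
    name: "TEXT"
    data_type: TYPE_STRING
    dims: [ -1 ]
  }
]
output [
  {
    name: "EMBEDDING"
    data_type: TYPE_FP32
    dims: [ -1, 1024 ]    # step 4
  }
]
ensemble_scheduling {
  step [
    # ... preceding steps (e.g. tokenization) omitted ...
    {
      model_name: "bge-m3"
      model_version: -1
      input_map {
        key: "input_ids"
        value: "tokenized"    # produced by an earlier (omitted) step
      }
      output_map {
        key: "dense_vecs"
        value: "EMBEDDING"
      }
    }
  ]
}
```

```
# Hypothetical config.pbtxt for the final (ONNX Runtime) model
name: "bge-m3"
backend: "onnxruntime"
max_batch_size: 8    # step 2: dynamic batching enabled below
dynamic_batching { }
input [
  {
    name: "input_ids"
    data_type: TYPE_INT64
    dims: [ -1 ]
  }
]
output [
  {
    name: "dense_vecs"
    data_type: TYPE_FP32
    dims: [ 1024 ]    # step 3; with max_batch_size > 0, dims exclude the implicit batch dim
  }
]
```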

Describe the models (framework, inputs, outputs), ideally include the model configuration file (if using an ensemble include the model configuration file for that as well).

Backend of last model in ensemble: ONNX Runtime

Expected behavior

No reshaping of the output, since the batched output of the last model in the ensemble has the same dimensions as the specified output of the ensemble model.

Attachment: bge-m3_config.zip

lemousehunter commented 5 months ago

For more context: I am trying to generate embeddings for multiple texts in a single request, as in the client sketch below. The output of bge-m3 is (2, 1024) for a Text input of shape (2,), but the ensemble model returns an output of shape (2048,) instead: the bge-m3 output is flattened by the forced reshaping.
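
Here is a minimal client sketch of how the mismatch shows up, assuming hypothetical names (ensemble_model, TEXT, EMBEDDING) and a local HTTP endpoint; adjust these to match the actual configs in the attached zip:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Two texts in one request, so the expected embedding shape is (2, 1024).
texts = np.array([b"first sentence", b"second sentence"], dtype=object)
inp = httpclient.InferInput("TEXT", list(texts.shape), "BYTES")
inp.set_data_from_numpy(texts)

result = client.infer("ensemble_model", inputs=[inp])
out = result.as_numpy("EMBEDDING")

print(out.shape)  # observed: (2048,) instead of the expected (2, 1024)
```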