microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Get results from Mask RCNN model with C++ #15541

Open oriolorra opened 1 year ago

oriolorra commented 1 year ago

Describe the issue

Hi everyone,

I have managed to run inference with a Mask R-CNN ONNX model on an image from the COCO dataset in C++. However, I am now unable to extract the bounding boxes, labels, scores, and masks from the output tensors. I looked through the /onnxruntime-inference-examples/c_cxx/ repo, but I could not solve it. In Python this is straightforward, but I have to do it in C++.

Code:

  g_ort->ReleaseMemoryInfo(memory_info);
  const char *input_names[] = {"image"};
  const char *output_names[] = {"6568", "6570", "6572", "6887"};

  const size_t num_outputs = sizeof(output_names) / sizeof(char *);
  OrtValue *output_tensor[num_outputs] = {NULL, NULL, NULL, NULL};

  LOG("running session")
  auto t1 = high_resolution_clock::now();
  OrtStatusPtr status = g_ort->Run(session, NULL, input_names,
                                   (const OrtValue *const *)&input_tensor, 1,
                                   output_names, num_outputs, output_tensor);
  if (status != NULL) {
    LOG("Run failed: " << g_ort->GetErrorMessage(status));
    g_ort->ReleaseStatus(status);
  }
  auto t2 = high_resolution_clock::now();
  auto ms_int = duration_cast<milliseconds>(t2 - t1);
  LOG("Inference Time:" << ms_int.count() << "ms");

  // Print the shape of each output tensor. GetTensorTypeAndShape allocates
  // the info object itself, so no CreateTensorTypeAndShapeInfo call is
  // needed; each info object must be released after use.
  for (size_t i = 0; i < num_outputs; ++i) {
    OrtTensorTypeAndShapeInfo *output_info = NULL;
    g_ort->GetTensorTypeAndShape(output_tensor[i], &output_info);
    size_t num_dims = 0;
    g_ort->GetDimensionsCount(output_info, &num_dims);
    std::vector<int64_t> output_shape(num_dims);
    g_ort->GetDimensions(output_info, output_shape.data(), num_dims);
    g_ort->ReleaseTensorTypeAndShapeInfo(output_info);

    // Note: the labels and scores outputs are rank 1, so indexing
    // output_shape[1] there reads past the end of the vector -- that is
    // where the "0" values reported below come from.
    std::cout << output_shape[0] << " - "
              << (num_dims > 1 ? output_shape[1] : 0) << std::endl;
  }

I am getting these shapes: 13 - 4, 13 - 0, 13 - 0, 13 - 1.

But in Python, I am getting 14 classes.

Can anyone give me a hand?

Thanks

To reproduce

Ubuntu 22.04, onnxruntime 1.14.1, CUDA 11.7, Mask R-CNN ONNX model

Urgency

No response

Platform

Linux

OS Version

22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.14.1

ONNX Runtime API

C++

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.7

w1005444804 commented 1 year ago

I encountered a similar problem, but it is not resolved.