GPUOpen-LibrariesAndSDKs / RadeonML

https://gpuopen.com/radeon-prorender-suite/

RML fails to load a simple model that onnxruntime has no issues with #15

Open shehrzad opened 3 years ago

shehrzad commented 3 years ago

This ONNX model cannot be ingested by RML:

rml::Context context = rml::CreateDefaultContext();
std::wstring model_path(L"C:\\path\\to\\nsnet2-20ms-baseline.onnx");
rml::Graph graph = rml::LoadGraphFromFile(model_path); // throws exception

The specific error is:

INFO: rmlCreateDefaultContext(params=NULL, context=00000061D3CFF648)
INFO: Using D3D12 device: AMD Radeon(TM) Graphics
INFO: Model info:
 domain:
 ir_version: 6
 producer_name: pytorch
 producer_version: 1.6
 version: 0
 description:
 opset domain:
 opset version: 11
ERROR: Unknown layout, shape: (0, 0, 161)

In fact, with other frameworks the input tensor dimensions are reported as (-1, -1, 161), where -1 means "unknown". With a batch size of 1 and a single frame of data, the Python onnxruntime runs inference just fine. With the C++ onnxruntime library, using the code at the end of this issue, the expected input and output tensor dimensions are printed. I really need to use the Radeon GPU shaders for inference, but in its current state Radeon-ML isn't working for me.

ORT Telemetry: Ver = 1.7.0; Event = SessionCreation
Number of Input Nodes: 1
Number of Output Nodes: 1
Input Name: input
Input Type: float
Input Dimensions: [-1, -1, 161]
Output Name: output
Output Type: float
Output Dimensions: [-1, -1, 161]
The C++ onnxruntime code referenced above:

#include <iostream>
#include <string>
#include <vector>

#include <onnxruntime_cxx_api.h>

// The standard library provides no operator<< for std::vector, so add one to
// print shapes as "[d0, d1, ...]".
static std::ostream& operator<<(std::ostream& os, const std::vector<int64_t>& dims)
{
    os << "[";
    for (size_t i = 0; i < dims.size(); ++i)
        os << dims[i] << (i + 1 < dims.size() ? ", " : "");
    return os << "]";
}

// Minimal helper so the element type prints as a name rather than the raw
// enum value (only the type this model uses is mapped).
static const char* typeName(ONNXTensorElementDataType type)
{
    return type == ONNX_TENSOR_ELEMENT_DATA_TYPE_FLOAT ? "float" : "other";
}

int main()
{
    std::string instanceName{ "nsnet2" };
    std::wstring modelFilepath{ L"C:\\path\\to\\nsnet2-20ms-baseline.onnx" };

    // https://github.com/microsoft/onnxruntime/blob/rel-1.6.0/include/onnxruntime/core/session/onnxruntime_c_api.h#L123
    Ort::Env env(OrtLoggingLevel::ORT_LOGGING_LEVEL_WARNING, instanceName.c_str());
    Ort::SessionOptions sessionOptions;
    sessionOptions.SetIntraOpNumThreads(1);

    sessionOptions.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_EXTENDED);

    Ort::Session session(env, modelFilepath.c_str(), sessionOptions);

    Ort::AllocatorWithDefaultOptions allocator;

    size_t numInputNodes = session.GetInputCount();
    size_t numOutputNodes = session.GetOutputCount();

    std::cout << "Number of Input Nodes: " << numInputNodes << std::endl;
    std::cout << "Number of Output Nodes: " << numOutputNodes << std::endl;

    const char* inputName = session.GetInputName(0, allocator);
    std::cout << "Input Name: " << inputName << std::endl;

    Ort::TypeInfo inputTypeInfo = session.GetInputTypeInfo(0);
    auto inputTensorInfo = inputTypeInfo.GetTensorTypeAndShapeInfo();

    ONNXTensorElementDataType inputType = inputTensorInfo.GetElementType();
    std::cout << "Input Type: " << typeName(inputType) << std::endl;

    std::vector<int64_t> inputDims = inputTensorInfo.GetShape();
    std::cout << "Input Dimensions: " << inputDims << std::endl;

    const char* outputName = session.GetOutputName(0, allocator);
    std::cout << "Output Name: " << outputName << std::endl;

    Ort::TypeInfo outputTypeInfo = session.GetOutputTypeInfo(0);
    auto outputTensorInfo = outputTypeInfo.GetTensorTypeAndShapeInfo();

    ONNXTensorElementDataType outputType = outputTensorInfo.GetElementType();
    std::cout << "Output Type: " << typeName(outputType) << std::endl;

    std::vector<int64_t> outputDims = outputTensorInfo.GetShape();
    std::cout << "Output Dimensions: " << outputDims << std::endl;

    return 0;
}
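
For completeness, here is a minimal sketch (not part of the original issue) of actually running the session above with the dynamic dims bound to concrete values. It reuses session, inputName, and outputName from main(), and the zero-filled input frame is a placeholder:

// Sketch only: append inside main() after the queries above. Binds the
// model's dynamic (-1, -1, 161) input to a concrete (1, 1, 161) tensor
// (batch 1, one frame); the zero-filled data is a placeholder.
std::vector<float> inputData(1 * 1 * 161, 0.0f);
std::vector<int64_t> inputShape{ 1, 1, 161 };

Ort::MemoryInfo memoryInfo =
    Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);
Ort::Value inputTensor = Ort::Value::CreateTensor<float>(
    memoryInfo, inputData.data(), inputData.size(),
    inputShape.data(), inputShape.size());

std::vector<Ort::Value> outputs = session.Run(
    Ort::RunOptions{ nullptr },
    &inputName, &inputTensor, 1,
    &outputName, 1);
// outputs[0] now holds the result with the shape the model reports.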
BenjaminCoquelle commented 3 years ago

Hi, the problem is actually not the unknown dimension, which we do support, but the shape itself: it is rank-3, which we don't support at the moment, as we only handle computer-vision-type models for now. RadeonML is still very much a beta library and we would like to get input from users on their requirements. Could we discuss this in private?

Thanks Benjamin
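
For contrast with the failing rank-3 load, here is a hedged sketch using only the rml calls shown earlier in the issue; the model path is hypothetical and assumes a conventional rank-4 NCHW computer-vision model, which per Benjamin is the supported case:

rml::Context context = rml::CreateDefaultContext();
// Hypothetical image model with input (-1, 3, 224, 224): rank-4 NCHW, so the
// unknown batch dimension alone should not be a problem.
std::wstring model_path(L"C:\\path\\to\\some_cv_model.onnx");
rml::Graph graph = rml::LoadGraphFromFile(model_path); // expected to succeed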

shehrzad commented 3 years ago

Thanks Benjamin,

So if I understand you correctly, RadeonML expects a rank-4 input tensor, i.e. NCHW à la TensorRT or TensorFlow? Yes, we can discuss this matter in private.
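
If that reading is right, here is a speculative sketch of how the existing (batch, frames, 161) data would be laid out as rank-4 NCHW with a singleton channel; none of this is confirmed RadeonML behavior:

// Speculative: view (batch, frames, 161) as NCHW with N = batch, C = 1,
// H = frames, W = 161. Flat index in standard NCHW stride order.
inline size_t nchwIndex(size_t n, size_t h, size_t w, size_t H, size_t W)
{
    const size_t C = 1, c = 0; // singleton channel
    return ((n * C + c) * H + h) * W + w;
}
// The rank-3 data is then bit-identical; only the declared shape changes,
// e.g. (1, 1, frames, 161) instead of (1, frames, 161).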