pykeio / ort

Fast ML inference & training for Rust with ONNX Runtime
https://ort.pyke.io/
Apache License 2.0
859 stars, 100 forks

try_extract_tensor fails with UnknownAllocationDevice on DirectMLExecutionProvider #253

Closed · constfold closed this issue 2 months ago

constfold commented 2 months ago

Here's my code:

use opencv::{core::CV_32F, dnn::blob_from_image, imgcodecs::imread_def};
use ort::{DirectMLExecutionProvider, Tensor};

let dml = DirectMLExecutionProvider::default();
assert!(dml.is_available().unwrap());

// Register the DirectML execution provider globally.
ort::init()
    .with_execution_providers([dml.build()])
    .commit()
    .unwrap();

let model = ort::Session::builder()
    .unwrap()
    .commit_from_file(r"mobilenet_v2.onnx")
    .unwrap();

// Load the image and convert it to an NCHW f32 blob.
let image = imread_def("test.png").unwrap();
let blob = blob_from_image(
    &image,
    1.0 / 255.0,
    (224, 224).into(),
    (0.485, 0.456, 0.406).into(),
    true,  // swap R and B channels
    false, // no center crop
    CV_32F,
)
.unwrap();
let input = Tensor::from_array(([1, 3, 224, 224], blob.data_typed::<f32>().unwrap())).unwrap();

let outputs = model.run(ort::inputs![input].unwrap()).unwrap();

// Fails here:
let output = outputs[0].try_extract_tensor::<f32>().unwrap();

it fails with:

called `Result::unwrap()` on an `Err` value: UnknownAllocationDevice("DML CPU")
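For context: before copying any data out, tensor extraction maps the value's allocator name to a known device, and an unrecognized name is reported as `UnknownAllocationDevice`. The following stand-in sketch illustrates that failure mode; the device table and function name are assumptions for illustration, not ort's actual code.

```rust
// Illustrative stand-in for the allocation-device lookup that runs
// before tensor data is extracted. The set of recognized names here
// is an assumption, not ort's real table.
fn device_from_name(name: &str) -> Result<&'static str, String> {
    match name {
        "Cpu" | "CPUAllocator" => Ok("cpu"),
        "Cuda" | "CudaPinned" => Ok("cuda"),
        // "DML CPU" matches none of the known names, so extraction
        // bails out before any data is copied.
        other => Err(format!("UnknownAllocationDevice(\"{other}\")")),
    }
}

fn main() {
    // Mirrors the error in this report.
    println!("{:?}", device_from_name("DML CPU"));
}
```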

dependencies:

[dependencies]
opencv = "0.92.2"
ort = { version = "2.0.0-rc.4", features = ["directml"] }
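Unrelated to the bug itself, but for anyone adapting the snippet above: once `try_extract_tensor` succeeds, the mobilenet output is usually reduced to a top-1 class via argmax over the flattened logits. A stdlib-only sketch; the helper name and sample values are assumptions for illustration.

```rust
// Pick the index and value of the largest logit, the usual last step
// after extracting a classification model's output tensor.
fn argmax(logits: &[f32]) -> (usize, f32) {
    logits
        .iter()
        .copied()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .unwrap()
}

fn main() {
    // Stand-in for the extracted [1, 1000] output view's data.
    let logits = [0.1_f32, 2.5, 0.3];
    println!("top-1: {:?}", argmax(&logits));
}
```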
marcown commented 2 months ago

@decahedron1 I get the same issue using

XNNPACKExecutionProvider::default().build()

and

ort = { version = "2.0.0-rc.5", features = ["xnnpack"] }

What could be the problem?