pykeio / ort

Fast ML inference & training for Rust with ONNX Runtime
https://ort.pyke.io/
Apache License 2.0

Trouble running CUDA and/or DirectML #200

Closed · jdiaz97 closed 5 months ago

jdiaz97 commented 5 months ago

Hello!

I have this code:

use std::sync::Once;

use ort::{
    CUDAExecutionProvider, DirectMLExecutionProvider, ExecutionProvider,
    GraphOptimizationLevel, Session,
};

static INIT: Once = Once::new();

fn import_model(model_path: &str) -> Session {
    // Load the ONNX Runtime dylib once per process (needed with `load-dynamic`).
    INIT.call_once(|| {
        ort::init_from("binaries/onnxruntime.dll").commit().unwrap();
    });

    let cuda = CUDAExecutionProvider::default();
    let directml = DirectMLExecutionProvider::default();

    if cuda.is_available().unwrap() {
        println!("You have CUDA!");
    } else {
        println!("You don't have CUDA!");
    }

    if directml.is_available().unwrap() {
        println!("You have DirectML!");
    } else {
        println!("You don't have DirectML!");
    }

    Session::builder()
        .unwrap()
        .with_optimization_level(GraphOptimizationLevel::Level3)
        .unwrap()
        .with_execution_providers([cuda.build(), directml.build()])
        .unwrap()
        .commit_from_file(model_path)
        .unwrap()
}
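
Note: `is_available` only checks that the provider was compiled into the loaded ONNX Runtime binary; attaching it to a session can still fail at build time (e.g. when the CUDA/cuDNN libraries can't be found). `with_execution_providers` logs such failures and silently falls back to the next provider, while `ExecutionProvider::register` surfaces them as a `Result`. A minimal sketch of the explicit route, assuming the same rc.2 API as the snippet above:

use ort::{CUDAExecutionProvider, ExecutionProvider, Session};

fn try_cuda_session(model_path: &str) -> ort::Result<Session> {
    let builder = Session::builder()?;

    // Unlike `with_execution_providers`, `register` returns an error when the
    // provider can't actually be initialized, making silent CPU fallback
    // visible.
    let cuda = CUDAExecutionProvider::default();
    if let Err(e) = cuda.register(&builder) {
        eprintln!("CUDA registration failed: {e}");
    }

    builder.commit_from_file(model_path)
}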

and my dependency:

ort = { version = "2.0.0-rc.2", features = ["cuda", "directml", "load-dynamic", "download-binaries"] }
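
As an aside, with `load-dynamic` enabled the dylib path can also come from the `ORT_DYLIB_PATH` environment variable instead of an explicit `init_from` call. A sketch of that variant (setting the variable programmatically here is only for illustration; it would normally be exported in the shell):

fn init_runtime() {
    // ort consults ORT_DYLIB_PATH when loading the runtime dynamically, so
    // this should be equivalent to ort::init_from("binaries/onnxruntime.dll").
    std::env::set_var("ORT_DYLIB_PATH", "binaries/onnxruntime.dll");
    ort::init().commit().unwrap();
}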

What am I doing wrong?

decahedron1 commented 5 months ago

Resolved on Discord
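
For reference, ort reports execution-provider registration failures through the `tracing` crate rather than as hard errors from `with_execution_providers`, so installing a subscriber is the usual way to see why CUDA or DirectML was skipped. A sketch; the `tracing` and `tracing-subscriber` dependencies are an assumption, not part of the original post:

// Cargo.toml additions (assumed): tracing = "0.1", tracing-subscriber = "0.3"

fn main() {
    // Install a subscriber before touching ort so that its diagnostics,
    // including execution-provider registration failures, are printed.
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::DEBUG)
        .init();

    // ... build the session as in the snippet above; a CUDA or DirectML
    // registration failure will now appear in the log output instead of
    // silently falling back to the CPU provider.
}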