microsoft / onnxruntime


[DML EP] ORT crashes after deleting one of the models and then running an inference #22948

Open · klin2024 opened this issue 16 hours ago

klin2024 commented 16 hours ago

Describe the issue

[DML EP] ORT crashes after deleting one of the loaded models and then running an inference.

To reproduce

  1. Load multiple models.
  2. Delete the last model that was loaded.
  3. Run an inference with one of the remaining models (see the sketch below).
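
A minimal Python sketch of the scenario, assuming two placeholder model files (`model_a.onnx`, `model_b.onnx`) and float32 inputs; the models from the report are not available, so treat this as an illustration rather than an exact reproduction:

```python
# Illustrative reproduction sketch; model file names and input dtype are assumptions.
import numpy as np
import onnxruntime as ort

providers = ["DmlExecutionProvider"]

# 1. Load multiple models on the DirectML execution provider.
sess_a = ort.InferenceSession("model_a.onnx", providers=providers)
sess_b = ort.InferenceSession("model_b.onnx", providers=providers)

# 2. Delete the last-loaded session, which releases its execution provider
#    (and, per the analysis below, its allocator).
del sess_b

# 3. Run an inference with the remaining session; this is where the crash is reported.
inp = sess_a.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # substitute 1 for dynamic dims
dummy = np.zeros(shape, dtype=np.float32)
outputs = sess_a.run(None, {inp.name: dummy})
```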

Urgency

No response

Platform

Windows

OS Version

26100.2314

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

1.20.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

DirectML

Execution Provider Library Version

No response

klin2024 commented 16 hours ago

Once the ExecutionProvider is released, its m_allocator is released as well.

We have to set the allocator of the current ExecutionProvider on the ExecutionContext in ExecutionProviderImpl::ExecuteOperator().

The issue goes away after making this modification.

[Image: proposed modification in ExecutionProviderImpl::ExecuteOperator()]