The input shape of our network is dynamic. When we run the network with onnxruntime using the default CPU backend, everything works as expected. But when we use onnxruntime with the DNNL backend, memory usage keeps increasing as the input shape varies. If the input shape is fixed, memory usage does not grow. So we suspect that onnxruntime + DNNL always allocates new memory for each new input shape — is that the case?