Closed yjmwolf closed 2 years ago
@yjmwolf The issue you described is usually caused by a native memory leak. If all resources are closed properly, you should not see a crash. We have many customers running continuous inference with PyTorch, and they have not noticed a memory leak.
Can you create a minimal reproducible project so we can look into it?
Feel free to re-open this issue if you can provide a reproducible project.
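For reference, the pattern the maintainer is describing is deterministic cleanup of native-backed resources via try-with-resources. The sketch below is illustrative only: `FakeNativeResource` is a hypothetical stand-in for a native-backed object such as DJL's `Predictor` or an `NDManager`, not DJL's actual API. The point is that `close()` runs on every loop iteration, even on exceptions, so native memory does not accumulate.

```java
// Illustrative sketch of the resource-management pattern: FakeNativeResource
// stands in for a native-backed object (e.g. a DJL Predictor); the names
// here are hypothetical, not DJL's real API.
class FakeNativeResource implements AutoCloseable {
    static int openCount = 0; // tracks resources whose native memory is still held

    FakeNativeResource() {
        openCount++; // simulate allocating native memory
    }

    String predict(String input) {
        return "result:" + input;
    }

    @Override
    public void close() {
        openCount--; // simulate releasing native memory
    }
}

public class Main {
    public static void main(String[] args) {
        // try-with-resources guarantees close() is called every iteration,
        // even if predict() throws, so native memory cannot pile up.
        for (int i = 0; i < 1000; i++) {
            try (FakeNativeResource r = new FakeNativeResource()) {
                r.predict("batch-" + i);
            }
        }
        System.out.println("open resources after loop: " + FakeNativeResource.openCount);
        // prints: open resources after loop: 0
    }
}
```

If a `Predictor`, `NDManager`, or similar object is instead created per request and never closed, the JVM heap looks fine (the Java wrapper is tiny) while the process RSS grows until the machine runs out of memory, which matches the symptom reported below.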
I found the same problem with the DJL PyTorch inference API that also exists in the libtorch_java_only API. The JVM heap is stable, but the server's total memory keeps growing over several days until the process crashes; it appears to be direct (native) memory. I first hit this issue with the libtorch_java_only API, so I switched to the DJL PyTorch inference API, but the problem persists. My predict code is as follows: