q751654992 closed this issue 1 month ago
I have the same issue. I think it's because the model under the .u2net folder uses Int64 weights, and TensorRT doesn't natively support INT64, so significant time is spent casting Int64 to Int32. That's why the GPU takes more time than the CPU. I believe the fix is to replace the model in the .u2net folder with an int32 version and set MODEL_CHECKSUM_DISABLED=TRUE, as mentioned in issue #496. I've been struggling to convert the int64 ONNX model to int32 and have no idea how, since I'm new to this. Hope someone can help.
This issue is stale because it has been open for 30 days with no activity.
Has anyone solved this problem?
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.
then it prints: (screenshot not captured)

next:

pip uninstall onnxruntime-gpu
pip install onnxruntime

then it prints: (screenshot not captured)
I don't understand. Please tell me what the problem is.