Closed pk429 closed 1 week ago
I am not familiar with ONNX, so I am not sure what caused this problem.
However, if you want to debug the ONNX model, you can disable dynamic quadtree generation and run inference with a single quadtree layer. For example, you can use quadtree layer 0 (see pet.py, line 324) or quadtree layer 1 (see pet.py, line 332). This should help avoid dynamic output, although performance may degrade.
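To see why fixing the quadtree layer makes the output static: with a single layer, the number of point queries depends only on the input resolution, whereas dynamic splitting makes it depend on the image content. A minimal pure-Python illustration (the strides and split ratio below are assumptions for illustration, not PET's actual values):

```python
# Illustration: why a fixed quadtree layer yields a static output shape.
# The strides here are hypothetical; see pet.py for the real values.

def num_queries_fixed_layer(h, w, stride):
    """With a single quadtree layer, the query count depends only on
    the input resolution -- it is the same for every image."""
    return (h // stride) * (w // stride)

def num_queries_dynamic(h, w, sparse_stride, dense_stride, dense_ratio):
    """With dynamic splitting, a per-image fraction of sparse cells is
    subdivided into dense cells, so the query count varies per image."""
    sparse = (h // sparse_stride) * (w // sparse_stride)
    split = int(sparse * dense_ratio)  # image-dependent
    factor = (sparse_stride // dense_stride) ** 2
    return (sparse - split) + split * factor

# Fixed layer: identical for any image of the same size.
print(num_queries_fixed_layer(256, 256, 8))        # always 1024
# Dynamic: changes with the per-image dense ratio.
print(num_queries_dynamic(256, 256, 8, 4, 0.10))   # 1330
print(num_queries_dynamic(256, 256, 8, 4, 0.50))   # 2560
```

An ONNX export of the dynamic variant would therefore have a data-dependent output dimension, which is exactly what TensorRT struggles with.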
Thank you for your response. I restructured the output tensor in the pet.py code and disabled the 'zero as placeholder' behavior in the Reshape nodes of the generated ONNX model, which allowed me to rebuild the engine file for deployment.
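For context on the 'zero as placeholder' behavior mentioned above: in the ONNX Reshape operator (with the default `allowzero=0`), a 0 in the target shape means "copy the corresponding input dimension", and a single -1 means "infer this dimension". Replacing these placeholders with concrete values makes the output shape independent of the input shape. A small pure-Python model of that shape-resolution rule (an illustration, not the ONNX runtime itself):

```python
from math import prod

def resolve_reshape(input_shape, target_shape, allowzero=0):
    """Mimic ONNX Reshape shape resolution: with allowzero=0, a 0 in the
    target shape copies the input dim at that position; -1 is inferred."""
    out = []
    for i, d in enumerate(target_shape):
        if d == 0 and not allowzero:
            out.append(input_shape[i])  # placeholder: copy input dim
        else:
            out.append(d)
    if -1 in out:
        known = prod(d for d in out if d != -1)
        out[out.index(-1)] = prod(input_shape) // known
    return out

# 0 copies dim 0 of the input; -1 is inferred from the remaining size.
print(resolve_reshape([2, 3, 4], [0, -1]))   # [2, 12]
# With placeholders replaced by concrete values, the output shape no
# longer depends on the input shape at runtime.
print(resolve_reshape([2, 3, 4], [2, 12]))   # [2, 12]
```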
Glad to see that you have solved the problem.
Hello, I have a question. When I convert the generated model .pt file to ONNX, I found that its output is dynamic: the number of predictions returned by test_forward in PET's forward pass changes with the input image. Inference with the ONNX model runs correctly. However, when I convert the ONNX model to a TensorRT engine and deploy it with C++ code, the following error occurs:
```
ERROR: 3: [executionContext.cpp::enqueueInternal::795] Error Code 3: API Usage Error (Parameter check failed at: runtime/api/executionContext.cpp::enqueueInternal::795, condition: bindings[x] || nullBindingOK)
```
What should I pay attention to when exporting the model so that the ONNX output is not dynamic? Thanks. ![image](https://github.com/cxliu0/PET/assets/105144975/d7884cc0-c512-40a7-91f2-e2c373895939)
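One common way to avoid dynamic output shapes in an exported graph is to pad the variable-length prediction set to a fixed maximum and return a validity mask alongside it, so every output tensor has a static shape. A hedged NumPy sketch of the idea (the `MAX_PREDS` cap and the `(x, y, score)` layout are assumptions for illustration, not PET's actual format):

```python
import numpy as np

# Sketch: pad variable-length predictions to a fixed-size tensor plus a
# validity mask, so exported outputs have static shapes.
MAX_PREDS = 500  # assumed cap, chosen above the expected maximum count

def to_static_outputs(points, scores):
    """points: (N, 2) array, scores: (N,), where N varies per image.
    Returns fixed-shape (MAX_PREDS, 3) predictions and a (MAX_PREDS,) mask."""
    n = min(len(points), MAX_PREDS)
    preds = np.zeros((MAX_PREDS, 3), dtype=np.float32)
    mask = np.zeros(MAX_PREDS, dtype=bool)
    preds[:n, :2] = points[:n]
    preds[:n, 2] = scores[:n]
    mask[:n] = True
    return preds, mask

# Example: an image that produced 37 predictions still yields
# fixed-shape outputs; consumers keep only the masked rows.
pts = np.random.rand(37, 2).astype(np.float32)
scs = np.random.rand(37).astype(np.float32)
preds, mask = to_static_outputs(pts, scs)
print(preds.shape, int(mask.sum()))   # (500, 3) 37
```

The downstream C++ code then reads a fixed-size binding and filters by the mask (or by a score threshold), which sidesteps data-dependent binding sizes entirely.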