First of all, thank you very much for sharing your open-source code! I have two questions about result rendering and tensor generation.
My first question: I inspected the example-data.pth in your released model-inference data, and its post_tran and post_rot values are inconsistent with those in the .pth files generated by ptq/duanp-data.py.
My second question: why are the post_tran and post_rot values used at all when drawing the results? Since the detected boxes are already in 3D space, shouldn't only the camera intrinsics and extrinsics be needed to project them back onto the image?
As the figure shows, the result drawn with method 2 matches the actual scene better. Also, in the BEV view, the results are almost identical for different post_tran and post_rot values.
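For context on why I'm asking: my understanding is that post_rot and post_tran encode the 2D resize/crop augmentation applied to the raw image before it is fed to the network, so they would only matter when projecting onto the network-input image rather than the original image. Here is a minimal sketch of the projection chain as I understand it (all function and parameter names here are my own illustration, not your repo's actual API):

```python
import numpy as np

def project_to_image(pts_ego, rot, tran, intrin, post_rot, post_tran):
    """Project 3D points (ego frame) into pixel coordinates.

    rot, tran : ego -> camera extrinsics (3x3 rotation, 3-vector translation)
    intrin    : 3x3 camera intrinsic matrix K
    post_rot, post_tran : 2D resize/crop augmentation applied during
        preprocessing; reapplying them lands the points on the augmented
        (network-input) image rather than the original image.
    """
    pts_cam = (rot @ pts_ego.T).T + tran          # ego -> camera frame
    pts_hom = (intrin @ pts_cam.T).T              # camera -> homogeneous pixels
    pts_img = pts_hom[:, :2] / pts_hom[:, 2:3]    # perspective divide
    # apply the image-space augmentation transform on top
    return (post_rot[:2, :2] @ pts_img.T).T + post_tran[:2]
```

With identity post_rot and zero post_tran this reduces to the plain intrinsics/extrinsics projection, which is why I would expect only K and the extrinsics to be needed when drawing on the original image.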
Looking forward very much to your reply.