Currently, we multiply each cone vertex inside the loop by the inverse camera matrix and the homography matrix, as seen here: https://github.com/ARUSfs/DRIVERLESS/blob/8fff0c7d6cd45b3c4fcac8355e33f5a53642b664/src/perception/vision_cone_detector/src/estimator.py#L93-L101
However, due to how matrix multiplication is optimized (e.g. NumPy vectorization), it would be faster to first collect the pixels into a single array and then multiply that whole array by the inverse camera matrix and the homography matrix in one batched operation, instead of transforming each vertex individually.
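A minimal sketch of what the batched version could look like, assuming `pixels` is an `(N, 2)` array of cone vertex pixel coordinates and both matrices are `3x3` (the function name and shapes here are assumptions, not the repo's actual API):

```python
import numpy as np

def transform_pixels(pixels: np.ndarray,
                     inv_camera_mat: np.ndarray,
                     homography_mat: np.ndarray) -> np.ndarray:
    """Apply the inverse-camera and homography transforms to all pixels at once."""
    n = pixels.shape[0]
    # Stack into homogeneous coordinates: (N, 2) -> (N, 3) -> transpose to (3, N)
    homog = np.hstack([pixels, np.ones((n, 1))]).T
    # One batched matrix product over all vertices instead of N products in a loop
    transformed = homography_mat @ inv_camera_mat @ homog
    # Normalize by the homogeneous coordinate and return (N, 2)
    return (transformed[:2] / transformed[2]).T
```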
Moreover, `self.cam_conf.homography_mat @ self.cam_conf.inv_camera_mat` will always yield the same constant matrix at runtime, so computing it once at launch and saving it in `self.cam_conf` would greatly reduce the overall number of matrix multiplications.
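As a sketch of the precomputation, assuming `cam_conf` is built from a camera matrix and a homography matrix at startup (`CamConf` and the `combined_mat` attribute name are hypothetical stand-ins for the real config object):

```python
import numpy as np

class CamConf:
    def __init__(self, camera_mat: np.ndarray, homography_mat: np.ndarray):
        self.camera_mat = camera_mat
        self.inv_camera_mat = np.linalg.inv(camera_mat)
        self.homography_mat = homography_mat
        # Precompute the constant product once at launch; reuse it on every frame
        self.combined_mat = homography_mat @ self.inv_camera_mat
```

With this cached, each frame (or the batched version above) needs only a single multiplication, e.g. `self.cam_conf.combined_mat @ homog`.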