Figure out the coordinates and orientation of the three cameras in space, i.e. each camera's pose. The extrinsic parameters represent the location of the camera in the 3-D scene. Camera calibration, or resectioning, is the process of estimating these parameters.
This might be a big project.
I tried to write some code that found the striped black & white posts (in future I'd like to avoid having them in the landscape at all) - this didn't work brilliantly. The idea was that it would then find the location and orientation of the cameras in space. (Code in the beelabel repo.)
I think this needs to be its own module (input: a set of images, and optionally a set of approximate camera and landmark locations; output: their 3-D locations and orientations).
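A minimal sketch of what that module's interface might look like. The names here (CameraPose, register_cameras) are illustrative assumptions, not taken from any existing repo:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class CameraPose:
    """Extrinsic parameters for one camera (hypothetical container)."""
    R: np.ndarray  # 3x3 rotation matrix mapping world -> camera coordinates
    t: np.ndarray  # length-3 translation vector (world -> camera)

    @property
    def position(self) -> np.ndarray:
        # The camera centre in world coordinates: C = -R^T t
        return -self.R.T @ self.t

def register_cameras(images, approx_camera_locations=None, approx_landmarks=None):
    """Proposed entry point: estimate a CameraPose per camera from images
    (and optional approximate locations). Stub only - not implemented."""
    raise NotImplementedError
```

The approximate locations would serve as an initial guess for whatever optimisation (e.g. bundle adjustment) the module ends up using.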
Human-labelled data (via btviewer) may be necessary to make this work.
According to MathWorks:
The extrinsic parameters consist of a rotation, R, and a translation, t. The origin of the camera’s coordinate system is at its optical centre and its x- and y-axes define the image plane.
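In concrete terms, R and t map a world point into camera coordinates via x_cam = R @ x_world + t. A small numpy sketch with made-up example values:

```python
import numpy as np

def world_to_camera(x_world, R, t):
    """Apply the extrinsic transform: rotate the world point into the
    camera's frame, then translate by t."""
    return R @ x_world + t

# Example extrinsics (illustrative values only): a 90-degree rotation
# about the z-axis, and a 1 m offset along the camera's z-axis.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.0, 0.0, 1.0])

x_world = np.array([1.0, 0.0, 0.0])
x_cam = world_to_camera(x_world, R, t)  # -> approximately [0, 1, 1]
```

Resectioning is the inverse problem: given known world points and their image projections, recover R and t (e.g. with a PnP solver).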
To-do list:
[ ] Package
[ ] Unit tests
[ ] Make it work on an in-field device, so we can register the cameras in 3-D in the field and then test that it's all working
Camera resectioning (extrinsic calibration)
Repo: https://github.com/SheffieldMLtracking/alignment
Camera calibration (finding the camera pose, i.e. extrinsic position): some code already exists (beelabel alignment.py)
Notes on camera calibration on Google Docs.