Open anjugopinath opened 3 years ago
Which annotation information do you have?
I have the bounding box information. Today, the group maintaining the dataset also shared a link to the camera intrinsic parameters [https://argus.web.unc.edu/camera-calibration-database/]. These are for the set of GoPro cameras that was used to record the videos/images.
From the above link, is it possible to obtain the following parameters required by camera.json?
- [x, y, z] (camera position)
- 3x3 list (camera rotation matrix)
- [focal_x, focal_y] (focal length along the x and y axes)
- [princpt_x, princpt_y] (principal point along the x and y axes)
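For reference, here is a minimal sketch of how those four camera.json fields are typically combined in a pinhole projection. The numeric values and the world-to-camera convention (`p_cam = camrot @ (p_world - campos)`) are illustrative assumptions, not values from the dataset:

```python
import numpy as np

# Hypothetical values standing in for one camera.json entry.
campos = np.array([10.0, 20.0, 30.0])   # camera position in world coords [x, y, z]
camrot = np.eye(3)                      # 3x3 world-to-camera rotation matrix
focal = np.array([600.0, 600.0])        # [focal_x, focal_y] in pixels
princpt = np.array([320.0, 240.0])      # [princpt_x, princpt_y] in pixels

def world_to_pixel(point_world):
    """Project a 3D world point to 2D pixel coordinates (pinhole model)."""
    # World -> camera coordinates (convention assumed here).
    p_cam = camrot @ (point_world - campos)
    # Perspective divide, then apply the intrinsics.
    u = focal[0] * p_cam[0] / p_cam[2] + princpt[0]
    v = focal[1] * p_cam[1] / p_cam[2] + princpt[1]
    return np.array([u, v])

# A point straight ahead of the camera lands on the principal point.
uv = world_to_pixel(np.array([10.0, 20.0, 130.0]))  # → [320., 240.]
```

The calibration database should supply the intrinsics (focal, princpt); the extrinsics (campos, camrot) depend on where each camera was placed for your particular recording.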
Also, how can one obtain the 3D joint coordinates in the world coordinate system required by joint3d.json? I only have RGB images.
You cannot obtain true 3D data from a single RGB image; it is a fundamentally ill-posed problem. Please see multi-view geometry theory.
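To make the multi-view point concrete: with two or more calibrated views, a 3D point can be recovered by linear (DLT) triangulation. The camera matrices below are invented for illustration, not from any dataset:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """DLT triangulation of one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; uv1, uv2: observed pixel coords."""
    # Each observation contributes two linear constraints on the
    # homogeneous point X: u * (p3 . X) - (p1 . X) = 0, etc.
    A = np.stack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    # Least-squares solution: null vector of A via SVD.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical rig: shared intrinsics K, second camera shifted along x.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

X_true = np.array([50.0, 25.0, 400.0])
h1 = P1 @ np.append(X_true, 1.0); uv1 = h1[:2] / h1[2]
h2 = P2 @ np.append(X_true, 1.0); uv2 = h2[:2] / h2[2]
X_rec = triangulate(P1, P2, uv1, uv2)  # recovers X_true
```

This is exactly why a single image is not enough: with only one view, every point along the viewing ray projects to the same pixel.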
Meaning, to train InterNet, can I only use images that already have ground-truth annotations? I would like to train InterNet on EPIC-KITCHENS [https://epic-kitchens.github.io/2021].
Yes, you need 3D annotations. You can obtain pseudo-3D annotations from 2D annotations using SMPLify-X.
I want to train InterNet on a different image dataset, but the camera parameters are not available for it. Is it still possible to proceed?