HaoyiZhu opened this issue 5 months ago
Some of the camera views are not perfectly calibrated. We're soon releasing metadata that states which views are correctly calibrated within an error threshold and which are not, along with calibration values refined for multi-view alignment.
Hi @AlexanderKhazatsky, I'm just wondering how the metadata is coming along. Is there any ETA for it? Many thanks for your effort!
Sorry for the delay on this; we're just waiting for some TRI legal permissions to go through so they can share everything with us. We expect that to take one more week, and processing everything into RLDS format to take an additional week.
In the meantime, we can probably share the updated calibration values if that is sufficient for your purposes?
But the depth won't be ready for a little bit longer...
Hi @AlexanderKhazatsky, if I'm understanding correctly, it would be much appreciated if you could share the updated camera intrinsic/extrinsic calibration values! I've been visualizing some of the data, and it seems like some of it has wonky camera extrinsics. Thanks!
What I can share is some cam2cam transformations, as well as some code for confirming which cam2base values are consistent, but we are still running it on the dataset to assess which trajectories are usable. So my guess is it's in your best interest to wait for us to finish that up, unless you want to help with the process to speed it up!
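(For anyone following along, the consistency check described above can be sketched roughly as below. This is a hypothetical helper, not the actual DROID tooling, and the error thresholds are placeholders: if two cameras' cam2base extrinsics are both correct, the relative transform they imply should match an independently estimated cam2cam transform.)

```python
import numpy as np

def pose_error(T_a, T_b):
    """Rotation (deg) and translation (m) error between two 4x4 transforms."""
    dT = np.linalg.inv(T_a) @ T_b
    # Rotation angle recovered from the trace of the relative rotation matrix.
    cos_angle = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err_deg = np.degrees(np.arccos(cos_angle))
    trans_err_m = np.linalg.norm(dT[:3, 3])
    return rot_err_deg, trans_err_m

def cam2base_consistent(T_base_cam1, T_base_cam2, T_cam2_cam1,
                        rot_tol_deg=2.0, trans_tol_m=0.02):
    """Check whether two cam2base extrinsics agree with a measured cam2cam transform."""
    # cam1 -> base -> cam2 gives the cam2cam transform implied by the extrinsics.
    implied_cam2_cam1 = np.linalg.inv(T_base_cam2) @ T_base_cam1
    rot_err, trans_err = pose_error(implied_cam2_cam1, T_cam2_cam1)
    return rot_err < rot_tol_deg and trans_err < trans_tol_m
```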
Hi, thanks for the amazing work!
I'm wondering if you could kindly offer a demo script showing how to get a multi-view aligned point cloud in world coordinates?
I have tried, but I found that the multi-view point clouds cannot be aligned well. I'm not sure whether the camera poses are inaccurate.
I read the intrinsics directly from your SVO reader script, and convert the extrinsic matrix using the following function:
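(A conversion of this sort is typically along the lines of the sketch below. This is an illustrative example, not the poster's actual function, and it assumes the extrinsics are stored as a 6-DoF [x, y, z, roll, pitch, yaw] pose with XYZ Euler angles, which may not match the dataset's actual convention.)

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose6d_to_matrix(pose):
    """Convert an assumed [x, y, z, roll, pitch, yaw] pose to a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", pose[3:6]).as_matrix()
    T[:3, 3] = pose[0:3]
    return T
```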
The results don't look good:
Thanks!
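(For context, a minimal sketch of the kind of multi-view fusion being attempted is below, with assumed variable names and conventions: back-project each camera's depth map with its intrinsics, transform the points into the base/world frame with the converted cam2base matrices, and stack the results.)

```python
import numpy as np

def depth_to_points(depth, K):
    """Back-project a depth map (meters) into camera-frame 3D points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.reshape(-1)
    x = (u.reshape(-1) - K[0, 2]) * z / K[0, 0]
    y = (v.reshape(-1) - K[1, 2]) * z / K[1, 1]
    pts = np.stack([x, y, z], axis=1)
    return pts[z > 0]  # drop invalid / zero-depth pixels

def fuse_views(depths, intrinsics, cam2base_mats):
    """Merge per-camera depth maps into one point cloud in the base frame."""
    clouds = []
    for depth, K, T in zip(depths, intrinsics, cam2base_mats):
        pts_cam = depth_to_points(depth, K)
        pts_hom = np.hstack([pts_cam, np.ones((len(pts_cam), 1))])
        clouds.append((T @ pts_hom.T).T[:, :3])
    return np.concatenate(clouds, axis=0)
```

If the per-view clouds still disagree after this, the cam2base values for one or more views are likely among the poorly calibrated ones mentioned earlier in the thread.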