Closed StarCycle closed 2 months ago
Hi, thanks for your interest in our work! You can always get the camera parameters from the simulator, during both training and evaluation. See this function for reference: https://github.com/nickgkan/3d_diffuser_actor/blob/master/online_evaluation_calvin/evaluate_utils.py#L50. In brief, you have access to `env`, so you can obtain the cameras and their parameters.
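As a minimal sketch of what that entails: CALVIN's simulator is PyBullet-based, and PyBullet exposes each camera's view matrix as 16 floats in column-major order. The helper below (a hypothetical name, not from the repo) converts such a view matrix into the world-to-camera and camera-to-world 4x4 extrinsics, assuming only NumPy:

```python
import numpy as np

def extrinsics_from_view_matrix(view_matrix_flat):
    """Convert a PyBullet-style view matrix (16 floats, column-major)
    into world-to-camera and camera-to-world 4x4 transforms."""
    # Reshaping with order="F" undoes the column-major flattening.
    T_world2cam = np.asarray(view_matrix_flat, dtype=np.float64).reshape(4, 4, order="F")
    # The camera-to-world extrinsics are simply the inverse.
    T_cam2world = np.linalg.inv(T_world2cam)
    return T_world2cam, T_cam2world
```

The exact attribute through which `env` exposes the view matrix depends on the CALVIN camera class, so check the linked evaluation utility for the actual accessor.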
If you're referring to the offline validation split, then check our preprocessing script and specifically this function https://github.com/nickgkan/3d_diffuser_actor/blob/master/data_preprocessing/package_calvin.py#L171.
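For intuition about what such preprocessing does, here is a generic sketch (not the repo's actual code) of back-projecting a metric depth map into a world-frame point cloud, given a 3x3 intrinsics matrix `K` and camera-to-world extrinsics:

```python
import numpy as np

def depth_to_pointcloud(depth, K, T_cam2world):
    """Back-project a (H, W) metric depth map into world-frame points.

    K: 3x3 pinhole intrinsics; T_cam2world: 4x4 camera-to-world extrinsics.
    Returns an (H, W, 3) array of XYZ points in the world frame.
    """
    h, w = depth.shape
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Pixel grid: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Pinhole back-projection into the camera frame.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts_cam = np.stack([x, y, depth, np.ones_like(depth)], axis=-1)  # (H, W, 4)
    # Transform homogeneous points into the world frame.
    pts_world = pts_cam @ T_cam2world.T
    return pts_world[..., :3]
```

The linked `package_calvin.py` function is the authoritative version; conventions such as the camera axis orientation may differ there.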
I hope this answers your question.
Many thanks for the answer!
Hello @buttomnutstoast @jesbu1 @nickgkan,
This is very nice work!
My question: how do you acquire the camera extrinsics for CALVIN, in both training and evaluation?
For training, the CALVIN dataset only provides RGB and depth images, not camera extrinsics. Since you use point clouds on CALVIN, I assume you have the extrinsics; how do you obtain them from the dataset?
During evaluation, does the CALVIN simulator also provide camera extrinsics? What modification did you make to acquire this info?
Best, StarCycle