Closed zsh2000 closed 2 years ago
This isn't something the example script supports. You can get extrinsics with `sim.get_agent(0).state.sensor_states`. We don't expose intrinsics currently, but it's a simple pinhole camera model, so they're easy to compute.
We have an example that's relevant here: https://aihabitat.org/docs/habitat-lab/view-transform-warp.html (it uses the slightly higher-level habitat-lab API, but it's all very similar). Note that the intrinsics matrix there is only valid for square sensors.
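As a sketch of what the reply above describes (not an official habitat-sim snippet): the extrinsic matrix can be assembled from a sensor state's position and rotation quaternion, and the intrinsics from the horizontal FOV under the square pinhole assumption. The `position`, `rotation_wxyz`, resolution, and FOV values below are placeholders standing in for what `sim.get_agent(0).state.sensor_states[...]` and your sensor config would actually provide:

```python
import numpy as np

def quat_to_rot(w, x, y, z):
    # Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix.
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def extrinsic_world_to_camera(position, rotation_wxyz):
    # The sensor state gives the camera-to-world rotation R and translation t;
    # the world-to-camera extrinsic is then [R^T | -R^T t].
    R = quat_to_rot(*rotation_wxyz)
    t = np.asarray(position, dtype=float)
    E = np.eye(4)
    E[:3, :3] = R.T
    E[:3, 3] = -R.T @ t
    return E

def intrinsic_pinhole(width, height, hfov_deg):
    # Square pinhole model: focal length derived from the horizontal FOV.
    f = width / (2.0 * np.tan(np.radians(hfov_deg) / 2.0))
    return np.array([
        [f,   0.0, width / 2.0],
        [0.0, f,   height / 2.0],
        [0.0, 0.0, 1.0],
    ])

# Placeholder values standing in for sensor_state.position / .rotation:
position = [1.0, 1.5, 2.0]
rotation_wxyz = (1.0, 0.0, 0.0, 0.0)  # identity orientation

E = extrinsic_world_to_camera(position, rotation_wxyz)
K = intrinsic_pinhole(512, 512, 90.0)
```

In practice you would read `position` and `rotation` out of each entry of `sensor_states` instead of hard-coding them.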
Thanks for your reply! I'll try it out.
Could you please tell me how to get the camera pose in detail?
I also ran into a problem: when I use `--semantic_sensor` to get the semantic image from the Replica dataset, the output is wrong. How do I generate a correct semantic image? I guess my settings are wrong.
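One possible cause worth checking (an assumption, since the exact setup isn't shown here): the semantic sensor returns per-pixel integer IDs rather than colors, so saving that array directly as an 8-bit image looks black or garbled. A common fix is to map each ID through a color palette before saving. The palette below is a made-up example, not the one habitat-sim's example script uses:

```python
import numpy as np

def colorize_semantic(ids, num_colors=40):
    # Map per-pixel integer IDs to RGB colors from a fixed random palette.
    # IDs beyond the palette size wrap around via modulo.
    rng = np.random.default_rng(0)
    palette = rng.integers(0, 256, size=(num_colors, 3), dtype=np.uint8)
    return palette[ids % num_colors]

# Tiny stand-in for a semantic sensor observation (H x W array of IDs):
ids = np.array([[0, 1], [2, 3]], dtype=np.uint32)
rgb = colorize_semantic(ids)  # H x W x 3 uint8 image, ready to save as PNG
```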
Hi, I was wondering if you solved this problem? I encountered the same one.
Thanks for the awesome tool!
I have successfully rendered RGB images and semantic labels on Replica dataset using
python examples/example.py --save_png --semantic_sensor
I want to get the corresponding camera poses (ideally as extrinsic and intrinsic matrices) together with the rendered RGB images and semantic labels, but no option in `example.py` enables the generation of poses.
Could you please tell me how to do it?