alexsax / 2D-3D-Semantics

The data skeleton from Joint 2D-3D-Semantic Data for Indoor Scene Understanding
http://3dsemantics.stanford.edu
Apache License 2.0

Questions about the RGB-D sequence rendering #21

Closed JiamingSuen closed 5 years ago

JiamingSuen commented 5 years ago

Hello Stanford Team, thanks for this amazing work! I wonder if it's possible to render sequential RGB-D frames (as if they were captured by a hand-held RGB-D camera) from the meshes provided in the dataset. To use this dataset for many real-world tasks, we must assume the input is a raw RGB-D sequence. If you believe it's possible, would you be kind enough to make your Blender rendering pipeline (and, presumably, the code) public? Thanks for your time!

amir32002 commented 5 years ago

You can use the Gibson environment to render a stream of RGB-D frames: http://gibsonenv.stanford.edu/#main https://github.com/StanfordVL/GibsonEnv
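Gibson's own rendering API has changed across releases, so rather than guess at its exact interface, here is a minimal self-contained sketch of the underlying idea: casting pinhole-camera rays against mesh triangles to produce a per-frame depth map. Sweeping the camera pose over a trajectory would yield the RGB-D sequence discussed above. All function names, the toy mesh, and the intrinsics below are illustrative assumptions, not part of Gibson or the 2D-3D-S release.

```python
import numpy as np

def ray_triangle_depth(origin, direction, v0, v1, v2):
    """Möller-Trumbore ray/triangle intersection.
    Returns the ray parameter t of the hit, or np.inf on a miss."""
    eps = 1e-9
    e1, e2 = v1 - v0, v2 - v0
    h = np.cross(direction, e2)
    a = np.dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle plane
        return np.inf
    f = 1.0 / a
    s = origin - v0
    u = f * np.dot(s, h)
    if u < 0.0 or u > 1.0:           # outside first barycentric bound
        return np.inf
    q = np.cross(s, e1)
    v = f * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:       # outside second barycentric bound
        return np.inf
    t = f * np.dot(e2, q)
    return t if t > eps else np.inf

def render_depth(vertices, faces, K, w, h):
    """Render a z-depth map of a triangle mesh from a pinhole camera sitting
    at the world origin and looking down +z (camera frame == world frame)."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    depth = np.full((h, w), np.inf)
    origin = np.zeros(3)
    for i in range(h):
        for j in range(w):
            d = np.array([(j - cx) / fx, (i - cy) / fy, 1.0])
            d /= np.linalg.norm(d)
            for face in faces:
                t = ray_triangle_depth(origin, d, *vertices[face])
                depth[i, j] = min(depth[i, j], t * d[2])  # ray length -> z-depth
    return depth

# Toy scene: one large triangle on the plane z = 2, fully covering the view.
verts = np.array([[-10.0, -10.0, 2.0],
                  [ 10.0, -10.0, 2.0],
                  [  0.0,  10.0, 2.0]])
faces = [[0, 1, 2]]
K = np.array([[8.0, 0.0, 4.0],
              [0.0, 8.0, 4.0],
              [0.0, 0.0, 1.0]])     # assumed 8x8-pixel intrinsics
depth_map = render_depth(verts, faces, K, 8, 8)
```

A real renderer would of course use an accelerated ray tracer or rasterizer over the dataset's full meshes; this loop only illustrates the camera model and depth convention.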

JiamingSuen commented 5 years ago

Great! I'll give it a try, thanks for the reply.

cazhang commented 5 years ago

@JiamingSuen Hi, I'm wondering if you got rendering working with the 3D meshes? I'd like to render some images using the provided mesh and camera poses; however, the camera poses don't seem consistent for this. Thanks!
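One common cause of "inconsistent" camera poses is mixing up camera-to-world vs. world-to-camera matrices, or OpenGL-style axes (camera looks down -z, y up) vs. OpenCV-style axes (+z forward, y down). Whether either applies to this dataset's pose files is an assumption on my part, but a sketch of the two standard conversions may help with debugging:

```python
import numpy as np

def world_to_camera(cam_to_world):
    """Invert a 4x4 rigid camera-to-world pose to get the extrinsic
    (world-to-camera) matrix, using R^T and -R^T t instead of a full inverse."""
    R = cam_to_world[:3, :3]
    t = cam_to_world[:3, 3]
    w2c = np.eye(4)
    w2c[:3, :3] = R.T
    w2c[:3, 3] = -R.T @ t
    return w2c

def opengl_to_opencv(pose):
    """Flip the camera's y and z axes to convert an OpenGL-convention pose
    to OpenCV convention (the flip is its own inverse)."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return pose @ flip

# Illustrative pose (not from the dataset): 90-degree yaw plus a translation.
pose = np.eye(4)
pose[:3, :3] = np.array([[0.0, -1.0, 0.0],
                         [1.0,  0.0, 0.0],
                         [0.0,  0.0, 1.0]])
pose[:3, 3] = [1.0, 2.0, 3.0]
extrinsic = world_to_camera(pose)
```

If projecting mesh vertices with the stored matrices lands points behind the camera or mirrored, trying the inverse and/or the axis flip above is usually the quickest sanity check.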