EPFL-VILAB / omnidata

A Scalable Pipeline for Making Steerable Multi-Task Mid-Level Vision Datasets from 3D Scans [ICCV 2021]

How to convert Omnidata normals to PyTorch3D coordinates #58

Closed: lcc815 closed this issue 10 months ago

lcc815 commented 10 months ago

Hi,

Is there any way to convert Omnidata normals to the PyTorch3D coordinate convention?

I noticed #13 discussed this question, but what is the input x of the _thunk function, and why do we need to do it that way?

Thanks a lot!

alexsax commented 10 months ago

If you use the dataloader in this repo, it will use this function to output the normals in P3D coordinate space.

In the email you sent, you asked: "for omnidata, the straight-up normal vector will be ~(0.5, 0., 0.5), which is a little strange."

The images are stored as 3-channel uint8, I assume, and most libraries load them either as integers or into the range [0, 1]. That is why the function shifts and rescales them into the range [-1, 1].
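As a rough illustration of that shift-and-rescale step, here is a minimal sketch (not the repo's actual dataloader transform): load_omnidata_normals is a hypothetical helper that reads a stored 3-channel uint8 normal image, maps it from [0, 1] to [-1, 1], and applies axis flips for PyTorch3D. The specific sign flips are an assumption about the stored convention, so verify them against a known image such as the unit sphere below.

```python
import numpy as np
import torch
from PIL import Image

def load_omnidata_normals(path):
    """Hypothetical helper: load a normal map PNG and map it to [-1, 1]."""
    # uint8 [0, 255] -> float [0, 1]
    img = np.asarray(Image.open(path), dtype=np.float32) / 255.0
    normals = torch.from_numpy(img).permute(2, 0, 1)  # (3, H, W)

    # Shift and rescale into [-1, 1], as described above.
    normals = normals * 2.0 - 1.0

    # ASSUMPTION: flip x and y so +X points left and +Y points up,
    # matching PyTorch3D's camera convention; check this for your data.
    normals[0] *= -1
    normals[1] *= -1

    # Renormalize to unit length (uint8 quantization loses precision).
    normals = normals / normals.norm(dim=0, keepdim=True).clamp(min=1e-6)
    return normals
```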

This is the colormap of the images on disk; the attached image shows a unit sphere. [image]

Hope this helps :)

lcc815 commented 10 months ago

This helps! Thanks for your reply!