NVIDIAGameWorks / kaolin

A PyTorch Library for Accelerating 3D Deep Learning Research
Apache License 2.0

Sampling colored point cloud from kaolin.io.obj.ObjMesh #625

Open qiminchen opened 1 year ago

qiminchen commented 1 year ago

Hi, are there any ways to sample colored point clouds from an OBJ mesh loaded with kaolin.io.shapenet or kaolin.io.obj.import_mesh? These loaders return vertices, faces, uvs, face_uvs_idx, materials, etc.
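
(For context, here is a minimal sketch of one way this can be done with the public Kaolin ops, since kaolin.ops.mesh.sample_points can interpolate per-face features such as UVs at the sampled points. The material access (a 'map_Kd' texture of shape (H, W, 3) in a list of dicts) and the path 'model.obj' are assumptions that depend on the Kaolin version and the asset; this is a sketch, not the official recipe.)

```python
import torch
import kaolin

# Load the mesh with its materials ('model.obj' is a placeholder path).
mesh = kaolin.io.obj.import_mesh('model.obj', with_materials=True)

vertices = mesh.vertices.unsqueeze(0)                 # (1, V, 3)
faces = mesh.faces                                    # (F, 3)

# Per-face UV coordinates, gathered from the per-corner UV indices.
# Note: face_uvs_idx can contain -1 for faces without UVs; not handled here.
face_uvs = mesh.uvs[mesh.face_uvs_idx].unsqueeze(0)   # (1, F, 3, 2)

# Sample points uniformly over the surface area and interpolate the UVs.
points, face_idx, point_uvs = kaolin.ops.mesh.sample_points(
    vertices, faces, num_samples=100_000, face_features=face_uvs)

# Look up colors in the diffuse texture of the first material.
# The material format is version-dependent; a uint8 (H, W, 3) 'map_Kd' is assumed.
texture = mesh.materials[0]['map_Kd'].float() / 255.  # (H, W, 3), in [0, 1]
texture = texture.permute(2, 0, 1).unsqueeze(0)       # (1, 3, H, W)

grid = point_uvs * 2. - 1.                            # map UVs from [0, 1] to [-1, 1]
grid[..., 1] = -grid[..., 1]                          # flip V to the image row convention
colors = torch.nn.functional.grid_sample(
    texture, grid.unsqueeze(1), align_corners=False)  # (1, 3, 1, N)
colors = colors.squeeze(2).permute(0, 2, 1)           # (1, N, 3), in [0, 1]
```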

Caenorst commented 1 year ago

Hi @qiminchen , thank you for your interest in Kaolin.

Great timing! I just pushed an MR for data preprocessing, including a small example doing exactly what you are asking for!

qiminchen commented 1 year ago

Hi @Caenorst, thank you so much, this is definitely helpful! By the way, does it only work for ShapeNet v2? What about v1?

Caenorst commented 1 year ago

It should also work with V1 (just change the dataset name), but I would be careful with ShapeNet V1, as I found that a large part of that dataset is simply unusable.

qiminchen commented 1 year ago

Perfect, thanks, and yeah, I agree. Let me try it out quickly and I will let you know how it goes! Thank you so much.

qiminchen commented 1 year ago

Hi @Caenorst, I just tested fast_mesh_sampling.py and so far it works perfectly for both ShapeNet V1 and V2. Thank you so much, this saves me a lot of time! Really appreciate it. Amazing job!

qiminchen commented 1 year ago

The example below is 8a4f09913603d76cb8cf782e8c539948 from the ShapeNet v1 Car category. The first image is a snapshot of the mesh in Meshlab; the second is the colored point cloud sampled with fast_mesh_sampling.py. I didn't include vertex normals, otherwise it would look better.

[attached images: Meshlab snapshot of the mesh; sampled colored point cloud]

Caenorst commented 1 year ago

Glad to hear that it's working well for you! Extending it to extract normals should be pretty easy: you should be able to get the normals using https://kaolin.readthedocs.io/en/latest/modules/kaolin.ops.mesh.html#kaolin.ops.mesh.face_normals (I still need to verify the quality of the normals stored in the OBJ, if any).
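
(As a small follow-up sketch: reusing vertices, faces, and face_idx from the sampling sketch earlier in this thread, per-point flat normals could be attached as below; whether the normals stored in the OBJ are preferable is the open question above.)

```python
import kaolin

# Compute per-face normals from the face vertices (flat shading).
face_vertices = kaolin.ops.mesh.index_vertices_by_faces(vertices, faces)  # (1, F, 3, 3)
normals = kaolin.ops.mesh.face_normals(face_vertices, unit=True)          # (1, F, 3)

# Assign each sampled point the normal of the face it was drawn from.
point_normals = normals[0, face_idx[0]]                                   # (N, 3)
```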

qiminchen commented 1 year ago

Hi @Caenorst, I'm not sure if this is an issue. I tested fffb1660a38af30ba4cf3601fb6b2442 from the ShapeNet v2 Car category: the first image is a snapshot from Meshlab, the second is the sampled colored point cloud (500k points). If you look at the roof, the color is not consistent. This happens with many shapes from v2, but the code seems to work fine for v1. Could you please take a look at this?

By the way, I convert the colors to the [0, 255] range using colors = (colors * 255).astype(np.uint8); I hope this does not cause the issue.
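
(Aside: a slightly more defensive version of that conversion clips and rounds before the cast, so values marginally outside [0, 1] from interpolation can't wrap around in uint8. Plain NumPy, nothing Kaolin-specific.)

```python
import numpy as np

# Scale, round, clip to [0, 255], then cast, to avoid uint8 wrap-around.
colors_u8 = np.clip(np.rint(colors * 255.0), 0, 255).astype(np.uint8)
```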

[attached images: Meshlab snapshot of the mesh; sampled colored point cloud with inconsistent roof color]

Caenorst commented 1 year ago

The problem comes from ShapeNet: some models have very weird internal structures that overlap with the visible shape. I've observed this when using rasterization: you typically get those glitches if you rasterize while ignoring face normals.

qiminchen commented 1 year ago

Are there any ways to improve the sampling?

Caenorst commented 1 year ago

Off the top of my head: 1) you can use rendering to generate the points, e.g. with our rasterizer or with our Omniverse Kaolin App, rasterizing from a number of different points of view; 2) if a face is not displayed in any rasterization, you can assume it is "not visible" and discard it before sampling (a sketch of this idea is at the end of this comment).

Both methods are actually quite similar.
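
(A sketch of idea 2), assuming you already have per-view face-index maps from a rasterizer, i.e. for each rendered view an (H, W) tensor holding the index of the face visible at each pixel and -1 for background. The inputs and names here are illustrative, not a specific Kaolin API.)

```python
import torch

def visible_face_mask(face_idx_maps, num_faces):
    # face_idx_maps: list of (H, W) long tensors, one per viewpoint (hypothetical input).
    # Returns a boolean mask over faces, True if the face is seen in at least one view.
    visible = torch.zeros(num_faces, dtype=torch.bool)
    for face_idx_map in face_idx_maps:
        seen = face_idx_map[face_idx_map >= 0].unique()
        visible[seen] = True
    return visible

# Discard faces never seen from any viewpoint before sampling, so hidden internal
# geometry does not contribute wrongly colored samples:
#   mask = visible_face_mask(face_idx_maps, faces.shape[0])
#   faces_visible = faces[mask]
#   face_uvs_visible = face_uvs[:, mask]
```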

qiminchen commented 1 year ago

Thanks, I will try both. Please let me know if there are any updates to this MR for solving the glitch issue. Appreciate it.

muzhou-yu commented 1 year ago

> Thanks, I will try both. Please let me know if there are any updates to this MR for solving the glitch issue. Appreciate it.

Hi, have you tried these methods for solving the color inconsistency problem? I have also faced this issue. Thanks.