facebookresearch / pytorch3d

PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
https://pytorch3d.org/

how to apply PyTorch3D to reconstruct 3D face point clouds using the RGB and depth map of one face (both obtained from PyTorch3D) #1828

Closed yxwktdk closed 2 months ago

yxwktdk commented 2 months ago

I have obtained the RGB image as well as the depth map of a face with the following code:

import torch
from pytorch3d.renderer import (
    PerspectiveCameras,
    RasterizationSettings,
    PointLights,
    MeshRasterizer,
    SoftPhongShader,
)

cameras = PerspectiveCameras(device=device, R=R, T=T)

raster_settings = RasterizationSettings(
    image_size=1024,
    blur_radius=0.0,
    faces_per_pixel=1,
)

lights = PointLights(device=device, location=[[0.0, 0.0, -3.0]])

rasterizer = MeshRasterizer(
    cameras=cameras,
    raster_settings=raster_settings,
)

# MeshRendererWithDepth is a custom wrapper (not part of the PyTorch3D API)
# around MeshRenderer that returns the rasterizer's depth buffer
# (fragments.zbuf) alongside the shaded image.
renderer = MeshRendererWithDepth(
    rasterizer,
    shader=SoftPhongShader(
        device=device,
        cameras=cameras,
        lights=lights,
    ),
)

images, depths = renderer(mesh)

Now I want to use this information to reconstruct a face point cloud. Can I do this with PyTorch3D? If not, other methods may need camera information such as a 'scaling factor'; how can I get the scaling factor of the camera? Please help me solve this question, thanks!
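To illustrate what such a method needs: for a plain pinhole camera, the "scaling factor" is the focal length in pixels (plus the principal point), and unprojecting a depth map into a point cloud is a few lines of arithmetic. A minimal NumPy sketch with hypothetical intrinsics (fx, fy, cx, cy are made-up values, not PyTorch3D API):

```python
import numpy as np

# Hypothetical pinhole intrinsics for a 1024x1024 image.
fx = fy = 512.0  # focal length in pixels (the "scaling factor")
cx = cy = 512.0  # principal point
H = W = 1024

# Dummy depth map: each pixel stores the Z distance to the surface.
depth = np.full((H, W), 2.0)

# Pixel grid -> camera-space 3D points via the pinhole model:
# X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
v, u = np.mgrid[0:H, 0:W]
z = depth
x = (u - cx) * z / fx
y = (v - cy) * z / fy
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)  # (H*W, 3) point cloud
print(points.shape)
```

The pixel at the principal point unprojects to (0, 0, Z), and points farther from the image center spread out proportionally to their depth.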

bottler commented 2 months ago

RGBD images can be converted to a point cloud with get_rgbd_point_cloud from pytorch3d.implicitron.tools.point_cloud_utils. But if you have the mesh, you may prefer to use sample_points_from_meshes. "scaling factor" may mean different things in different places; you need to look at the camera classes themselves.