NVlabs / eg3d


How to generate mesh with texture #49


XiongFenghhh commented 2 years ago

Thank you for your great work! I am wondering how to generate a mesh with texture. Currently, I project the vertices of the extracted mesh back into the generated image, using the default camera parameters, and look up each vertex's color from the image. However, something seems wrong during the projection. For example, if I set the camera parameters in Blender to match eg3d's demo and import the generated mesh, the visible extent of the mesh in the camera view is inconsistent with the generated image. How can I fix this? Also, is there a more elegant way to generate a textured mesh in eg3d?

XiongFenghhh commented 2 years ago

I've figured out the first issue: Blender seems to compute the intrinsics differently. If I directly set the focal length rather than the FOV in Blender, the mesh visible in the camera view matches the generated image.
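
For reference, this is roughly the focal-length/FOV relationship I believe Blender uses (a sketch only; 4.2647 is the approximate normalized focal length of eg3d's demo camera, and 36 mm is Blender's default sensor width, so adjust both for your setup):

import math

focal_normalized = 4.2647     # eg3d demo focal length, in units of image width (assumed)
sensor_width_mm = 36.0        # Blender's default sensor width

focal_mm = focal_normalized * sensor_width_mm                   # value for Blender's "Focal Length"
fov_deg = math.degrees(2 * math.atan(0.5 / focal_normalized))   # the equivalent FOV
print(focal_mm, fov_deg)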

Still, it would be very helpful if there were a more direct method to generate a textured mesh from eg3d.

yixiang1120 commented 1 year ago

May I ask how you generate the mesh with texture? Thanks.

XiongFenghhh commented 1 year ago

Since the mesh vertices are generated in a cube space (−1 to 1 in the x, y, z directions, if I am not mistaken), we simply calculate the UV coordinate for every vertex from the camera projection. For example, u = x * f / (c − z), where c is the distance of the camera origin from the world origin and f is the focal length of the camera.

The image generated by eg3d can then be used as the texture.
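
In code, that projection looks roughly like this (a sketch only; c ≈ 2.7 and f ≈ 4.2647 are the demo-camera values used later in this thread, and the final mapping into [0, 1] texture space is just one common convention):

import numpy as np

def vertex_to_uv(v, c=2.7, f=4.2647):
    # v: (N, 3) vertices in the generator's cube space
    u = v[:, 0] * f / (c - v[:, 2])
    w = v[:, 1] * f / (c - v[:, 2])
    # map from roughly [-1, 1] into [0, 1] texture space;
    # flip v because image rows grow downward
    return np.stack([(u + 1) / 2, 1 - (w + 1) / 2], axis=1)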

xll2001 commented 1 year ago


I wonder about the details of how you generate the mesh with texture. Did you modify the method that converts the .mrc file to a .ply file and add the texture information to the generated .ply file? Can you show some implementation details? Thanks.

XiongFenghhh commented 1 year ago


Hello, the original eg3d can generate a .ply file. From this .ply file you can get the coordinates of every vertex (though I forget whether they are world coordinates or camera coordinates; you may need to try it yourself). After that, the UV coordinates can be calculated according to the camera projection model.

Here is a code sample (it is only meant to show the principle; the result it produces is not necessarily correct). Some hyperparameters may need to be found with Blender.


import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from plyfile import PlyData

# cam2world pose of the default eg3d demo camera (row-major 4x4)
cam2world = np.array([0.9999325275421143,
                0.009363078512251377,
                0.006880555767565966,
                -0.015562131464137834,
                0.009887207299470901,
                -0.9966999888420105,
                -0.08056901395320892,
                0.20793467907263818,
                0.006103476509451866,
                0.08063160628080368,
                -0.9967252612113953,
                2.691936289978508, 0, 0, 0, 1.]).reshape((4, 4))

def project_vertex(v, img, use_world=False):
    if not use_world:
        # pinhole projection: the camera sits about 2.7 units from the origin;
        # the scale factor was tuned by hand (see the commented alternative)
        p_v = v[..., :2] / (2.7 - v[..., 2:]) * 4.4652 * 2
        # p_v = v[..., :2] / (2.7 - v[..., 2:]) * 4.2647 * 2
        p_v[:, 1] = -p_v[:, 1]  # flip y to match image-row direction
    else:
        # transform world-space vertices into the camera frame first
        # p_v = -np.matmul(v, cam2world[:3, :3].T) + cam2world[:3, 3]
        p_v = np.matmul(cam2world[:3, 3] - v, cam2world[:3, :3])
        p_v = p_v[..., :2] / p_v[..., 2:] * 4.2647 * 2

    # p_v[:, 0] = -p_v[:, 0]
    image_pil = Image.open(img).convert('RGB')
    image_tensor = torch.tensor(np.array(image_pil), dtype=torch.float32).cuda().unsqueeze(0).permute((0, 3, 1, 2))
    # grid_sample expects sampling locations normalized to [-1, 1]
    grid = torch.tensor(p_v, dtype=torch.float32).cuda().reshape((1, 1, -1, 2))
    uv = (p_v + 1) / 2       # map to [0, 1] texture space
    uv[:, 1] = 1 - uv[:, 1]  # flip v so the origin is the image's top-left
    color_res = F.grid_sample(image_tensor, grid, align_corners=True, padding_mode='zeros')
    return color_res, uv

plydata = PlyData.read(args.mesh)  # args.mesh: path to the .ply exported by eg3d
vs = np.stack([-plydata['vertex']['x'], plydata['vertex']['y'], plydata['vertex']['z']], axis=1)
vs_color, uv = project_vertex(vs, your_img_name)  # your_img_name: the eg3d-generated image
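
To inspect the result, one option (a sketch assuming the plyfile package; the red/green/blue property names follow the usual vertex-color convention) is to write the sampled colors back out as a vertex-colored .ply:

from plyfile import PlyElement

# vs_color from project_vertex has shape (1, 3, 1, N); bring it to (N, 3) uint8
colors = vs_color[0, :, 0].cpu().numpy().T.clip(0, 255).astype(np.uint8)
verts = np.empty(len(vs), dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'),
                                 ('red', 'u1'), ('green', 'u1'), ('blue', 'u1')])
verts['x'], verts['y'], verts['z'] = vs[:, 0], vs[:, 1], vs[:, 2]
verts['red'], verts['green'], verts['blue'] = colors.T
# reuse the face element from the original mesh so connectivity is preserved
PlyData([PlyElement.describe(verts, 'vertex'), plydata['face']]).write('textured.ply')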
xll2001 commented 1 year ago


Thanks for your reply! But I still failed to generate a textured mesh with this code. I don't understand why you use -plydata['vertex']['x'] when projecting the vertices. Also, in my implementation I found that the flow field 'grid' is not in [-1, 1] but larger than 1, so the generated texture is entirely black, since padding_mode is set to 'zeros' in F.grid_sample. Here is my version of project_vertex:

import numpy as np
import plyfile
import PIL.Image
import torch
import torch.nn.functional as F

def project_vertex(v, img):
    p_v = v[..., :2] * 4.2647 / v[..., 2:] + 0.5  # these come out larger than 1
    image_pil = PIL.Image.open(img).convert('RGB')
    image_tensor = torch.tensor(np.array(image_pil), dtype=torch.float32).cuda().unsqueeze(0).permute((0, 3, 1, 2))
    grid = torch.tensor(p_v, dtype=torch.float32).cuda().reshape((1, 1, -1, 2))
    color_res = F.grid_sample(image_tensor, grid, align_corners=True, padding_mode='zeros')
    return color_res.cpu().numpy()

After generating the .ply file with EG3D, I pass the vertices to project_vertex to generate the texture:

plydata = plyfile.PlyData.read(plyfile_path)
vs = np.stack([plydata['vertex']['x'], plydata['vertex']['y'], plydata['vertex']['z']], axis=1)
vs_color = project_vertex(vs, my_img_name)  # my project_vertex returns only the colors
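
A quick way to confirm the out-of-range problem is to print the projected coordinate range before sampling (a small sanity-check sketch; grid_sample maps (-1, -1) to the top-left pixel and (1, 1) to the bottom-right):

# anything far outside [-1, 1] will only sample the zero padding
p_v = vs[..., :2] * 4.2647 / vs[..., 2:] + 0.5
print('x range:', p_v[:, 0].min(), p_v[:, 0].max())
print('y range:', p_v[:, 1].min(), p_v[:, 1].max())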
SlimeVRX commented 1 year ago

I am also interested in this.

My goal is to import highly detailed meshes and textures into 3D software like Blender or Unreal for rendering.

How do I export the mesh and textures (diffuse, specular, normal), or an .obj file?

Like this: [example images omitted]