XingangPan / GAN2Shape

Code for GAN2Shape (ICLR2021 oral)
https://arxiv.org/abs/2011.00844
MIT License

Excuse me, I want to ask you a question #22

Closed huyu-coder closed 3 years ago

huyu-coder commented 3 years ago

Hello, XingangPan! I think your method works very well. For 3D reconstruction tasks, besides the two evaluation metrics MAD and SIDE used to assess model quality, are there other metrics that could serve as evaluation criteria? I really want to write a new paper. Having studied your papers, could you suggest other angles worth exploring? I am looking forward to your reply. Thanks!
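
For reference, my understanding is that SIDE is the scale-invariant depth error and MAD is the mean angle deviation of surface normals. A rough sketch of how such metrics are typically computed (illustrative only, not the official evaluation code; function names and tensor shapes are my own assumptions):

import torch

def side(depth_pred, depth_gt, mask=None):
    # Scale-invariant depth error: RMSE of the log-depth difference after
    # removing its per-image mean (i.e. ignoring a global scale offset).
    diff = torch.log(depth_pred) - torch.log(depth_gt)
    if mask is not None:
        diff = diff[mask]
    return ((diff ** 2).mean() - diff.mean() ** 2).sqrt()

def mad(normal_pred, normal_gt, mask=None):
    # Mean angle deviation (degrees) between predicted and ground-truth unit normals.
    cos = (normal_pred * normal_gt).sum(-1).clamp(-1, 1)
    angles = torch.rad2deg(torch.acos(cos))
    if mask is not None:
        angles = angles[mask]
    return angles.mean()

# toy example with random inputs of the expected shapes
d_pred, d_gt = torch.rand(64, 64) + 0.5, torch.rand(64, 64) + 0.5
normals = torch.nn.functional.normalize(torch.rand(64, 64, 3), dim=-1)
print(side(d_pred, d_gt), mad(normals, normals))  # second value should be ~0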

XingangPan commented 3 years ago

@huyu-coder Hi, thanks for your interest in this work. Our work uses a depth map to represent a 3D shape, which is somewhat limited. One extension is to use a full 3D representation like a 3D mesh instead.
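
To illustrate what a depth-map representation means here: each pixel of the depth map can be unprojected to a single 3D point, so only the visible surface is recovered (a 2.5D grid rather than a full mesh). A minimal sketch of this unprojection under an assumed pinhole camera; the function name and intrinsics are placeholders, not the repository's actual depth_to_3d_grid:

import torch

def depth_to_vertex_grid(depth, fov_deg=10.0):
    # Hypothetical sketch: lift an HxW depth map to an HxWx3 vertex grid by
    # unprojecting every pixel with an assumed pinhole camera.
    h, w = depth.shape
    f = 0.5 * w / torch.tan(torch.deg2rad(torch.tensor(fov_deg / 2)))  # focal length in pixels
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing='ij')
    x = (xs - (w - 1) / 2) / f * depth  # X = (u - cx) * Z / f
    y = (ys - (h - 1) / 2) / f * depth  # Y = (v - cy) * Z / f
    return torch.stack([x, y, depth], dim=-1)  # HxWx3: one 3D point per pixel

vertices = depth_to_vertex_grid(torch.ones(64, 64))  # a flat surface at depth 1
print(vertices.shape)  # torch.Size([64, 64, 3])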

huyu-coder commented 3 years ago

@XingangPan
> Hi, thanks for your interest in this work. Our work uses a depth map to represent a 3D shape, which is somewhat limited. One extension is to use a full 3D representation like a 3D mesh instead.

I saw that the unsup3d paper saves the reconstructed 3D shape in .obj format. Is the 3D mesh displayed in .obj format? Do you mean using your method to reconstruct the shape, then saving it in .obj format and displaying it in 3D?

The following is the main unsup3d code for saving the reconstructed 3D shape in .obj format. Would I need to port this part of the code into your codebase? Could this serve as an innovation point? Thank you for your answer!

## export to obj strings
        vertices = self.depth_to_3d_grid(self.canon_depth)  # BxHxWx3
        self.objs, self.mtls = export_to_obj_string(vertices, self.canon_normal)

# Excerpt adapted from unsup3d; needs torch / torch.nn and the
# get_grid(b, h, w, normalize=True) helper defined in the unsup3d codebase.
import torch
import torch.nn as nn

def export_to_obj_string(vertices, normal):
    b, h, w, _ = vertices.shape
    vertices[:,:,:,1:2] = -1*vertices[:,:,:,1:2]  # flip y
    vertices[:,:,:,2:3] = 1-vertices[:,:,:,2:3]  # flip and shift z
    vertices *= 100
    vertices_center = nn.functional.avg_pool2d(vertices.permute(0,3,1,2), 2, stride=1).permute(0,2,3,1)
    vertices = torch.cat([vertices.view(b,h*w,3), vertices_center.view(b,(h-1)*(w-1),3)], 1)

    vertice_textures = get_grid(b, h, w, normalize=True)  # BxHxWx2
    vertice_textures[:,:,:,1:2] = -1*vertice_textures[:,:,:,1:2]  # flip y
    vertice_textures_center = nn.functional.avg_pool2d(vertice_textures.permute(0,3,1,2), 2, stride=1).permute(0,2,3,1)
    vertice_textures = torch.cat([vertice_textures.view(b,h*w,2), vertice_textures_center.view(b,(h-1)*(w-1),2)], 1) /2+0.5  # Bx(H*W)x2, [0,1]

    vertice_normals = normal.clone()
    vertice_normals[:,:,:,0:1] = -1*vertice_normals[:,:,:,0:1]
    vertice_normals_center = nn.functional.avg_pool2d(vertice_normals.permute(0,3,1,2), 2, stride=1).permute(0,2,3,1)
    vertice_normals_center = vertice_normals_center / (vertice_normals_center**2).sum(3, keepdim=True)**0.5
    vertice_normals = torch.cat([vertice_normals.view(b,h*w,3), vertice_normals_center.view(b,(h-1)*(w-1),3)], 1)  # Bx(H*W)x3

    # each quad of the pixel grid is split into 4 triangles sharing its center vertex
    idx_map = torch.arange(h*w).reshape(h,w)
    idx_map_center = torch.arange((h-1)*(w-1)).reshape(h-1,w-1)
    faces1 = torch.stack([idx_map[:h-1,:w-1], idx_map[1:,:w-1], idx_map_center+h*w], -1).reshape(-1,3).repeat(b,1,1).int()  # Bx((H-1)*(W-1))x3
    faces2 = torch.stack([idx_map[1:,:w-1], idx_map[1:,1:], idx_map_center+h*w], -1).reshape(-1,3).repeat(b,1,1).int()  # Bx((H-1)*(W-1))x3
    faces3 = torch.stack([idx_map[1:,1:], idx_map[:h-1,1:], idx_map_center+h*w], -1).reshape(-1,3).repeat(b,1,1).int()  # Bx((H-1)*(W-1))x3
    faces4 = torch.stack([idx_map[:h-1,1:], idx_map[:h-1,:w-1], idx_map_center+h*w], -1).reshape(-1,3).repeat(b,1,1).int()  # Bx((H-1)*(W-1))x3
    faces = torch.cat([faces1, faces2, faces3, faces4], 1)

    objs = []
    mtls = []
    for bi in range(b):
        obj = "# OBJ File:"
        obj += "\n\nmtllib $MTLFILE"
        obj += "\n\n# vertices:"
        for v in vertices[bi]:
            obj += "\nv " + " ".join(["%.4f"%x for x in v])
        obj += "\n\n# vertice textures:"
        for vt in vertice_textures[bi]:
            obj += "\nvt " + " ".join(["%.4f"%x for x in vt])
        obj += "\n\n# vertice normals:"
        for vn in vertice_normals[bi]:
            obj += "\nvn " + " ".join(["%.4f"%x for x in vn])
        obj += "\n\n# faces:"
        obj += "\n\nusemtl tex"
        for f in faces[bi]:
            obj += "\nf " + " ".join(["%d/%d/%d"%(x+1,x+1,x+1) for x in f])
        objs += [obj]

        mtl = "newmtl tex"
        mtl += "\nKa 1.0000 1.0000 1.0000"
        mtl += "\nKd 1.0000 1.0000 1.0000"
        mtl += "\nKs 0.0000 0.0000 0.0000"
        mtl += "\nd 1.0"
        mtl += "\nillum 0"
        mtl += "\nmap_Kd $TXTFILE"
        mtls += [mtl]
    return objs, mtls
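
For completeness, the strings returned above still contain the $MTLFILE and $TXTFILE placeholders. One possible way to write them to disk and view the result in a mesh viewer such as MeshLab or Blender (a rough sketch; the file names, texture tensor, and helper below are my own examples, not the unsup3d saving utility):

from torchvision.utils import save_image

def save_obj_results(obj_str, mtl_str, texture, prefix="recon"):
    # Hypothetical helper: write the exported strings to .obj/.mtl files,
    # fill in the $MTLFILE / $TXTFILE placeholders, and save the texture
    # image referenced by the material.
    obj_path, mtl_path, tex_path = prefix + ".obj", prefix + ".mtl", prefix + ".png"
    with open(obj_path, "w") as f:
        f.write(obj_str.replace("$MTLFILE", mtl_path))
    with open(mtl_path, "w") as f:
        f.write(mtl_str.replace("$TXTFILE", tex_path))
    save_image(texture, tex_path)  # texture: 3xHxW tensor in [0,1]

# e.g. for the first image in a batch (texture_image is whatever RGB tensor
# you want to use as the diffuse map):
# save_obj_results(objs[0], mtls[0], texture_image)
# The resulting recon.obj can then be opened in MeshLab or Blender.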