microsoft / DIF-Net

Deformed Implicit Field: Modeling 3D Shapes with Learned Dense Correspondence CVPR 2021
MIT License

texture transfer #4

Open Guptajakala opened 3 years ago

Guptajakala commented 3 years ago

Hi, thanks for the great work!

I'm doing a research project, and your texture transfer is quite interesting and might be applicable to my case. Do you have any plans to release the code for how you produced Figure 10?

YuDeng commented 3 years ago

Hi, sorry, the texture transfer code is not currently available.

To achieve a similar texture transfer result, here is a brief approach:

  1. Embed the source and target shapes into DIF-Net's latent space.
  2. Send the surface points of both shapes into the deformation network and get their corresponding deformation flows. Use these flows to deform the surface points into the canonical space.
  3. For each deformed surface point of the target shape, search for its nearest neighbor among the source shape's deformed surface points in the canonical space.
  4. Copy the color of the corresponding nearest points of the source shape to the target surface points.

After all these steps, you can get a texture transfer result from source to target.
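For steps 3-4, a minimal sketch (not the released code) using a KD-tree for the nearest-neighbor search might look like the following; transfer_texture, src_canonical, tgt_canonical, and src_colors are hypothetical names for the canonical-space surface points of the two shapes and the per-point RGB colors of the source shape:

import numpy as np
from scipy.spatial import cKDTree

def transfer_texture(src_canonical, src_colors, tgt_canonical):
    # Build a KD-tree over the source shape's points in the canonical space
    tree = cKDTree(src_canonical)
    # For every target point, find the index of its nearest source point
    _, nn_idx = tree.query(tgt_canonical, k=1)
    # Copy the color of the corresponding nearest source point to each target point
    return src_colors[nn_idx]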

Guptajakala commented 3 years ago

Hi, regarding Figure 2 in the paper: by the canonical space, do you mean s_tilde (before adding delta s) or the final s (after adding delta s)?

YuDeng commented 3 years ago

The texture transfer stage does not need delta s, which is a scalar added to the SDF value of a point. To be exact, the point in the canonical space is p'.

So you first have some surface points p on a certain shape (in the original shape space), and you can send them into the deform-net to get p' in the canonical space. Then you can use p' to compute nearest neighbors and do the texture transfer. In this stage, there is no need to use the template field or the delta s predicted by the deform-net.

Guptajakala commented 3 years ago
import glob
import torch
from scipy.io import loadmat

# `model` is the trained DIF-Net model loaded from the repo
files = sorted(glob.glob('/home/plane/surface_pts_n_normal/*.mat'))
deforms = []
for i in [0, 2]:  # the two shapes to transfer texture between
    shape = loadmat(files[i])['p']  # surface points; first 3 columns are xyz
    subject_id = torch.Tensor([i]).squeeze().long().cuda()[None, ...]
    latent = model.get_latent_code(subject_id)  # latent code of subject i
    coords = torch.from_numpy(shape[..., :3]).float().cuda().unsqueeze(0)
    deformed = model.get_template_coords(coords, latent)  # deform to canonical space
    deformed = deformed[0].data.cpu().numpy()
    deforms.append(deformed)  # Find nearest points between these 2 deformed point clouds?

Thanks, now I understand better. I implemented this based on my understanding; is this what you mean? For getting the latent code, I'm not sure whether subject_idx corresponds to the same order as the files in the folder. The deformed point clouds look a bit weird.

YuDeng commented 3 years ago

The code is exactly what I mean.

The subject_idx follows the order of the training subjects in split/train/xxx.txt, not the point clouds provided in this repo (which are evaluation data).

As a result, one way is to first extract the shape surfaces of the training subjects by running generate.py, and then send those surface points into the deform-net to conduct the texture transfer.
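For checking which subject_idx corresponds to which training subject, a small sketch (an assumption, not from the repo) could read the split file directly; it assumes split/train/xxx.txt lists one subject name per line in the same order used to index the learned latent codes, and the file name plane.txt below is only a placeholder:

# Assumption: each line of the split file names one training subject,
# in the same order used to index the learned latent codes.
with open('split/train/plane.txt') as f:  # hypothetical split file path
    train_subjects = [line.strip() for line in f if line.strip()]

name_to_idx = {name: idx for idx, name in enumerate(train_subjects)}
print(name_to_idx.get('some_subject_name'))  # index to use as subject_idx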