facebookresearch / pytorch3d

PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
https://pytorch3d.org/

How can I export a Texture map? #1702

Closed: naruki-segawa-78092 closed this issue 9 months ago

naruki-segawa-78092 commented 9 months ago

❓ Questions on how to use PyTorch3D

I'm trying to optimize textures. I was able to obtain a texture that reasonably matches the desired one (following https://pytorch3d.org/tutorials/fit_textured_mesh). However, I cannot extract the optimized texture as an image. I would like to obtain an image similar to cow_texture.png. Please let me know if there is a better way. Thank you.

[attached image]

bottler commented 9 months ago

The function pytorch3d.vis.texturesuv_image_PIL in https://github.com/facebookresearch/pytorch3d/blob/main/pytorch3d/vis/texture_vis.py may be useful if you set subsample=0. It should output a PIL image from the texture map. But you could also just take the core code it has:

    from PIL import Image
    import numpy as np

    # maps_padded() returns the texture maps as an (N, H, W, 3) float tensor
    # with values in [0, 1]; texture_index selects one map from the batch.
    texture_image = texture.maps_padded()
    texture_array = (texture_image[texture_index] * 255).cpu().numpy().astype(np.uint8)

    image = Image.fromarray(texture_array)
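
For reference, a minimal sketch of calling that helper directly (assuming `mesh.textures` is a `TexturesUV` object):

    from pytorch3d.vis import texturesuv_image_PIL

    # With subsample=0, no UV vertex markers are drawn on top of the map,
    # so the result is the bare texture image.
    pil_image = texturesuv_image_PIL(mesh.textures, texture_index=0, subsample=0)
    pil_image.save("texture_map.png")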
naruki-segawa-78092 commented 9 months ago

Thank you for your comment! I could get a texture image. However, the texture image looks sparse and has different colors. Do you know why the texture image becomes sparse or shows different colors?

I show the multi-view images, the texture image, and part of my code below.

optimization process

    # initial texture image
    shpere_texture_image = torch.full([1, 1024, 1024, 3], 0.5, device=device, requires_grad=True)

    # The optimizer
    optimizer = torch.optim.Adam([shpere_texture_image], lr=0.01)
    loop = tqdm(range(Niter))

    for i in loop:
        # Initialize optimizer
        optimizer.zero_grad()
        new_src_mesh = mesh.clone()
        # Add the learnable texture map to the mesh
        # (a public-API alternative to the private attribute is sketched below)
        new_src_mesh.textures._maps_padded = shpere_texture_image
        # Losses to smooth / regularize the mesh shape
        loss = {k: torch.tensor(0.0, device=device) for k in losses}
        update_mesh_shape_prior_losses(new_src_mesh, loss)
        # Compared to using just one view, this helps resolve ambiguities
        # between updating mesh shape vs. updating mesh texture
        for j in range(num_views):
            images_predicted = renderer_textured(new_src_mesh, cameras=target_cameras[j], lights=lights)
            # Squared L2 distance between the predicted RGB image and the
            # target image from our dataset
            predicted_rgb = images_predicted[..., :3]
            loss_rgb = ((predicted_rgb - target_rgb[j]) ** 2).mean()
            loss["rgb"] += loss_rgb / num_views

        # Weighted sum of the losses
        sum_loss = torch.tensor(0.0, device=device)
        for k, l in loss.items():
            sum_loss += l * losses[k]["weight"]
            losses[k]["values"].append(float(l.detach().cpu()))

        # Print the losses
        loop.set_description("total_loss = %.6f" % sum_loss)

        # Optimization step
        sum_loss.backward()
        optimizer.step()
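
A side note on the loop above: `_maps_padded` is a private attribute of `TexturesUV`. A sketch of the same swap through the public constructor (assuming the UV layout of the original `mesh` can be reused) would be:

    from pytorch3d.renderer import TexturesUV

    # Rebuild the texture object around the learnable map, reusing the
    # original mesh's per-face UV indices and UV coordinates.
    new_src_mesh.textures = TexturesUV(
        maps=shpere_texture_image,
        faces_uvs=mesh.textures.faces_uvs_padded(),
        verts_uvs=mesh.textures.verts_uvs_padded(),
    )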


texture image display process

    texture = new_src_mesh.textures
    texture_image = texture.maps_padded()
    texture_array = (texture_image[0].detach() * 255).cpu().numpy().astype(np.uint8)
    image = Image.fromarray(texture_array)
    image.show()
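
One note on the conversion above: `astype(np.uint8)` wraps values outside [0, 255], so texels that the optimizer pushed outside [0, 1] can come out as unrelated colors. Clamping first avoids that (same variables as above):

    # Clamp to [0, 1] before the uint8 cast; without this, a value like
    # 1.02 maps to 260, which wraps around to 4 and shows a wrong color.
    texture_array = (texture_image[0].detach().clamp(0.0, 1.0) * 255).cpu().numpy().astype(np.uint8)
    image = Image.fromarray(texture_array)
    image.save("optimized_texture.png")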

output test

bottler commented 9 months ago

I think this is a modelling question. The image starts grey, and most of its 1024x1024 points are never touched by the learning, so they stay grey. A few points are all that matters. I think the rest is just what the model does; I don't think we can help. (E.g. as for why so many points are a bit more bluish than the expected whitish, maybe your lights are a bit reddish?)
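
One way to check this is a quick sketch (assuming `shpere_texture_image` from the code above, initialized everywhere to 0.5): texels the gradients never reached are still exactly at their initial value.

    import numpy as np
    import torch
    from PIL import Image

    # White = texels the optimization moved away from the 0.5 initialization;
    # black = texels no gradient ever reached.
    with torch.no_grad():
        touched = (shpere_texture_image[0] - 0.5).abs().max(dim=-1).values > 1e-4
    mask = touched.cpu().numpy().astype(np.uint8) * 255
    Image.fromarray(mask).show()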