Closed bobiblazeski closed 3 years ago
Hi @bobiblazeski!
Indeed, the current examples use colored spheres, but they can easily be adapted to monochrome spheres: set the color tensor to the intended value, then set its `requires_grad` parameter to `False`.
This example is quite easy to reproduce by adding a few lines in each iteration of the optimization that calculate the lighting update to the color. However, I'm not sure whether that's of interest to you.
The relevant code for this example was not part of the code release, hence it's not easy to add. But I'd be happy to answer more related questions.
Hi @classner, first of all thank you for your great work on pulsar and its integration within the PyTorch3D framework.
I'm currently using my own differentiable raytracer to fit a mesh to an image. My raytracer relies on RT cores and is not optimized. I'm exploring an approach that replaces the mesh with a dense point cloud sampled from its surface. The points would be rendered via pulsar, which is much faster.
TL;DR: What would be highly appreciated is an example of how to fit a point cloud to resemble a mesh, similar to the "Fit a mesh via rendering" tutorial but without textures, or the examples used in Differentiable Surface Splatting (sphere to teapot and cube to yoga).
The procedure would be:
- Fit a point cloud to resemble the shape of the teapot
If this is too much work, just an example of how to render a point cloud sampled from a mesh would suffice. I think that includes your suggestion:

> This example is quite easy to reproduce when adding a few lines in each iteration of the optimization that calculates the lighting update to the color.

and I'm interested in seeing how that is done. From reading the Pulsar and DSS papers, DSS splats have normals integrated into them, while Pulsar uses extra channels. I don't understand how to add those extra channels.
I've tried using the `_apply_lighting` method; this is how it looks. I suspect the issue is the sampling of the normals.
My full notebook is here; the interesting part is the last code cell:
```python
import torch
import torch.nn.functional as F
import matplotlib.pyplot as plt
# _apply_lighting is a private helper in pytorch3d.renderer.mesh.shading.
from pytorch3d.renderer.mesh.shading import _apply_lighting

# `points_renderer`, `renderer`, `point_cloud`, and `cameras` are defined in
# earlier cells of the notebook.
points_fragments = points_renderer.rasterizer(point_cloud)
lights = renderer.shader.lights
materials = renderer.shader.materials

N, H, W, K = points_fragments.idx.shape
D = 4  # Fixed, probably. What about alpha?

pix_to_point = points_fragments.idx
# Replace empty pixels in pix_to_point with 0 in order to interpolate.
mask = pix_to_point < 0
pix_to_point = pix_to_point.clone()
pix_to_point[mask] = 0

# Gather the world position and normal of every point each pixel hits.
idx = pix_to_point.view(N * H * W * K, 1).expand(N * H * W * K, 3)
pixel_points_vals = point_cloud.points_packed().gather(0, idx.long()).view(N, H, W, K, 3)
pixel_normals_vals = point_cloud.normals_packed().gather(0, idx.long()).view(N, H, W, K, 3)

ambient, diffuse, specular = _apply_lighting(
    pixel_points_vals, pixel_normals_vals, lights, cameras, materials
)
colors = (ambient + diffuse) + specular

# Blend the K points per pixel, weighted by the rasterizer distances.
dist_mask = torch.where(
    points_fragments.dists != -1,
    points_fragments.dists,
    torch.zeros_like(points_fragments.dists),
)
dist_mask_normed = F.normalize(dist_mask, p=1, dim=-1)
images = (dist_mask_normed[..., None] * colors).sum(dim=-2)

plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.grid("off")
plt.axis("off")
```
❓ Questions on how to use PyTorch3D
I want to use Pulsar to fit just the geometry using a monochrome image. From reading @classner's paper, Pulsar: Efficient Sphere-based Neural Rendering, the most similar example to what I want is Figure 4 (b), silhouette-based deformation reconstruction:
Could you provide a code example of the fitting process? From what I see, all the pulsar examples use colored spheres.