facebookresearch / pytorch3d

PyTorch3D is FAIR's library of reusable components for deep learning with 3D data
https://pytorch3d.org/

Color artifacts with fixed color shader #1733

Closed · samraul closed this 9 months ago

samraul commented 9 months ago

🐛 Bugs / Unexpected behaviors

I am seeing color artifacts when rendering with a fixed-color shader.

[Images: render as color · render as mask (same result)]

(The images above use a perspective camera; the code below, with an orthographic camera, shows the same issue.)

Instructions To Reproduce the Issue:

  1. Any changes you made (git diff) or code you wrote

    import torch

    from pytorch3d.renderer.blending import hard_rgb_blend
    from pytorch3d.renderer.mesh.shader import ShaderBase


    class UnlitColorShader(ShaderBase):
        """Shader that paints each face with the colors of its vertices, with no lighting."""

        def __init__(self, device="cpu", blend_params=None):
            super().__init__(device=device, blend_params=blend_params)

        def forward(self, fragments, meshes, **kwargs) -> torch.Tensor:
            blend_params = kwargs.get("blend_params", self.blend_params)
            # Sample per-pixel colors from the vertex textures, then blend with no lighting.
            texels = meshes.sample_textures(fragments)
            images = hard_rgb_blend(texels, fragments, blend_params)
            return images  # (N, H, W, 4): RGB plus the alpha channel added by blending
  2. The exact command(s) you ran:

  # -- Camera config
  R, T = look_at_view_transform(
      eye=((camera_position.x, camera_position.y, camera_position.z),),
      at=((camera_target.x, camera_target.y, camera_target.z),),
  )
  SCENE_SCALE = 1
  xy_extent = 0.5
  cameras = FoVOrthographicCameras(
      device=self.device,
      R=R,
      T=T,
      znear=0.001 * SCENE_SCALE,
      zfar=camera_position.z * 1.1 * SCENE_SCALE,
      min_x=-SCENE_SCALE * xy_extent,
      max_x=SCENE_SCALE * xy_extent,
      min_y=-SCENE_SCALE * xy_extent,
      max_y=SCENE_SCALE * xy_extent,
  )

  ...
  # -- Relevant color config
  # Each quad has an assigned color, shared by the 4 vertices of the quad
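  # (Assumed, for illustration: `fixed_colors` maps a quad type to an RGB list of
  # three floats in [0, 1], so the `* 4` below repeats it for the quad's four
  # vertices, and `verts_features` ends up with shape (1, num_quads * 4, 3).)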
  vertex_colors = np.array([fixed_colors[quad_type] * 4 for quad_type in mesh.quad_types])
  vertex_colors = vertex_colors.reshape(-1, 3)
  verts_features = torch.tensor(vertex_colors, dtype=torch.float32).unsqueeze(0)
  textures = TexturesVertex(verts_features=verts_features)  # `verts_features` expected to be (N, V, 3)

  ...
  # -- Rasterization code
  blend_params = BlendParams(background_color=[0, 0, 0])
  color_shader = UnlitColorShader(device=self.device, blend_params=blend_params)
  raster_settings = RasterizationSettings(image_size=self.image_size, blur_radius=0.0, faces_per_pixel=1)
  rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
  renderer = MeshRenderer(rasterizer=rasterizer, shader=color_shader)
  images = renderer(self.meshes.to(self.device))
  3. What you observed (including the full logs):

I have successfully used similar code with similar parameters before, so I am unsure what could be causing these artifacts. I am using pytorch3d==0.7.3.

Any hints would be appreciated.

Thank you!

bottler commented 9 months ago

I don't know what you are expecting. What is the artifact?

Can you paste an image of the mesh, e.g. with plotly_vis, so we can see what it is?
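For reference, something like this would do it (a minimal sketch; `meshes` is assumed to be the Meshes object from the repro above):

    from pytorch3d.vis.plotly_vis import plot_scene

    # Open an interactive plotly view of the mesh to inspect its geometry.
    fig = plot_scene({"Mesh preview": {"mesh": meshes}})
    fig.show()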

samraul commented 9 months ago

Thanks, @bottler.

Certainly, let me clarify:

[Images: generated image · computed mask · expected mask]
    # A pixel belongs to the mask if it exactly matches one of the category colors.
    mask = np.any(
        [np.all(generated_image == color[category], axis=-1) for category in [C1, C2, C3]],
        axis=0,
    )

Both of these pixels should be [80, 40, 60]; however, the one on the right shows [79, 39, 59]. [Images: good color vs. bad color]

It looks like a precision issue, perhaps when converting from normalized color to uint8. But I have never observed this before, despite having used this approach with different meshes.
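For illustration, a standalone sketch of that failure mode (not from the original code): interpolation can leave a channel a single float32 step below 40/255, and a bare uint8 cast then truncates it:

    import numpy as np

    c = np.float32(40.0 / 255.0)             # nearest float32 to 40/255
    c_lo = np.nextafter(c, np.float32(0.0))  # one ULP below, as interpolation can produce
    print(c_lo * 255)                        # slightly below 40, e.g. 39.999996
    print((c_lo * 255).astype(np.uint8))     # cast truncates toward zero -> 39
    print(np.round(c_lo * 255).astype(np.uint8))  # rounding recovers 40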

samraul commented 9 months ago

Ah, it definitely looks like a precision issue. I removed all planes except one, expecting only 4 unique values (the 3 colors plus 0.0); however:

    print(torch.unique(images))
    print(torch.unique(images * 255))
    as_np = images.cpu().squeeze().numpy()
    print(np.unique(as_np))
    print(np.unique(as_np * 255))
tensor([0.0000, 0.1569, 0.1569, 0.1569, 0.1569, 0.1569, 0.1569, 0.2353, 0.2353,
        0.2353, 0.2353, 0.2353, 0.2353, 0.2353, 0.3137, 0.3137, 0.3137, 0.3137,
        0.3137, 0.3137, 1.0000], device='cuda:0')
tensor([  0.0000,  40.0000,  40.0000,  40.0000,  40.0000,  40.0000,  40.0000,
         60.0000,  60.0000,  60.0000,  60.0000,  60.0000,  60.0000,  60.0000,
         80.0000,  80.0000,  80.0000,  80.0000,  80.0000,  80.0000, 255.0000],
       device='cuda:0')
[0.         0.15686272 0.15686274 0.15686275 0.15686277 0.15686278
 0.1568628  0.23529409 0.2352941  0.23529412 0.23529413 0.23529415
 0.23529416 0.23529418 0.31372544 0.31372547 0.3137255  0.31372553
 0.31372556 0.3137256  1.        ]
[  0.        39.999992  39.999996  40.        40.000004  40.000008
  40.00001   59.999992  59.999996  60.        60.000004  60.000008
  60.00001   60.000015  79.999985  79.99999   80.        80.00001
  80.000015  80.00002  255.      ]
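(The spread of several nearly identical values per color is consistent with float32 rounding during barycentric interpolation: even when a face's three vertices share one color `c`, the per-pixel sum `w0*c + w1*c + w2*c` is computed in float32 and can land a few ULPs on either side of the exact value. A small sketch of those neighbors, using only numpy:)

    import numpy as np

    c = np.float32(40.0 / 255.0)  # float32 closest to 40/255
    lo, hi = c, c
    neighbors = [c]
    for _ in range(3):            # walk three ULPs in each direction
        lo = np.nextafter(lo, np.float32(0.0))
        hi = np.nextafter(hi, np.float32(1.0))
        neighbors = [lo] + neighbors + [hi]
    # Compare with the np.unique(as_np * 255) output above.
    print(np.float32(neighbors) * 255)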
samraul commented 9 months ago

Adding a small epsilon before conversion solves the issue.

        print("-" * 20)
        print(torch.unique(images))
        print(torch.unique(images * 255))

        print("-" * 20)
        epsilon = 0.0001
        as_np = images.cpu().squeeze().numpy()
        as_np_rounded_8 = np.round(as_np * 255 + epsilon).astype(np.uint8)        
        print(np.unique(as_np_rounded_8))
--------------------
tensor([0.0000, 0.1569, 0.1569, 0.1569, 0.1569, 0.1569, 0.1569, 0.2353, 0.2353,
        0.2353, 0.2353, 0.2353, 0.2353, 0.2353, 0.3137, 0.3137, 0.3137, 0.3137,
        0.3137, 0.3137, 1.0000], device='cuda:0')
tensor([  0.0000,  40.0000,  40.0000,  40.0000,  40.0000,  40.0000,  40.0000,
         60.0000,  60.0000,  60.0000,  60.0000,  60.0000,  60.0000,  60.0000,
         80.0000,  80.0000,  80.0000,  80.0000,  80.0000,  80.0000, 255.0000],
       device='cuda:0')
--------------------
[  0  40  60  80 255]

If anyone has a better way, please let me know; otherwise we can close the issue, since this is just inherent to floating-point conversion.
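For reference, an equivalent conversion can also be done on the torch side before leaving the GPU (a sketch; `images` is assumed to be the float tensor returned by the renderer). Since rounding alone already maps values like 39.999992 back to 40, the epsilon mainly guards against edge cases:

    import torch

    def to_uint8(images: torch.Tensor) -> torch.Tensor:
        # Round to nearest (instead of truncating via a bare cast),
        # then clamp and convert; 39.999992 -> 40 without any epsilon.
        return (images * 255.0).round().clamp(0, 255).to(torch.uint8)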

bottler commented 9 months ago

I think you've got a solution as good as any.

samraul commented 9 months ago

👍 Will close and leave this here in case someone runs into similar artifacts. Thank you, @bottler.