Open SYSUykLin opened 2 weeks ago
Can you share more code? What is the camera object?
Thanks for your reply. I use PerspectiveCameras:

```python
def set_cameras(self, width, height, R, T, FOV=60):
    # Derive the focal length from the FOV:
    # (width / 2) / x_focal = tan(FOV / 2)
    x_focal = (width / 2) / math.tan(math.radians(FOV / 2))
    y_focal = (height / 2) / math.tan(math.radians(FOV / 2))
    # PerspectiveCameras needs the principal point, i.e. where the optical
    # axis meets the screen; in screen space the order is (px, py) = (x, y)
    principal_point = torch.tensor([width / 2, height / 2], dtype=torch.float32, device=self.device).unsqueeze(0)
    # focal_length order is (fx, fy)
    focal_length = torch.tensor([x_focal, y_focal], dtype=torch.float32, device=self.device).unsqueeze(0)
    # image_size order is (height, width)
    image_size = torch.tensor([height, width], dtype=torch.float32, device=self.device).unsqueeze(0)
    cameras = PerspectiveCameras(
        device=self.device,
        R=R,
        T=T,
        principal_point=principal_point,
        focal_length=focal_length,
        in_ndc=False,
        image_size=image_size,
    )
    return cameras
```
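For reference, the FOV-to-focal-length relation used above can be sanity-checked on its own. Note that applying the same FOV to both axes implies non-square pixels whenever width != height; `focal_from_fov` is just an illustrative name, not part of the code above:

```python
import math

def focal_from_fov(size_px, fov_deg):
    # Pinhole model: (size / 2) / focal = tan(FOV / 2)
    return (size_px / 2) / math.tan(math.radians(fov_deg / 2))

# e.g. a 640-pixel-wide image with a 60 degree horizontal FOV
fx = focal_from_fov(640, 60)   # roughly 554.26 px
```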
I did not provide normals; this is how I load the mesh:

```python
def load_mesh_pytorch3d(file_path):
    verts, faces_idx, aux = load_obj(file_path)
    faces = faces_idx.verts_idx
    # Per-vertex RGB, normalized to [0, 1] and given a batch dimension
    vertices_color = load_mesh_vertices_color(file_path)
    vertices_color = vertices_color[None] / 255
    textures = TexturesVertex(verts_features=vertices_color)
    mesh = Meshes(verts=[verts], faces=[faces], textures=textures)
    return mesh
```
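The helper `load_mesh_vertices_color` is not shown in the thread. A plausible sketch, assuming the OBJ stores per-vertex RGB appended to each `v x y z` line (as Blender and MeshLab can export), might look like:

```python
import numpy as np

def load_mesh_vertices_color(file_path):
    # Hypothetical reconstruction: collect the trailing "r g b" values
    # from "v x y z r g b" lines. Returns a (V, 3) float array.
    colors = []
    with open(file_path) as f:
        for line in f:
            parts = line.split()
            if parts[:1] == ["v"] and len(parts) == 7:
                colors.append([float(c) for c in parts[4:7]])
    return np.asarray(colors, dtype=np.float32)
```

Since the caller divides by 255, the stored values are assumed to be in 0-255; convert with `torch.from_numpy` before building the TexturesVertex.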
Is the floor made of a few very large faces? Playing with `cull_to_frustum` might help. It might also be worth trying `bin_size=0` in the RasterizationSettings.
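A sketch of what this suggestion could look like, assuming the `cameras` object from `set_cameras` above; the resolution is a placeholder:

```python
from pytorch3d.renderer import MeshRasterizer, RasterizationSettings

# bin_size=0 forces the naive (non-binned) rasterizer, which avoids very
# large faces overflowing a coarse bin; cull_to_frustum=True clips faces
# against the view frustum before rasterization.
raster_settings = RasterizationSettings(
    image_size=(512, 512),   # (height, width)
    bin_size=0,              # naive rasterization, no coarse binning
    cull_to_frustum=True,
    cull_backfaces=False,    # keep faces regardless of winding order
)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)
```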
Yes, the floor is a square made of two big triangles.
I don't think I can help further.
> I don't think I can help further.

Could you explain what "play with `cull_to_frustum`" means? I noticed that this variable seems to only take True or False. Thanks.
I solved this problem! Sharing my experience:

1) If you use PerspectiveCameras with a custom rotation matrix and translation matrix, the convention differs from Blender. In Blender, the columns of R are the three basis vectors of the camera frame and T is the camera location (camera-to-world). In PyTorch3D, [R, T] is the world-to-camera transform, so you need $R^T$ and $-R^T T$.
2) In PyTorch3D the matrix is applied on the right side, i.e. $C = X M$ with row vectors, so matrices are stored row-wise. Because of this, you should feed $R$ itself, not $R^T$.
3) If a face does not render, it may be too large. Subdivide the mesh as follows:
```python
mesh = Meshes(verts=[verts], faces=[faces], textures=textures)
# If a face is too large it fails to render; the mesh must be
# subdivided before the floor shows up.
subdivide_time = 3
for t in range(subdivide_time):
    subdivide_mesh = SubdivideMeshes(mesh)
    mesh, vertices_color = subdivide_mesh(mesh, feats=vertices_color)
textures = TexturesVertex(verts_features=vertices_color)
mesh.textures = textures
```
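To make points 1) and 2) concrete, here is a small plain-NumPy sketch (the function name is mine) converting a Blender-style camera pose into the row-vector R, T that PerspectiveCameras consumes; any axis-convention flips between Blender's and PyTorch3D's camera frames are deliberately left out:

```python
import numpy as np

def pose_to_pytorch3d(R_c2w, C):
    # Blender-style pose: columns of R_c2w are the camera axes expressed
    # in world coordinates, C is the camera location (camera-to-world).
    # PyTorch3D applies row vectors on the left, X_cam = X @ R + T,
    # so the world-to-camera rotation R_c2w^T enters untransposed:
    R = R_c2w            # feed R, not R^T (row-vector convention)
    T = -C @ R_c2w       # row form of -R_c2w^T @ C
    return R, T
```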
@bottler Thank you for the reminder!!
Hello, I have a mesh containing a floor and walls. However, when the camera's viewing direction is parallel to the floor, the floor is not rendered. For example:
Below the yellow box there should be a green floor, but nothing is rendered.
I use plotly to visualize the mesh. Could anyone help? Thanks!
Could it be a problem with the normal vectors? But I have already set `cull_backfaces=False` in RasterizationSettings.
My code: