Hokyjack opened 4 months ago
Here is the bulldog example.
Is this an issue with the quality of stitching in instant-nsr-pl? Can this be improved to get more detail?
Hi, you can use this repo instead of instant-nsr-pl for more detail. Specifically, you should replace the camera setup with the following code snippet.
```python
import os

import numpy as np
import torch

# get_ortho_projection_matrix and get_perspective_projection_matrix
# are the projection helpers provided in this repo.

def make_sparse_camera(cam_path, scale=4., device='cuda', mode='ortho'):
    if mode == 'ortho':
        ortho_scale = scale / 2
        projection = get_ortho_projection_matrix(
            -ortho_scale, ortho_scale, -ortho_scale, ortho_scale, 0.1, 100)
    else:
        # The FOV is assumed to be identical across views, so read it
        # from the first camera file.
        npy_data = np.load(os.path.join(cam_path, '000.npy'), allow_pickle=True).item()
        fov = npy_data['fov']
        projection = get_perspective_projection_matrix(fov, aspect=1.0, near=0.1, far=100.0)
        # projection = _projection(r=1/1.5, device=device, n=0.1, f=100)

    w2c = []
    for i in [0, 1, 2, 4, 6, 7]:
        npy_data = np.load(os.path.join(cam_path, f'{i:03d}.npy'), allow_pickle=True).item()
        w2c_cv = npy_data['extrinsic']
        w2c_cv = np.concatenate([w2c_cv, np.array([[0, 0, 0, 1]])], axis=0)
        c2w_cv = np.linalg.inv(w2c_cv)
        c2w_gl = c2w_cv[[1, 2, 0, 3], :]  # permute world axes: y->x, z->y, x->z
        c2w_gl[:3, 1:3] *= -1             # OpenCV -> OpenGL: flip camera y and z
        w2c_gl = np.linalg.inv(c2w_gl)
        w2c.append(w2c_gl)

    w2c = torch.from_numpy(np.stack(w2c, 0)).float().to(device=device)
    projection = torch.from_numpy(projection).float().to(device=device)
    return w2c, projection
```
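To sanity-check the coordinate conversion in that snippet, here is a minimal NumPy-only sketch that applies the same axis permutation and y/z flip to a made-up OpenCV extrinsic (the camera pose is a hypothetical example, not one of the repo's camera files). The camera position should come out unchanged up to the same world-axis permutation, and the rotation should stay proper (determinant +1):

```python
import numpy as np

def cv_extrinsic_to_gl(w2c_cv_3x4):
    """Convert a 3x4 OpenCV world-to-camera extrinsic to a 4x4 OpenGL one,
    using the same steps as make_sparse_camera above."""
    w2c_cv = np.concatenate([w2c_cv_3x4, np.array([[0., 0., 0., 1.]])], axis=0)
    c2w_cv = np.linalg.inv(w2c_cv)
    c2w_gl = c2w_cv[[1, 2, 0, 3], :]   # permute world axes: y->x, z->y, x->z
    c2w_gl[:3, 1:3] *= -1              # OpenCV -> OpenGL: flip camera y and z
    return np.linalg.inv(c2w_gl)

# Hypothetical example: a camera at world position (0, 0, 2),
# identity orientation, in OpenCV convention.
w2c_cv = np.array([[1., 0., 0., 0.],
                   [0., 1., 0., 0.],
                   [0., 0., 1., -2.]])

w2c_gl = cv_extrinsic_to_gl(w2c_cv)
c2w_gl = np.linalg.inv(w2c_gl)

# The world position (0, 0, 2) maps to (0, 2, 0) under y->x, z->y, x->z.
print(c2w_gl[:3, 3])  # -> [0. 2. 0.]
```

Flipping two camera axes multiplies the rotation determinant by (+1), so the conversion never mirrors the scene, only re-expresses it in OpenGL's convention.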
@Hokyjack Hello~ I saw your issue. Did you succeed? And could you share your Dockerfile?
Hi, firstly, very nice repo! I managed to run it on Windows + an RTX 4090 using Docker. I can share my Dockerfile so other users can run it more easily, if you want.
However, I have one issue: the generated normals look very nice, but after running Step 2 (instant-nsr-pl), the resulting OBJ mesh is somewhat blurry and lacks the detail shown in the normals from the first step. Could you please help with how to improve the result?
Thanks