ishanic opened this issue 2 years ago
Hello, is there a way to extract the learnt 3D volume from the grid data structure? How can the dense voxel grid be extracted to run something akin to Marching Cubes for mesh extraction?
Thanks
I'm trying to reconstruct a 3D mesh from depth images rendered by the model (360° playground dataset); however, the point cloud reconstructed from multiple depth images is not good.
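(Aside, not from the original post: if you go the depth-image route, fusing the rendered depth maps into a TSDF volume usually works much better than merging per-view point clouds directly. A minimal sketch with Open3D; the view list, paths, intrinsics, and depth scale are placeholders you would fill in from your own renders:)

```python
import open3d as o3d

# placeholder camera intrinsics; use the values your renders were made with
intrinsic = o3d.camera.PinholeCameraIntrinsic(800, 800, 1111.0, 1111.0, 400.0, 400.0)

volume = o3d.pipelines.integration.ScalableTSDFVolume(
    voxel_length=0.01, sdf_trunc=0.04,
    color_type=o3d.pipelines.integration.TSDFVolumeColorType.RGB8)

# hypothetical list of (rgb_path, depth_path, 4x4 world-to-camera matrix) per frame
views = []

for rgb_path, depth_path, extrinsic in views:
    color = o3d.io.read_image(rgb_path)
    depth = o3d.io.read_image(depth_path)
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, depth_scale=1000.0, depth_trunc=10.0,
        convert_rgb_to_intensity=False)
    volume.integrate(rgbd, intrinsic, extrinsic)

mesh = volume.extract_triangle_mesh()
o3d.io.write_triangle_mesh("fused.ply", mesh)
```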
I used the following to extract a mesh, but I'm not sure it is doing the right thing. Can someone comment on whether I'm doing something wrong?
import mcubes
import svox2

grid = svox2.SparseGrid.load("../ckpt/llff_c2f_fasttv_10e/room/ckpt.npz")
# note: grid.links holds sparse-array indices, not densities (see the reply below)
v, t = mcubes.marching_cubes(grid.links.numpy(), 0)
mcubes.export_obj(v, t, 'room.obj')
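(An aside, not part of the thread: `grid.links` in svox2 is an integer index grid mapping each voxel into the sparse data arrays, with -1 marking empty voxels, so running marching cubes on it meshes the occupancy pattern rather than the learned density. A quick way to see the difference, assuming the attribute names from the Plenoxels repo:)

```python
import svox2

grid = svox2.SparseGrid.load("ckpt.npz")  # placeholder path
print(grid.links.shape)         # dense (X, Y, Z) integer grid; -1 = empty voxel
print(grid.density_data.shape)  # (N, 1): one learned density per occupied voxel
```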
@pavan4 While your code does produce a mesh, it doesn't appear to be using the correct density data.
It took me a while to get to the correct data. The script is neither very pretty nor very fast, but it gets the job done. Maybe someone with good numpy/torch skills can clean it up; it could make a nice tool for extracting and previewing meshes during training.
@ishanic The script below can be run from the command line with one argument pointing to the checkpoint you want to convert. Playing with resfactor and the marching_cubes threshold can help you get the model you want.
import svox2, mcubes, torch, numpy, argparse

parser = argparse.ArgumentParser()
parser.add_argument('ckpt', type=str)
args = parser.parse_args()

targetpath = args.ckpt[:-3] + "obj"
print("result will be saved to:", targetpath)

print("loading sparse grid")
grid = svox2.SparseGrid.load(args.ckpt)

resfactor = 1  # increase/decrease to get higher- or lower-resolution meshes. Be very careful when increasing.
# 0.3 is nice for previewing the rough stuff.
resx = int(grid.shape[0] * resfactor)
resy = int(grid.shape[1] * resfactor)
resz = int(grid.shape[2] * resfactor)
densitygrid = numpy.zeros((resx, resy, resz))

print("converting densities to a non-sparse numpy array")
# not exactly fast, but not excruciatingly slow either
for x in range(resx):
    print("Progress: %i %%" % (100 * x / resx))  # yo dawg, I heard you like % in your progress
    # resz samples, all at the current x value; coordinates range from -0.5 to just under 0.5
    xvals = numpy.array([-0.5 + x / resx] * resz)
    for y in range(resy):
        yvals = numpy.array([-0.5 + y / resy] * resz)  # resz samples at the current y value
        zvals = numpy.arange(resz) / resz - 0.5        # all z values along the line
        samplepos = numpy.dstack([xvals, yvals, zvals])[0]  # a full z-line of 3D coordinates at (x, y)
        u = grid.sample(torch.Tensor(samplepos), want_colors=False)[0]  # only get densities, no colors
        v = u.detach().numpy()      # turn the tensor into a numpy array of shape (resz, 1)
        densitygrid[x, y] = v.T[0]  # feed the extracted line into our density array

# densitygrid[densitygrid < 0.05] = 0  # not really needed, but if you want to clip values, do it here

## uncomment to get an image of the densities across a slice of your volume
# from matplotlib import pyplot as plt
# plt.imshow(densitygrid[int(resx / 2)], interpolation='nearest')
# plt.show()

# finally, get your object
v, t = mcubes.marching_cubes(densitygrid, 20)  # adjust the threshold to your scene; start with 0
mcubes.export_obj(v, t, targetpath)
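For reference (the filename is my own choice, not from the thread): save the script as, say, `tomesh.py` and run `python tomesh.py path/to/ckpt.npz`; the .obj is written next to the checkpoint.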
@ThomasEgi How can I add colors to the extracted mesh? Please help me, thanks!
@HtForBetterLife Colors are more tricky. You can start by sampling the volume at the vertex positions, but this will not get you an accurate color. The color is built up by ray tracing along a line of sight, so if you want an accurate color you pretty much have to shoot a short ray from just above the object surface into the surface, until you have accumulated enough density to make the coloring opaque (or half-transparent if you set a depth limit). So yes, technically possible, but not a one-liner I can just implement and paste. The steps would be (a rough sketch follows the list):
- calculate surface normals of your mesh.
- calculate starting and endpoints of your rays based on vertices and their normals.
- actually render the rays into colors and assign them to the vertices.
- export.
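Not from the thread, just a minimal sketch of those steps, assuming svox2 exposes `Rays` and `SparseGrid.volume_render` as in the Plenoxels repo and using trimesh (my choice) for loading the mesh and computing normals; paths and constants are placeholders:

```python
import numpy as np
import torch
import svox2
import trimesh

grid = svox2.SparseGrid.load("ckpt.npz")
mesh = trimesh.load("room.obj")  # assumed to be a single Trimesh, not a Scene

# marching_cubes outputs vertices in index units (0 .. res-1); map them back
# to the coordinate range the density grid was sampled over, e.g. lo, hi =
# -0.5, 0.5 for the script above, or -1, 1 for the rescaled variants below
res, lo, hi = 256, -0.5, 0.5
verts = torch.tensor(mesh.vertices, dtype=torch.float32) / res * (hi - lo) + lo
normals = torch.tensor(np.asarray(mesh.vertex_normals), dtype=torch.float32)

eps = 0.01                       # start the rays slightly above the surface (tune per scene)
origins = verts + eps * normals
dirs = -normals                  # shoot into the surface along the inverted normal

rays = svox2.Rays(origins=origins, dirs=dirs)
with torch.no_grad():
    # may need the grid on GPU (or a non-kernel code path) depending on your build
    rgb = grid.volume_render(rays)  # one accumulated RGB value per vertex

mesh.visual.vertex_colors = (rgb.clamp(0, 1).cpu().numpy() * 255).astype(np.uint8)
mesh.export("room_colored.ply")  # PLY keeps per-vertex colors; plain OBJ does not
```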
The solution helps a lot; I'll try it. Thank you so much!
Hello, I'd like to ask about the alignment of point-cloud coordinates and grid coordinates. I use self.world2grid to convert the points.npy created by proc_colmap.sh. I think these transformations meet the requirements of this code, and the points look good, but the alignment is wrong. Am I missing something?
@ThomasEgi Hi! Your script helped a lot in generating the mesh, but my meshes are clipped/cropped. I have tried changing the resfactor and the threshold value, but it does not help. What do I need to change in the script?
I got the same result using this script. How can I solve this problem?
Try changing the resfactor variable and the second parameter of the marching_cubes function, as described in the comments in the code. Also try experimenting with the distance to the object and the image resolution.
It could be solved by modifying the scale of the coordinates only: the original script samples only [-0.5, 0.5), which covers just the central part of the volume, so scaling the sample coordinates by 2 to span [-1, 1) fixes the cropping. The changed code is as follows:
import svox2, mcubes, torch, numpy, argparse

parser = argparse.ArgumentParser()
parser.add_argument('--ckpt', default=r"D:\NeRF_Net\svox2-master\opt\ckpt_lego\ckpt.npz", type=str)
args = parser.parse_args()

targetpath = args.ckpt[:-3] + "obj"
print("result will be saved to:", targetpath)

print("loading sparse grid")
grid = svox2.SparseGrid.load(args.ckpt)

resfactor = 1  # increase/decrease to get higher- or lower-resolution meshes. Be very careful when increasing.
# 0.3 is nice for previewing the rough stuff.
resx = int(grid.shape[0] * resfactor)
resy = int(grid.shape[1] * resfactor)
resz = int(grid.shape[2] * resfactor)
densitygrid = numpy.zeros((resx, resy, resz))

print("converting densities to a non-sparse numpy array")
for x in range(resx):
    print("Progress: %i %%" % (100 * x / resx))
    # the *2 is the fix: sample the full [-1, 1) range instead of only [-0.5, 0.5)
    xvals = numpy.array([(-0.5 + x / resx) * 2] * resz)
    for y in range(resy):
        yvals = numpy.array([(-0.5 + y / resy) * 2] * resz)
        zvals = (numpy.arange(resz) / resz - 0.5) * 2
        samplepos = numpy.dstack([xvals, yvals, zvals])[0]  # a full z-line of 3D coordinates at (x, y)
        u = grid.sample(torch.Tensor(samplepos), want_colors=False)[0]  # densities only, no colors
        densitygrid[x, y] = u.detach().numpy().T[0]  # feed the extracted line into our density array

# densitygrid[densitygrid < 0.05] = 0  # optional clipping, as in the original script

## uncomment to get an image of the densities across a slice of your volume
# from matplotlib import pyplot as plt
# plt.imshow(densitygrid[int(resx / 2)], interpolation='nearest')
# plt.show()

# finally, get your object
v, t = mcubes.marching_cubes(densitygrid, 20)  # adjust the threshold to your scene; start with 0
mcubes.export_obj(v, t, targetpath)
Thanks for your work. I made a few changes to get the mesh faster.
import svox2, mcubes, torch, argparse
import numpy as np
from tqdm import trange

def generate_mesh(resolution):
    """Generate sample coordinates on a regular grid spanning (-1, 1)."""
    range_x = np.linspace(-1, 1, resolution)
    range_y = np.linspace(-1, 1, resolution)
    range_z = np.linspace(-1, 1, resolution)
    # indexing='ij' keeps the (x, y, z) axis order consistent with the reshape below
    mesh_x, mesh_y, mesh_z = np.meshgrid(range_x, range_y, range_z, indexing='ij')
    return np.vstack((mesh_x.flatten(), mesh_y.flatten(), mesh_z.flatten())).T.astype(np.float32)

parser = argparse.ArgumentParser()
parser.add_argument('--ckpt', default="./ckpt/ckpt.npz", type=str)
args = parser.parse_args()

targetpath = args.ckpt[:-3] + "obj"
print("result will be saved to:", targetpath)

print("loading sparse grid")
grid = svox2.SparseGrid.load(args.ckpt)

reso = 512
grid_coords = generate_mesh(reso)
grid_coords = grid_coords.reshape(8, -1, 3)  # sample in chunks to save CPU memory
density_grid = torch.cat(
    [grid.sample(torch.Tensor(grid_coords[idx]), want_colors=False)[0]
     for idx in trange(len(grid_coords))], dim=0)
density_grid = density_grid.view(reso, reso, reso).detach().numpy()

v, t = mcubes.marching_cubes(density_grid, 20)  # adjust the threshold to your scene; start with 0
mcubes.export_obj(v, t, targetpath)
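One optional follow-up (my addition, not part of the comment above): `mcubes.marching_cubes` returns vertices in index units (0 .. reso-1), so if you want the exported mesh in the same (-1, 1) range the grid was sampled over, rescale before the `export_obj` call:

```python
# map vertices from index units (0 .. reso-1) back to the (-1, 1) sample range
v = v / (reso - 1) * 2.0 - 1.0
```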