Closed macrintr closed 8 years ago
The meshes were generated in batch for the full volumes by running marching cubes to obtain an initial mesh for each object and then applying quadric-error-constrained mesh simplification using the implementation in the OpenMesh library. I also experimented with using the same approach to generate meshes interactively through the Python backend, which is why you see the stub there. It worked, but for ~100-megavoxel volumes the mesh generation wasn't as close to instantaneous as would be desirable. Unfortunately the mesh generation code hasn't been released as open source yet -- I'll look into releasing it, though.
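The released code isn't shown here, but the first stage of the pipeline described above can be sketched with scikit-image's `marching_cubes`. The `skimage` dependency, the helper name, and the tiny synthetic volume are all illustrative assumptions, not the actual implementation, and the OpenMesh quadric simplification pass is only noted in a comment:

```python
import numpy as np
from skimage.measure import marching_cubes  # assumed dependency for this sketch

def initial_mesh(labels, object_id):
    """Extract a triangle mesh for one object id via marching cubes.

    In the pipeline described above, a quadric-error simplification pass
    (e.g. OpenMesh's decimation module) would follow to reduce the
    triangle count; that step is omitted here.
    """
    # Binary mask of the object, as floats so the 0.5 isosurface lies
    # between foreground and background voxels.
    mask = (labels == object_id).astype(np.float32)
    verts, faces, _normals, _values = marching_cubes(mask, level=0.5)
    return verts, faces

# Tiny synthetic segmentation: a 16^3 volume containing one labeled cube.
labels = np.zeros((16, 16, 16), dtype=np.uint32)
labels[4:12, 4:12, 4:12] = 7
verts, faces = initial_mesh(labels, 7)
```

For batch processing you would loop this over every object id present in the volume and write each (verts, faces) pair out in whatever mesh format your server expects.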
Cool - thanks! If you do open source the batch processing, that'd be awesome - otherwise, we'll put a script together for it. For our local use case, real-time meshing isn't as interesting.
The mesh generation implementation has just been released. While it is set up to be used with the Python in-memory volume serving, you could easily reuse it for batch generation of meshes as well. For volumes too large to fit in memory or to mesh on a single machine, you can break the volume up into a grid of blocks of e.g. 1000^3 voxels. Make sure to specify `lock_boundary_vertices = True` and to have the blocks overlap by 1 voxel in all dimensions to prevent gaps.
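As a concrete sketch of the blockwise scheme above (the function name and the example volume shape are my own, not part of the released code):

```python
def block_slices(shape, block=1000, overlap=1):
    """Cover a volume of the given shape with a grid of blocks that
    overlap by `overlap` voxels in every dimension, so that meshes
    generated from adjacent blocks share boundary vertices and leave
    no gaps."""
    out = []
    for z0 in range(0, shape[0], block):
        for y0 in range(0, shape[1], block):
            for x0 in range(0, shape[2], block):
                out.append(tuple(
                    slice(origin, min(origin + block + overlap, dim))
                    for origin, dim in zip((z0, y0, x0), shape)))
    return out

# Each block would then be meshed independently, with
# lock_boundary_vertices = True so that vertices on the shared
# 1-voxel overlap stay fixed across neighboring blocks.
blocks = block_slices((2500, 2000, 1800))
```

A 2500 x 2000 x 1800 volume yields a 3 x 2 x 2 grid of 12 blocks, each small enough to mesh on one machine.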
We've got neuroglancer running on a local server, and we'd like to incorporate meshes. We noticed the `get_object_mesh` function here: https://github.com/google/neuroglancer/blob/master/python/neuroglancer/volume.py#L178. Do you have any advice on how to flesh it out? How did you include meshes in the demo? Thanks!