jewettaij opened this issue 3 years ago
Here's a workaround to address the slow computation time:
The computation time can be reduced by using a mesh with a smaller number of polygons. You can reduce the number of polygons by decimating the mesh (for example, in meshlab).
Decimation might result in a mesh with topological problems (such as holes or cavities), so it does not always work. To reduce the chance of these kinds of problems, you might want to smooth the mesh beforehand (for example, by opening the PLY file in meshlab and selecting the "Smoothing, Fairing and Deformation" -> "HC Laplacian Smoothing" menu option).
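If you prefer to script this instead of using the meshlab GUI, something along these lines should work with pyvista (which this program already relies on). This is only a minimal sketch: the file names and the decimation fraction are placeholders, and pyvista's smooth() is ordinary Laplacian smoothing rather than meshlab's HC Laplacian variant.

```python
import pyvista as pv

mesh = pv.read("surface.ply")       # placeholder input file name

# Optional: smooth first to reduce the chance of holes/cavities
# appearing after decimation (plain Laplacian, not HC Laplacian).
mesh = mesh.smooth(n_iter=50)

# Decimate: target_reduction=0.9 keeps roughly 10% of the triangles.
# decimate() requires an all-triangle mesh, hence triangulate().
mesh = mesh.triangulate().decimate(0.9)

mesh.save("surface_decimated.ply")  # placeholder output file name
```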
Reducing the number of polygons in the mesh makes the "voxelize_mesh.py" program run faster, but it does not seem to reduce memory consumption. I still don't know how to solve that problem.
As of 2021-8-16, computers with terabytes of RAM can be rented from Amazon EC2 for about $13 per hour. (Learning how to use cloud services like EC2 is never a bad skill to have...)
I realize this isn't a very satisfying solution for most users.
The RAM required by this program is 25-100 times the size of the original 3D image (tomogram) that we used to extract the surface mesh. This occurs because I am using 3rd-party tools (pyvista, vtk) to handle the computation, instead of writing a new program from scratch. The computation is also slow. Unfortunately, fixing this is not a priority for me yet. -A 2020-12-15
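For a sense of scale, here is a back-of-the-envelope estimate based on the 25-100x figure above. The tomogram dimensions and the 4-byte voxel size are made-up assumptions for illustration only.

```python
# Rough RAM estimate using the 25-100x overhead mentioned above.
nx, ny, nz = 1000, 1000, 500          # hypothetical tomogram size (voxels)
tomogram_gb = nx * ny * nz * 4 / 1e9  # assuming 4-byte (float32) voxels, ~2 GB
print(f"tomogram size: {tomogram_gb:.1f} GB")
print(f"estimated RAM: {25 * tomogram_gb:.0f}-{100 * tomogram_gb:.0f} GB")
```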