cnr-isti-vclab / meshlab

The open source mesh processing system
http://www.meshlab.net
GNU General Public License v3.0

Uniform mesh resampling using 25 GB+ memory, MeshLab 2016 only using 1 GB #1114

Crofttime opened 3 years ago

Crofttime commented 3 years ago

When I try to offset a mesh (attached) using PyMeshLab's Uniform Mesh Resampling filter, memory usage approaches 97% of my 32 GB of RAM, and sometimes it locks my computer up. I tried it with MeshLab 2021.07 and it did the same thing; with MeshLab 2016, however, it only uses about 1 GB of memory. This is on Windows 10, monitoring memory usage with Task Manager.

import pymeshlab as ml

ms = ml.MeshSet()

# Load the attached mesh
ms.load_new_mesh("_offsetmeshes.stl")

# The resampling step below is what drives memory usage up
ms.apply_filter("uniform_mesh_resampling", cellsize=0.2500, offset=0.5000)
ms.apply_filter("remove_isolated_pieces_wrt_diameter", mincomponentdiag=ml.Percentage(5))

ms.save_current_mesh("_offsetmesh.ply")

_offsetmeshes.zip

alemuntoni commented 3 years ago

This issue has been affecting MeshLab since before 2020.05. Thanks for reporting, I'll investigate.

PhilNad commented 3 years ago

I had a similar issue with the uniform_mesh_resampling filter. I built the latest version of MeshLab, but the issue was still present. Using pymeshlab.Percentage() to specify the filter's offset fixed it. The documentation does give the correct type for the parameter, but it might be a good idea to make it more obvious to readers, or to raise an error when a float/int is passed instead of a pymeshlab.Percentage.

This worked for me: ms.uniform_mesh_resampling(offset=ml.Percentage(55), absdist=True)
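
Below is a minimal end-to-end sketch of that workaround applied to the reproduction script from the first post. It assumes the same 2021-era PyMeshLab API as the call above; cellsize is left at its default, and the offset value 55 is simply the example value from that call, not a recommendation.

import pymeshlab as ml

ms = ml.MeshSet()
ms.load_new_mesh("_offsetmeshes.stl")  # mesh attached to the original report

# Wrapping the offset in ml.Percentage(), as in the call above, is what avoided
# the runaway memory usage; absdist is carried over unchanged from that call.
ms.uniform_mesh_resampling(offset=ml.Percentage(55), absdist=True)

ms.save_current_mesh("_offsetmesh.ply")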

alemuntoni commented 3 years ago

@PhilNad

but it might be a good idea to make it more obvious to readers, or to raise an error when a float/int is passed instead of a pymeshlab.Percentage

It will work the way you described in the next version of PyMeshLab: Percentage arguments will only accept values of type Percentage or of the newly created type AbsoluteValue (just a wrapper around float). Plain floats won't be accepted anymore, forcing the user to double-check the code and the documentation :)
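
For illustration, a short sketch of what calls against that stricter interface might look like, assuming the type names described above (Percentage and AbsoluteValue) and that a bare float is rejected; exact names and behavior may differ in the released version.

import pymeshlab as ml

ms = ml.MeshSet()
ms.load_new_mesh("_offsetmeshes.stl")

# Offset given as a relative Percentage value.
ms.uniform_mesh_resampling(offset=ml.Percentage(5))

# Offset given as an absolute distance via the new AbsoluteValue wrapper.
ms.uniform_mesh_resampling(offset=ml.AbsoluteValue(0.5))

# A bare float is no longer accepted and should fail with an error:
# ms.uniform_mesh_resampling(offset=0.5)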

Anyway, the bug in the Uniform Mesh Resampling filter still persists: for some parameter values, the algorithm keeps creating KdTree nodes in an infinite loop, and for that reason it uses up all the available RAM until it suddenly crashes.

Crofttime commented 3 years ago

Thanks for the info! I have a workaround using MeshLab 2016 for now. Also, I've been using it to create 1 mm thick meshes for 3D printing, so I almost always use absolute distances; it will be nice to be able to specify that explicitly. Thanks again for all the great work on this project!

featherJ commented 2 years ago

@alemuntoni

It will work the way you described in the next version of PyMeshLab: Percentage arguments will only accept values of type Percentage or of the newly created type AbsoluteValue (just a wrapper around float). Plain floats won't be accepted anymore, forcing the user to double-check the code and the documentation :)

Anyway, the bug in the Uniform Mesh Resampling filter still persists: for some parameter values, the algorithm keeps creating KdTree nodes in an infinite loop, and for that reason it uses up all the available RAM until it suddenly crashes.

I found that it should not actually be an infinite loop: in kdtree_face.h there is also a depth limit of targetMaxDepth = 64. The problem may be in the following code:

// Classify the face against the splitting plane: bit 0 means at least one
// vertex lies on the left side, bit 1 on the right side; a vertex exactly
// on the plane sets both bits.
unsigned int state = 0;
FacePointer fp = parent.list[i];
for (int j = 0; j < 3; j++)
{
    if (fp->P(j)[dim] < parent.splitValue)
        state |= (1 << 0);
    else if (fp->P(j)[dim] > parent.splitValue)
        state |= (1 << 1);
    else
    {
        state |= (1 << 0);
        state |= (1 << 1);
    }
}
// A face that straddles (or touches) the plane is pushed into both children,
// so neither child's list is guaranteed to be smaller than the parent's.
if (state & (1 << 0))
{
    leftChild.list.push_back(fp);
    leftChild.aabb.Add(fp->P(0));
    leftChild.aabb.Add(fp->P(1));
    leftChild.aabb.Add(fp->P(2));
}
if (state & (1 << 1))
{
    rightChild.list.push_back(fp);
    rightChild.aabb.Add(fp->P(0));
    rightChild.aabb.Add(fp->P(1));
    rightChild.aabb.Add(fp->P(2));
}

As a result, the length of a node's list may never drop below targetCellSize, so termination can only come from the targetMaxDepth limit. And a tree of depth 64 is enormous, so the program effectively stays stuck in tree creation the whole time.

One workaround I can think of at the moment is to reduce the value of targetMaxDepth so the program doesn't crash, but this doesn't seem to be the best solution.
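
To make that concrete, here is a small standalone sketch (Python, not the actual vcglib code) of the same per-face classification. When many faces touch or straddle the splitting plane, both children end up as large as the parent, so the size-based stopping condition never kicks in.

# Toy model of the split above. Each "face" is the triple of its vertex
# coordinates along the split axis: a vertex left of the plane marks the face
# for the left child, a vertex right of it for the right child, and a vertex
# exactly on the plane marks it for both.
def split(faces, split_value):
    left, right = [], []
    for face in faces:
        if any(v <= split_value for v in face):
            left.append(face)
        if any(v >= split_value for v in face):
            right.append(face)
    return left, right

# 100 faces that all straddle the plane at 0.0: both children keep all of them.
faces = [(-1.0, 1.0, 1.0)] * 100
left, right = split(faces, 0.0)
print(len(faces), len(left), len(right))  # -> 100 100 100

# Since neither list shrinks, the check against targetCellSize never succeeds,
# recursion stops only at targetMaxDepth, and faces are duplicated at every
# level on the way down.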

jjpr commented 1 year ago

@alemuntoni

for some parameter values, the algorithm keeps creating KdTree nodes in an infinite loop

Can you give any guidelines about which parameters trigger this behavior? I've tried a number of small tweaks to the parameters I'm using, but everything I've tried rapidly balloons to 75 GB of memory (the machine only has 64 GB; it's a Mac and does the dynamic "memory pressure" thing, but it's obviously swapping).