siavashk / pycpd

Pure Numpy Implementation of the Coherent Point Drift Algorithm
MIT License

MemoryError for relatively big point clouds #14

Closed ka-petrov closed 6 years ago

ka-petrov commented 6 years ago

I'm getting the following stack trace on an attempt to register 3D point clouds with about 40K points each:

  File "/home/theuser/anaconda3/lib/python3.5/site-packages/pycpd/affine_registration.py", line 24, in register
    self.initialize()
  File "/home/theuser/anaconda3/lib/python3.5/site-packages/pycpd/affine_registration.py", line 86, in initialize
    XX = np.tile(XX, (self.M, 1, 1))
  File "/home/theuser/anaconda3/lib/python3.5/site-packages/numpy/lib/shape_base.py", line 912, in tile
    c = c.reshape(-1, n).repeat(nrep, 0)
MemoryError

This happens with both rigid_registration and affine_registration. Is the algorithm supposed to consume this much memory? How can I estimate the memory usage in 3D from the number of points? (I have 16 GB of RAM, if it matters.)

siavashk commented 6 years ago

The P matrix, calculated in the E-step, has M × N elements, where M and N are the numbers of points in the moving and fixed point clouds, respectively.

This means that you are allocating (40K)^2 ≈ 1.6 × 10^9 floating-point numbers for P alone. At 64 bits each that is about 102 gigabits, i.e. roughly 12.8 GB, and the tiled M × N × D intermediate arrays built in initialize() need several times more.
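A quick back-of-the-envelope sketch of that estimate (a minimal sketch assuming float64, D = 3, and that roughly three M × N × D tiled arrays coexist during initialization; the exact count depends on the pycpd version):

```python
def cpd_dense_memory_gb(M, N, D=3, bytes_per_float=8):
    """Rough lower bound on peak memory for dense CPD, in GB.

    P is M x N; building it via np.tile creates M x N x D
    intermediates (XX, YY, and their difference -- an assumed count of 3).
    """
    p_matrix = M * N * bytes_per_float
    tiled = 3 * M * N * D * bytes_per_float
    return (p_matrix + tiled) / 1e9

print(cpd_dense_memory_gb(40_000, 40_000))  # ~128 GB for two 40K-point clouds
```

On a 16 GB machine this puts the practical limit somewhere around 10K points per cloud.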

ka-petrov commented 6 years ago

OK, thanks

hodaatef commented 5 years ago

I have the same problem. What can I do? Has anyone solved it? @imaginary-unit

ka-petrov commented 5 years ago

@hodaatef What I ended up doing is just using a random sub-sample of my point cloud, which fits in memory. That's the only solution I can think of.
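A minimal sketch of that workaround in NumPy (the 5,000-point budget and the `fixed` / `moving` arrays are placeholders; pick whatever fits your RAM):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed so the subsample is reproducible

def random_subsample(points, max_points):
    """Keep at most max_points rows of an (N, 3) array, chosen uniformly."""
    if len(points) <= max_points:
        return points
    idx = rng.choice(len(points), size=max_points, replace=False)
    return points[idx]

fixed_small = random_subsample(fixed, 5000)    # placeholder arrays
moving_small = random_subsample(moving, 5000)
```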

siavashk commented 5 years ago

Random subsampling might be sufficient for you. A better option is spatial subsampling: http://pointclouds.org/documentation/tutorials/voxel_grid.php#voxelgrid

That way you will be able to make sure that the subsampled point cloud has roughly the same spatial distribution as the original point cloud.
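The linked tutorial is for PCL (C++); in Python, Open3D exposes the same voxel-grid idea. A minimal sketch (the voxel size is data-dependent and illustrative, and the method name below is from recent Open3D releases; older ones used a free function instead):

```python
import numpy as np
import open3d as o3d

def voxel_subsample(points, voxel_size):
    """Replace all points inside each voxel_size-sided cube by their centroid."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    down = pcd.voxel_down_sample(voxel_size=voxel_size)  # Open3D >= 0.8 API
    return np.asarray(down.points)

fixed_small = voxel_subsample(fixed, voxel_size=0.05)  # units of your data
```

Unlike a random subsample, this keeps point density roughly uniform across the cloud.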

sandwich25 commented 4 years ago

@siavashk I did voxel downsampling with open3d, but the point cloud is still too large and the MemoryError still occurs. Any insights on how I should proceed?

Thanks in advance.

siavashk commented 4 years ago

@sandwich25 do you have an estimate of how much your point cloud was downsampled? You can get this by computing `number_of_points_after_downsampling / number_of_points_before_downsampling`. I would suggest downsampling more until the issue goes away.
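A small sketch of that loop, reusing the `voxel_subsample` helper from the earlier comment (the target point count, starting voxel size, and growth factor are illustrative guesses):

```python
def downsample_until(points, target_points, voxel_size=0.01, growth=1.5):
    """Grow the voxel size until the cloud is small enough for dense CPD."""
    down = voxel_subsample(points, voxel_size)
    while len(down) > target_points:
        voxel_size *= growth
        down = voxel_subsample(points, voxel_size)
    ratio = len(down) / len(points)  # the downsampling ratio mentioned above
    print(f"kept {len(down)} / {len(points)} points (ratio {ratio:.3f})")
    return down
```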