WangYueFt / dgcnn

MIT License

Aborted (core dumped) if I process too many points at once #79

Open fnardmann opened 3 years ago

fnardmann commented 3 years ago

I plugged the DGCNN model into my semantic segmentation framework, in which I use other models like PointNet or PointNet++ without problems. At training time everything is fine and I get pretty good accuracies for my airborne LiDAR data (here I randomly sample 8192 points per tile, so everything works). However, at test time I want to predict all points inside one tile, and for a tile with more than 50000 points I get a memory error:

Aborted (core dumped)

I guess the problem is in the pairwise_distance function. This function computes an adjacency matrix, and I think my GPU memory can't handle an array of shape 50000 x 50000. I understand that tf.matmul is very fast on the GPU, but I would like to try a workaround that computes only the k nearest neighbors, without this huge memory overhead. Is there anything like this? I know how to use a KDTree in plain Python, but I have not yet found a way to use it with TensorFlow placeholders...
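One way to avoid materializing the full N x N distance matrix is to compute distances in row chunks and keep only the k best indices per chunk. The sketch below illustrates that chunking idea in plain NumPy (not the repo's TensorFlow code); `knn_chunked` and its parameters are hypothetical names, and the same pattern could be ported to TF ops.

```python
import numpy as np

def knn_chunked(points, k, chunk_size=1024):
    """Indices of the k nearest neighbors of every point, computed
    chunk by chunk so peak memory is chunk_size x N instead of N x N.

    points: (N, D) array; returns an (N, k) int array of indices
    (each point's nearest neighbor is itself, at distance 0).
    """
    n = points.shape[0]
    sq_norms = np.sum(points ** 2, axis=1)            # (N,)
    neighbors = np.empty((n, k), dtype=np.int64)
    for start in range(0, n, chunk_size):
        end = min(start + chunk_size, n)
        chunk = points[start:end]                     # (C, D)
        # Squared distances from this chunk to all points: (C, N)
        d2 = (sq_norms[start:end, None]
              - 2.0 * chunk @ points.T
              + sq_norms[None, :])
        # k smallest per row (argpartition is unordered), then sort
        # those k by their actual distance.
        idx = np.argpartition(d2, k, axis=1)[:, :k]
        order = np.argsort(np.take_along_axis(d2, idx, axis=1), axis=1)
        neighbors[start:end] = np.take_along_axis(idx, order, axis=1)
    return neighbors
```

This trades one big matmul for several smaller ones, so it is somewhat slower on GPU but its memory use is bounded by the chunk size rather than the tile size.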

I know I can work around this problem by using smaller tiles or by downsampling my point clouds, but I would really like to fix it internally...

I list some basic information about my implementation here:

Thanks in advance for any tips!

nazmicancalik commented 2 years ago

I am having a similar issue. Were you able to find a solution? I would appreciate it if you could share it here. Thanks in advance.

fnardmann commented 2 years ago

Unfortunately, I have not found a solution to this. But maybe you could use tf.py_function to build a kd-tree or something like that in plain Python.
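For context, the suggestion above amounts to writing the neighbor search as an ordinary Python function and letting TensorFlow call it via tf.py_function. A minimal sketch of such a function, using SciPy's k-d tree (the function name is my own; this is not code from the repo):

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_indices(points, k):
    """Indices of the k nearest neighbors of every point in `points`.

    A k-d tree query needs only O(N * k) memory, avoiding the dense
    N x N pairwise-distance matrix entirely. A plain-Python callable
    like this is what one would hand to tf.py_function to use it
    inside a TensorFlow graph.
    """
    tree = cKDTree(points)
    # query returns (distances, indices); each point's first
    # neighbor is itself, at distance 0.
    _, idx = tree.query(points, k=k)
    return idx.astype(np.int64)
```

The trade-off is that tf.py_function drops back to eager Python (no GPU, no graph serialization), so it is most useful at inference time, where the N x N matrix is the bottleneck.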

As an alternative graph-based model I used GACNet and got very good results with it!