fnardmann opened this issue 3 years ago
I am having a similar issue. Were you able to find a solution? I would appreciate it if you could share it here if you found a way to work around this. Thanks in advance.
Unfortunately I have not found a solution for this. But maybe you could use `tf.py_function` to build a k-d tree or something like that in plain Python.
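For example, something along these lines might work (untested sketch; `knn_indices` is just an illustrative helper name, and it assumes the input is an `[N, 3]` point tensor):

```python
import numpy as np
import tensorflow as tf
from scipy.spatial import cKDTree

def knn_indices(points, k=20):
    """Return [N, k] int32 neighbor indices for an [N, 3] point tensor.

    The k-d tree query runs on the CPU via SciPy, so no N x N distance
    matrix is ever materialized.
    """
    def _query(pts):
        pts = pts.numpy()
        tree = cKDTree(pts)
        # query returns (distances, indices); indices has shape [N, k]
        _, idx = tree.query(pts, k=k)
        return idx.astype(np.int32)

    idx = tf.py_function(_query, [points], tf.int32)
    idx.set_shape([points.shape[0], k])
    return idx
```

In a TF1 graph, the older `tf.py_func`, which passes plain NumPy arrays, should work the same way.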
As an alternative graph-based model I used GACNet and got very good results with it!
I plugged the DGCNN model into my semantic segmentation framework, in which I use other models like PointNet or PointNet++ without problems. At training time everything is fine and I get pretty good accuracies on my airborne LiDAR data (there I randomly sample 8192 points per tile, so memory is not an issue). However, at test time I want to predict all points inside one tile, and for a tile with more than 50000 points I get a memory error:
Aborted (core dumped)
I guess the problem is in the `pairwise_distance` function. This function computes a dense adjacency matrix, and I think my GPU memory can't handle an array of shape 50000 x 50000 (a float32 matrix that size alone is about 10 GB). I understand that the `tf.matmul` function is very fast on the GPU, but I would like to try a workaround that computes the k nearest neighbors directly, without this huge memory overhead. Is there anything like this? I know how to use a KDTree in plain Python, but I have not found a way yet to use it with TensorFlow placeholders... I know I can work around this problem by using smaller tiles or by downsampling my point clouds, but I really would like to fix this internally.
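What I imagine is computing the distances block by block and keeping only the k smallest per block, roughly like this (untested sketch; `knn_blockwise` and `block_size` are names I made up, and it assumes the number of points is known statically):

```python
import tensorflow as tf

def knn_blockwise(points, k=20, block_size=2048):
    """k-NN indices computed in row blocks to cap peak memory.

    points: [N, 3] tensor with a static point count N. Only a
    [block_size, N] slice of the distance matrix exists at any time,
    instead of the full [N, N] matrix.
    """
    n = points.shape[0]  # assumes N is known at graph-construction time
    sq_norms = tf.reduce_sum(tf.square(points), axis=-1)  # [N]
    blocks = []
    for start in range(0, n, block_size):
        block = points[start:start + block_size]           # [B, 3]
        # ||a - b||^2 = ||a||^2 - 2 a.b + ||b||^2, one row block at a time
        d = (tf.reduce_sum(tf.square(block), axis=-1, keepdims=True)
             - 2.0 * tf.matmul(block, points, transpose_b=True)
             + sq_norms[tf.newaxis, :])                    # [B, N]
        # nearest neighbors = largest negative distances; like top_k on
        # the full matrix, each point shows up as its own first neighbor
        _, idx = tf.nn.top_k(-d, k=k)
        blocks.append(idx)
    return tf.concat(blocks, axis=0)                       # [N, k]
```

With `block_size = 2048` and 50000 points, each slice is only about 400 MB instead of the roughly 10 GB full matrix, and the result should match `top_k` on the full matrix.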
Here is some basic information about my implementation:
Thanks in advance for any tips!