What kind of GPU do you have? It seems that the problem comes from set_input, where the data are transferred from CPU to GPU. Can you also run print(data_s)? Also, does it work if you reduce the number of points? You can use the FixedPoints transform from torch_geometric here.
Hi humanpose1,
I checked with my reduced point cloud data and it worked perfectly (the files are only 40 MB). If I use larger data, I get RuntimeError: CUDA out of memory. Since my GPU has only 2 GB of memory, it cannot handle data over 100 MB, and I am trying to find a way to test with this data. Could I use the FixedPoints function to solve this problem, i.e. choose random points instead of using all of the data to extract features? Thank you.
You can use the FixedPoints function to reduce the number of points. The drawback is that it can decrease performance; it depends on the scene.
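For illustration, here is a minimal sketch of applying FixedPoints before feature extraction; the point budget of 16384 and the `data_s` variable name are only examples, not values taken from this issue:

```python
import torch
from torch_geometric.data import Data
from torch_geometric.transforms import FixedPoints

# Example point cloud with one million points; in practice data_s would
# come from your own loader instead of random values.
data_s = Data(pos=torch.rand(1_000_000, 3))

# Randomly keep a fixed number of points so the sample fits on a small GPU.
subsample = FixedPoints(16384, replace=False)
data_s = subsample(data_s)

print(data_s)  # Data(pos=[16384, 3])
```

Sampling fewer points lowers memory use at the cost of some detail in the extracted features, which is the trade-off mentioned above.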
Okay, thank you so much. I will do it this way. Thanks.
Hi humanpose1,
I have tested your code and it worked perfectly, but I get a CUDA out of memory error when testing a file of about 900 MB.
I also tried to inspect the memory usage with

```python
import torch
import gc

# Print every tensor (or object wrapping a tensor) that the garbage
# collector can still see, to find what is occupying GPU memory.
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
    except Exception:
        pass
```
but it is still not working. Could you give me some ideas about this problem? Thank you so much.
RuntimeError Traceback (most recent call last)