humanpose1 / MS-SVConv

Compute descriptors for 3D point cloud registration using a multi-scale sparse voxel architecture
MIT License

CUDA out of memory #5

Closed trinhnhut-gif closed 3 years ago

trinhnhut-gif commented 3 years ago

Hi humanpose1

I have tested your code and it worked perfectly, but I hit a CUDA out of memory error when testing a file of about 900 MB.

I also tried to inspect the live tensors with

```python
import gc

import torch

# Print every tensor that is still referenced, to see what occupies GPU memory.
for obj in gc.get_objects():
    try:
        if torch.is_tensor(obj) or (hasattr(obj, 'data') and torch.is_tensor(obj.data)):
            print(type(obj), obj.size())
    except Exception:
        pass
```

but it is still not working. Could you give me some ideas about this problem? Thank you so much.


```
RuntimeError                              Traceback (most recent call last)
<ipython-input> in <module>
      3 # Compute the matches
      4 with torch.no_grad():
----> 5     model.set_input(data_s, "cuda")
      6     feat_s = model.forward()
      7     model.set_input(data_t, "cuda")

~/.local/lib/python3.8/site-packages/torch_points3d/models/registration/ms_svconv3d.py in set_input(self, data, device)
    104 def set_input(self, data, device):
    105     self.input, self.input_target = data.to_data()
--> 106     self.input = self.input.to(device)
    107     if getattr(data, "pos_target", None) is not None:
    108         self.input_target = self.input_target.to(device)

~/.local/lib/python3.8/site-packages/torch_geometric/data/data.py in to(self, device, *keys, **kwargs)
    338     If :obj:`*keys` is not given, the conversion is applied to all present
    339     attributes."""
--> 340     return self.apply(lambda x: x.to(device, **kwargs), *keys)
    341
    342 def cpu(self, *keys):

~/.local/lib/python3.8/site-packages/torch_geometric/data/data.py in apply(self, func, *keys)
    324     """
    325     for key, item in self(*keys):
--> 326         self[key] = self.__apply__(item, func)
    327     return self
    328

~/.local/lib/python3.8/site-packages/torch_geometric/data/data.py in __apply__(self, item, func)
    303 def __apply__(self, item, func):
    304     if torch.is_tensor(item):
--> 305         return func(item)
    306     elif isinstance(item, SparseTensor):
    307         # Not all apply methods are supported for `SparseTensor`, e.g.,

~/.local/lib/python3.8/site-packages/torch_geometric/data/data.py in <lambda>(x)
    338     If :obj:`*keys` is not given, the conversion is applied to all present
    339     attributes."""
--> 340     return self.apply(lambda x: x.to(device, **kwargs), *keys)
    341
    342 def cpu(self, *keys):

RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 1.95 GiB total capacity; 332.44 MiB already allocated; 11.81 MiB free; 350.00 MiB reserved in total by PyTorch)
```
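As a side note, the `gc.get_objects()` loop above only prints the tensors that are still referenced; it does not release any GPU memory. A minimal sketch of what can actually be freed (dropping dead references and emptying PyTorch's caching allocator) is below; note that this cannot help if the model plus the input simply does not fit on the card:

```python
import gc

import torch

# Collect unreachable Python objects so their tensors can be released.
gc.collect()

# Return cached, unused blocks from PyTorch's allocator back to the driver.
# This reduces "reserved" memory but does not shrink live allocations.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
    print(torch.cuda.memory_allocated(0), "bytes currently allocated on GPU 0")
```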
humanpose1 commented 3 years ago

What kind of GPU do you have? It seems that the problem comes from set_input, where the data are transferred from CPU to GPU. Can you also run print(data_s)? Also, does it work if you reduce the number of points? You can use the FixedPoints transform of torch_geometric here.

trinhnhut-gif commented 3 years ago

Hi humanpose1,

I checked with my reduced point cloud data and it worked perfectly (the files are only 40 MB). With big data, I get RuntimeError: CUDA out of memory. My GPU has only 2 GB, so it cannot handle data over 100 MB, and I am trying to find a way to test with this data. Could I use the FixedPoints function to solve this problem, i.e. choose random points instead of the whole cloud to extract features? Thank you

humanpose1 commented 3 years ago

You can use the FixedPoints function to reduce the number of points. The drawback is that it can decrease the performance; it depends on the scene.

trinhnhut-gif commented 3 years ago

Okay, thank you so much. I will do it this way. Thanks.