I am using the challenge branch to run inference for this project, but I get an error: CUDA out of memory. I searched many articles on Google but couldn't resolve it.
My GPU is an NVIDIA GeForce RTX 3050 Ti Laptop GPU (4 GB VRAM). Here is the error log:
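For reference, this is a quick way to confirm the device and total VRAM that PyTorch sees (a minimal sketch assuming PyTorch is installed; it falls back to a note when no CUDA device is present):

```python
import torch

def describe_gpu() -> str:
    """Return the name and total memory of the first CUDA device, if any."""
    if not torch.cuda.is_available():
        return "no CUDA device available"
    props = torch.cuda.get_device_properties(0)
    # total_memory is in bytes; convert to GiB for readability
    return f"{props.name}: {props.total_memory / 1024**3:.2f} GiB total"

print(describe_gpu())
```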
Processing: 0 : D:/ToothGroupNetwork-challenge_branch/input_path\0EJBIPTC\0EJBIPTC_lower.obj
CUDA out of memory. Tried to allocate 972.00 MiB (GPU 0; 4.00 GiB total capacity; 3.24 GiB already allocated; 0 bytes free; 3.28 GiB reserved in
total by PyTorch)
Traceback (most recent call last):
File "D:\ToothGroupNetwork-challenge_branch\predict_utils.py", line 101, in predict
pred_result = self.chl_pipeline(scan_path)
File "D:\ToothGroupNetwork-challenge_branch\inference_pipeline_final.py", line 212, in __call__
bdl_results = self.get_bdl_module_results(input_cuda_bdl_feats, sampled_boundary_seg_label, self.bdl_module)
File "D:\ToothGroupNetwork-challenge_branch\inference_pipeline_final.py", line 720, in get_bdl_module_results
output = base_model([points, sampled_boundary_seg_label], test=True)
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\ToothGroupNetwork-challenge_branch\models\tf_cbl_two_step_half_num_model.py", line 182, in forward
sem_2, offset_2, mask2, = self.second_ins_cent_model([crop_input_features])
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\ToothGroupNetwork-challenge_branch\models\modules\cbl_point_transformer\cbl_point_transformer_module.py", line 124, in forward
p1, x1, o1 = self.enc1([p0, x0, o0])
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\container.py", line 124, in forward
input = module(input)
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\ToothGroupNetwork-challenge_branch\models\modules\cbl_point_transformer\blocks.py", line 132, in forward
x = self.relu(self.bn2(self.transformer2([p, x, o]))) # - seems like trans/convolute - bn - relu
File "C:\Users\Lee\AppData\Local\Programs\Python\Python39\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "D:\ToothGroupNetwork-challenge_branch\models\modules\cbl_point_transformer\blocks.py", line 40, in forward
w = x_k - x_q.unsqueeze(1) + p_r.view(p_r.shape[0], p_r.shape[1], self.out_planes // self.mid_planes, self.mid_planes).sum(2) # (n, nsample, c)
RuntimeError: CUDA out of memory. Tried to allocate 972.00 MiB (GPU 0; 4.00 GiB total capacity; 3.24 GiB already allocated; 0 bytes free; 3.28 GiB reserved in total by PyTorch)
Backend TkAgg is interactive backend. Turning interactive mode on.
I may have made a mistake somewhere. Can anybody help me?
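For context, this is roughly the kind of mitigation the articles I found suggest (an illustrative sketch, not code from the repository; `run_inference` and `pipeline` are placeholder names standing in for the project's `chl_pipeline` call):

```python
import torch

def run_inference(pipeline, scan_path):
    """Call an inference pipeline with common memory-saving measures.

    Illustrative sketch only: `pipeline` stands in for the project's
    chl_pipeline object, and the measures here are the generic advice
    (no autograd graph, released cache), not a confirmed fix.
    """
    torch.cuda.empty_cache()   # return cached, unused blocks to the driver
    with torch.no_grad():      # skip storing activations for backprop
        return pipeline(scan_path)
```

Even with this, the allocation at blocks.py line 40 still seems to exceed the 4 GB on my card.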