YihangChen-ee / HAC

:house: [ECCV 2024] PyTorch implementation of 'HAC: Hash-grid Assisted Context for 3D Gaussian Splatting Compression'

CUDA out of memory during rendering process #9

Open wcjj1236 opened 3 months ago

wcjj1236 commented 3 months ago

Hey, do you know how to deal with CUDA out of memory during the rendering process?

encoding_param_num=1766272, size=0.2105560302734375MB. [08/06 19:57:10]
Reading camera 251/251 [08/06 19:57:11]
start fetching data from ply file [08/06 19:57:11]
Loading Training Cameras [08/06 19:57:11]
Loading Test Cameras [08/06 19:57:15]
Initial voxel_size: 0.01 [08/06 19:57:16]
Number of points at initialisation : 114293 [08/06 19:57:16]
anchor_bound_updated [08/06 19:57:16]
Training progress: 100%|███████████████████████████████████████████| 1000/1000 [01:00<00:00, 16.49it/s, Loss=0.0915876]
2024-06-08 19:58:17,128 - INFO: [ITER 1000] Saving Gaussians
2024-06-08 19:58:18,246 - INFO: Total Training time: 60.38757371902466
2024-06-08 19:58:18,351 - INFO: Training complete.
2024-06-08 19:58:18,351 - INFO: Starting Rendering~
hash_params: True 4 13 (18, 24, 33, 44, 59, 80, 108, 148, 201, 275, 376, 514) 15 (130, 258, 514, 1026) True False False [08/06 19:58:18]
encoding_param_num=1766272, size=0.2105560302734375MB. [08/06 19:58:18]
Loading trained model at iteration 1000 [08/06 19:58:18]
Reading camera 251/251 [08/06 19:58:19]
start fetching data from ply file [08/06 19:58:19]
Loading Training Cameras [08/06 19:58:19]
Loading Test Cameras [08/06 19:58:22]
Rendering progress: 0%| | 0/32 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 669, in <module>
    visible_count = render_sets(args, lp.extract(args), -1, pp.extract(args), wandb=wandb, logger=logger, x_bound_min=x_bound_min, x_bound_max=x_bound_max)
  File "train.py", line 472, in render_sets
    t_test_list, visible_count = render_set(dataset.model_path, "test", scene.loaded_iter, scene.getTestCameras(), gaussians, pipeline, background)
  File "train.py", line 403, in render_set
    render_pkg = render(view, gaussians, pipeline, background, visible_mask=voxel_visible_mask)
  File "C:\Users\jay\Desktop\HAC\gaussian_renderer\__init__.py", line 225, in render
    cov3D_precomp = None)
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\torch\nn\modules\module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 222, in forward
    raster_settings,
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 41, in rasterize_gaussians
    raster_settings,
  File "C:\Users\jay\AppData\Local\anaconda3\envs\HAC_env\lib\site-packages\diff_gaussian_rasterization\__init__.py", line 92, in forward
    num_rendered, color, radii, geomBuffer, binningBuffer, imgBuffer = _C.rasterize_gaussians(*args)
RuntimeError: CUDA out of memory. Tried to allocate 8.54 GiB (GPU 0; 8.00 GiB total capacity; 147.41 MiB already allocated; 5.60 GiB free; 264.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
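Editor's note: the failing allocation (8.54 GiB) already exceeds the card's 8 GiB total capacity, so the fragmentation hint in the message is unlikely to be sufficient on its own, but it is the first thing the error suggests trying. A minimal sketch of that suggestion, assuming an illustrative split size of 128 MB (not a value from the repo):

```python
# Minimal sketch of the error message's own suggestion: cap the CUDA caching
# allocator's block split size to reduce fragmentation. The value 128 is an
# illustrative assumption, not a setting from the HAC repo, and the variable
# must be set before the first CUDA allocation.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
print(torch.cuda.get_device_name(0))  # sanity check that CUDA is visible
```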

YihangChen-ee commented 3 months ago

Hi, could you please check your GPU type? For reference, we use a 4090 GPU, which has 24 GB of memory.

BTW, OOM is also related to the scene scale.

wcjj1236 commented 3 months ago

Thank you for the information. My computer is only equipped with a 2070 GPU. I was eventually able to run the code by optimizing GPU memory usage.
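Editor's note: a minimal sketch of the kind of memory-saving measures meant by "optimizing GPU memory usage" is shown below; `render_fn` and the `views` list are placeholder names, not the HAC API.

```python
# Minimal sketch (placeholder names, not the HAC codebase) of rendering under
# torch.no_grad() and releasing cached memory between views, which lowers
# peak GPU usage on small cards.
import torch

@torch.no_grad()                 # no autograd graph is needed for rendering
def render_views(views, render_fn):
    outputs = []
    for view in views:
        image = render_fn(view)               # render one camera at a time
        outputs.append(image.detach().cpu())  # move the result off the GPU
        torch.cuda.empty_cache()              # return cached blocks to the driver
    return outputs
```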
