minghanqin / LangSplat

Official implementation of the paper "LangSplat: 3D Language Gaussian Splatting" [CVPR2024 Highlight]
https://langsplat.github.io/

RuntimeError: CUDA out of memory. #63

Open · kae1111 opened this issue 3 months ago

kae1111 commented 3 months ago

When I run `sh eval.sh`, it shows:

```
~/LangSplat/eval$ sh eval.sh
ModuleList(
  (0): Linear(in_features=512, out_features=256, bias=True)
  (1): BatchNorm1d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (2): ReLU()
  (3): Linear(in_features=256, out_features=128, bias=True)
  (4): BatchNorm1d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (5): ReLU()
  (6): Linear(in_features=128, out_features=64, bias=True)
  (7): BatchNorm1d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (8): ReLU()
  (9): Linear(in_features=64, out_features=32, bias=True)
  (10): BatchNorm1d(32, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (11): ReLU()
  (12): Linear(in_features=32, out_features=3, bias=True)
)
ModuleList(
  (0): Linear(in_features=3, out_features=16, bias=True)
  (1): ReLU()
  (2): Linear(in_features=16, out_features=32, bias=True)
  (3): ReLU()
  (4): Linear(in_features=32, out_features=64, bias=True)
  (5): ReLU()
  (6): Linear(in_features=64, out_features=128, bias=True)
  (7): ReLU()
  (8): Linear(in_features=128, out_features=256, bias=True)
  (9): ReLU()
  (10): Linear(in_features=256, out_features=256, bias=True)
  (11): ReLU()
  (12): Linear(in_features=256, out_features=512, bias=True)
)
  0%|          | 0/6 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "evaluate_iou_loc.py", line 339, in <module>
    evaluate(feat_dir, output_path, ae_ckpt_path, json_folder, mask_thresh, args.encoder_dims, args.decoder_dims, logger)
  File "evaluate_iou_loc.py", line 260, in evaluate
    restored_feat = model.decode(sem_feat.flatten(0, 2))
  File "../autoencoder/model.py", line 45, in decode
    x = x / x.norm(dim=-1, keepdim=True)
RuntimeError: CUDA out of memory. Tried to allocate 4.13 GiB (GPU 0; 10.75 GiB total capacity; 4.50 GiB already allocated; 2.92 GiB free; 6.58 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
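The last line of the traceback suggests one mitigation itself: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. One way to try this (the 128 MiB value below is an arbitrary starting point, not a verified setting for this repo) is to prefix the launch command, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 sh eval.sh`, or to set the variable at the very top of `evaluate_iou_loc.py`:

```python
import os

# PyTorch reads PYTORCH_CUDA_ALLOC_CONF when the CUDA caching allocator is
# first initialized, so this must run before the first CUDA allocation.
# 128 MiB is an example value to tune, not a recommendation from the repo.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```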

Any help? Thanks! GPU: 2080 Ti.
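Since the failing allocation happens when the entire flattened feature map is decoded in one shot (`model.decode(sem_feat.flatten(0, 2))` materializes an N×512 output at once), a common workaround for this kind of OOM is to decode in chunks. Below is a minimal sketch, not code from the LangSplat repo: `decode_in_chunks` and `chunk_size` are hypothetical names, and `model`/`sem_feat` refer to the variables already in `evaluate_iou_loc.py`.

```python
import torch

@torch.no_grad()
def decode_in_chunks(model, feats, chunk_size=2**18):
    """Decode a large (N, C) feature tensor in fixed-size chunks so that the
    peak VRAM use of the decoder is bounded by one chunk at a time."""
    outputs = []
    for chunk in feats.split(chunk_size, dim=0):
        outputs.append(model.decode(chunk))
    return torch.cat(outputs, dim=0)

# Usage, replacing the original line in evaluate_iou_loc.py:
#   restored_feat = model.decode(sem_feat.flatten(0, 2))
restored_feat = decode_in_chunks(model, sem_feat.flatten(0, 2))
```

Halving `chunk_size` roughly halves the peak memory of the decode step; moving each chunk's output to CPU with `.cpu()` before concatenating would cut GPU memory further at the cost of speed.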