Thanks for sharing your great work. The proposed loss is very valuable for my work.
However, I always get an OOM error when calculating gt_unity_index_volume. The shape of gt_index_volume is torch.Size([24, 118, 12, 40]) and the shape of depth_gt_volume is torch.Size([24, 118, 12, 40]). Do you know what causes this error? Are the batch size and the number of depth channels too large?
https://github.com/prstrive/UniMVSNet/blob/7c1c9862cc6f2eb0b74c0569d58826d9416195b5/loss.py#L67
gt_unity_index_volume[gt_index_volume] = 1.0 - (depth_gt_volume[gt_index_volume] - depth_values[gt_index_volume]) / interval
RuntimeError: CUDA out of memory. Tried to allocate 573.65 GiB (GPU 0; 39.59 GiB total capacity; 2.81 GiB already allocated; 16.61 GiB free; 3.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
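For reference, here is a minimal standalone sketch of what I understand that line to be doing, using the tensor shapes from my run. The mask pattern and the scalar interval are assumptions on my side (I don't know the exact shape of interval in loss.py), so it probably does not reproduce the OOM, but it shows the operation I mean:

```python
# Minimal sketch of the masked assignment in loss.py L67, with the shapes from my run.
# NOTE: the mask contents and the scalar `interval` are assumptions, not the real values.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
B, D, H, W = 24, 118, 12, 40  # batch, depth hypotheses, height, width from my run

depth_gt_volume = torch.rand(B, D, H, W, device=device)
depth_values = torch.rand(B, D, H, W, device=device)

# Dummy boolean index volume: mark one depth hypothesis per pixel as the GT bin.
gt_index_volume = torch.zeros(B, D, H, W, dtype=torch.bool, device=device)
gt_index_volume[:, 0] = True

interval = 2.5  # assumed to be a scalar here

gt_unity_index_volume = torch.zeros_like(depth_gt_volume)
gt_unity_index_volume[gt_index_volume] = 1.0 - (
    depth_gt_volume[gt_index_volume] - depth_values[gt_index_volume]
) / interval
print(gt_unity_index_volume.shape)  # torch.Size([24, 118, 12, 40])
```

In isolation these tensors are only about 1.36M floats each (24·118·12·40), i.e. a few MB, so I don't understand where the 573 GiB allocation comes from.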