graphdeco-inria / gaussian-splatting

Original reference implementation of "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/

Error rtx 4080, Expected 4-dimensional input for 4-dimensional weight but got 3-dimensional input of size #320

Open aphixe opened 1 year ago

aphixe commented 1 year ago

C:\Users\aphixe\anaconda3\envs\gaussian_splatting\lib\site-packages\torch\cuda\__init__.py:106: UserWarning: NVIDIA GeForce RTX 4080 with CUDA capability sm_89 is not compatible with the current PyTorch installation. The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_61 sm_70 sm_75 compute_37. If you want to use the NVIDIA GeForce RTX 4080 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))
Output folder: ./output/69948d25-2 [14/10 18:51:11]
Tensorboard not available: not logging progress [14/10 18:51:11]
Reading camera 11/11 [14/10 18:51:11]
Converting point3d.bin to .ply, will happen only the first time you open the scene. [14/10 18:51:11]
Loading Training Cameras [14/10 18:51:11]
Loading Test Cameras [14/10 19:05:40]
Number of points at initialisation : 8040 [14/10 19:05:40]
Training progress:   0%| | 0/20000 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train.py", line 216, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 89, in training
    loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * (1.0 - ssim(image, gt_image))
  File "E:\Documents\gaussian-splatting\utils\loss_utils.py", line 41, in ssim
    return _ssim(img1, img2, window, window_size, channel, size_average)
  File "E:\Documents\gaussian-splatting\utils\loss_utils.py", line 44, in _ssim
    mu1 = F.conv2d(img1, window, padding=window_size // 2, groups=channel)
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3, 1, 11, 11], but got 3-dimensional input of size [3, 610, 1402] instead
Training progress:   0%|
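
For reference, the UserWarning at the top means this PyTorch wheel was built without sm_89 kernels, so the RTX 4080 cannot run its CUDA ops regardless of the shape error below. Standard guidance from https://pytorch.org/get-started/locally/ (not specific to this thread) is that 40xx-series cards need a CUDA 11.8 or newer build, e.g.:

pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118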

Moult commented 1 year ago

Running into the same problem with NVIDIA GeForce GTX 1650:

nvidia-smi
Tue Oct 17 16:00:30 2023       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.113.01             Driver Version: 535.113.01   CUDA Version: 12.2     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1650        Off | 00000000:01:00.0  On |                  N/A |
| 24%   40C    P8              N/A /  75W |   1009MiB /  4096MiB |      1%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+

17373355 commented 1 year ago

Running into the same bug here with NVIDIA GTX 166...

grgkopanas commented 1 year ago

Can you print the shapes of the tensors (both the rendered image and the ground truth) before feeding them into ssim?
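
A minimal way to check (a sketch; image, gt_image, Ll1, and opt are the names from the traceback above, and the exact train.py line varies by version):

# In train.py, just before the loss computation that calls ssim:
print("rendered image:", image.shape)    # a bare [3, H, W] here triggers the conv2d error
print("ground truth:  ", gt_image.shape)
loss = (1.0 - opt.lambda_dssim) * Ll1 + opt.lambda_dssim * (1.0 - ssim(image, gt_image))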

17373355 commented 1 year ago

Edit: Just for reference. I was too excited at the time, so I said "solved." The project runs smoothly in my environment (a ThinkStation), but my GPU only supports very small datasets, so I suspect this might not solve the cases using RTX 40xx series cards...

My initial error report: RuntimeError: Expected 4-dimensional input for 4-dimensional weight [3, 1, 11, 11], but got 3-dimensional input of size [3, 1905, 1073] instead

Solution: add the following lines at the top of _ssim in loss_utils.py:

def _ssim(img1, img2, window, window_size, channel, size_average=True):
    # ADDED by R.Guo: restore the batch dimension that F.conv2d expects
    # print(img1.shape)
    img1 = img1.view(1, 3, img1.shape[1], img1.shape[2])
    img2 = img2.view(1, 3, img2.shape[1], img2.shape[2])
    # print(img1.shape)
    ...  # rest of the function unchanged
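
For context, F.conv2d only accepts batched NCHW input, which is why the unbatched [3, H, W] image fails. A standalone sketch of the mismatch and the fix (shapes taken from the error message above):

import torch
import torch.nn.functional as F

img = torch.rand(3, 610, 1402)       # [C, H, W] image, no batch dimension
window = torch.rand(3, 1, 11, 11)    # depthwise SSIM window: one 11x11 kernel per channel

# F.conv2d(img, window, padding=5, groups=3)   # RuntimeError: expects 4-D input
mu = F.conv2d(img.unsqueeze(0), window, padding=5, groups=3)
print(mu.shape)                      # torch.Size([1, 3, 610, 1402])

unsqueeze(0) is equivalent to the view(1, 3, H, W) patch above, without hardcoding the channel count.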

PyTorch version: pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 torchaudio==0.9.0 -f https://download.pytorch.org/whl/torch_stable.html

This is my NVIDIA:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 457.85       Driver Version: 457.85       CUDA Version: 11.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce GTX 166...  WDDM | 00000000:01:00.0  On |                  N/A |
| 49%   61C    P2    43W / 125W |    915MiB /  6144MiB |     37%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

aphixe commented 1 year ago

I tried adding the code, and it gets stuck at the camera-loading part. I don't know about the PyTorch version, as I'm just using the recommended environment file.

manuelonthemic commented 11 months ago

Can confirm: I added the code suggested by 17373355 but am getting this error, running pytorch 1.9.0+cu111 and torchvision 0.10.0+cu111 as mentioned above:

(gaussian_splatting) C:\Users\matdi\Pictures\blender\gaussian splatting\gaussian-splatting>python train.py -s data\bureau1\bureau1
Optimizing
Output folder: ./output/ca53e87a-e [21/11 10:30:17]
Tensorboard not available: not logging progress [21/11 10:30:17]
Reading camera 448/448 [21/11 10:30:19]
Loading Training Cameras [21/11 10:30:19]
[ INFO ] Encountered quite large input images (>1.6K pixels width), rescaling to 1.6K.
 If this is not desired, please explicitly specify '--resolution/-r' as 1 [21/11 10:30:19]
Traceback (most recent call last):
  File "train.py", line 219, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 35, in training
    scene = Scene(dataset, gaussians)
  File "C:\Users\matdi\Pictures\blender\gaussian splatting\gaussian-splatting\scene\__init__.py", line 73, in __init__
    self.train_cameras[resolution_scale] = cameraList_from_camInfos(scene_info.train_cameras, resolution_scale, args)
  File "C:\Users\matdi\Pictures\blender\gaussian splatting\gaussian-splatting\utils\camera_utils.py", line 58, in cameraList_from_camInfos
    camera_list.append(loadCam(args, id, c, resolution_scale))
  File "C:\Users\matdi\Pictures\blender\gaussian splatting\gaussian-splatting\utils\camera_utils.py", line 52, in loadCam
    image_name=cam_info.image_name, uid=id, data_device=args.data_device)
  File "C:\Users\matdi\Pictures\blender\gaussian splatting\gaussian-splatting\scene\cameras.py", line 57, in __init__
    self.camera_center = self.world_view_transform.inverse()[3, :3]
RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling `cusolverDnCreate(handle)`

This is my GPU:

+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 546.12                 Driver Version: 546.12       CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                     TCC/WDDM  | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce RTX 4090      WDDM  | 00000000:26:00.0  On |                  Off |
|  0%   47C    P8              30W / 450W |   1489MiB / 24564MiB |     16%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
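
This cuSOLVER failure comes from the .inverse() call in scene/cameras.py, not from the SSIM patch. One workaround sometimes suggested for cuSOLVER errors caused by CUDA/PyTorch version mismatches (a hypothetical fix, not confirmed in this thread) is to do the small 4x4 inverse on the CPU:

# scene/cameras.py, line 57 -- hypothetical workaround, not from this thread:
# invert the 4x4 matrix on the CPU to bypass the cuSOLVER path, then move it back.
self.camera_center = self.world_view_transform.cpu().inverse()[3, :3].cuda()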

zero-joke commented 11 months ago

> (quoting 17373355's solution above)

It did work. Thanks!

haofengsiji commented 9 months ago

> (quoting 17373355's solution above)

Solved (torch 1.10+cu111), so the reason was the absence of a batch dimension.

qianqjia commented 8 months ago

It did work. Thanks!

> (quoting 17373355's solution above)

Starak-x commented 7 months ago

I added these two lines of code, but they raise another error:

Traceback (most recent call last):
  File "train_llff.py", line 407, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from, args.near)
  File "train_llff.py", line 174, in training
    loss.backward()
  File "/home/starak/anaconda3/envs/3dgs/lib/python3.7/site-packages/torch/_tensor.py", line 255, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "/home/starak/anaconda3/envs/3dgs/lib/python3.7/site-packages/torch/autograd/__init__.py", line 149, in backward
    allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
RuntimeError: merge_sort: failed to synchronize: cudaErrorIllegalAddress: an illegal memory access was encountered

CUDA 11.1, PyTorch 1.9.1
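
Because CUDA launches are asynchronous, an illegal-address error surfacing in loss.backward() often originates in an earlier kernel. A common first debugging step (general CUDA practice, not a fix from this thread) is to rerun with synchronous launches so the true call site is reported:

# Run as: CUDA_LAUNCH_BLOCKING=1 python train_llff.py <your args>
# or set it in Python before any CUDA work happens:
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # must be set before CUDA is first initialized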

BenjaminJaume commented 7 months ago

> (quoting 17373355's solution above)

This worked for me too, thank you.