Closed xuedue closed 1 year ago
Maybe a torch version mismatch is the problem. I checked the warping-loss code in my conda env: the weight and input dimension sizes are the same as in your error, and it runs without any errors.
Unsqueezing the input image to [1, 3, 512, 512] may help, but I'm not sure. Please let me know if unsqueezing does not help.
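A minimal sketch of that suggestion, assuming the image arrives as an unbatched [3, 512, 512] tensor (the helper name `ensure_batched` is mine, not from the repo):

```python
import torch

def ensure_batched(image: torch.Tensor) -> torch.Tensor:
    """Conv layers expect a batched [N, C, H, W] tensor; add a batch
    dimension if the image comes in as [C, H, W]."""
    if image.dim() == 3:          # [3, 512, 512] -> [1, 3, 512, 512]
        image = image.unsqueeze(0)
    return image

img = torch.randn(3, 512, 512)
print(ensure_batched(img).shape)  # torch.Size([1, 3, 512, 512])
```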
I downloaded the corresponding models and placed them in the corresponding folders. I also ran this code on an RTX 3090, but the following error appeared without changing any code.
Loading ResNet ArcFace
Setting up [LPIPS] perceptual loss: trunk [alex], v[0.1], spatial [off]
Loading model from: /home/ubuntu/anaconda3/envs/latent3d/lib/python3.8/site-packages/lpips/weights/v0.1/alex.pth
0%| | 0/4 [00:00<?, ?it/s]Setting up PyTorch plugin "bias_act_plugin"... Done.
/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/projectors/w_projector.py:115: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
w_opt = torch.tensor(mean_w + start_w, dtype=torch.float32, device=device,
/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/projectors/w_projector.py:118: UserWarning: To copy construct from a tensor, it is recommended to use sourceTensor.clone().detach() or sourceTensor.clone().detach().requires_grad_(True), rather than torch.tensor(sourceTensor).
translation_opt = torch.tensor(start_translation, dtype=torch.float32, device=device,
Setting up PyTorch plugin "upfirdn2d_plugin"... Done.
| 0/400 [00:00<?, ?it/s] 0%| | 0/400 [00:01<?, ?it/s] 0%| | 0/4 [00:10<?, ?it/s]
Traceback (most recent call last):
File "scripts/run_pti.py", line 60, in <module>
run_PTI(run_name='', use_wandb=False, use_multi_id_training=False)
File "scripts/run_pti.py", line 54, in run_PTI
coach.train(P, E)
File "/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/coaches/single_id_coach.py", line 50, in train
w_pivot, freezed_cam = self.calc_inversions(image, image_name, cam_encoder, e4e_encoder, folder_dir)
File "/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/coaches/base_coach.py", line 86, in calc_inversions
ws, cam = w_projector.project(self.G, id_image, device=torch.device(global_config.device), w_avg_samples=5000,
File "/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/projectors/w_projector.py", line 204, in project
warp_loss, test_img = calc_warping_loss(ws_clone, canonical_cam_clone, pred_ext, init_ext, intrinsic, pred_depths, target_images_contiguous, \
File "/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/warping_loss.py", line 35, in calc_warping_loss
torch_target_features = get_features(target_images, torch_vgg, layers)
File "/home/ubuntu/Documents/ruihua/StyleGAN/Code/3DGAN-Inversion-main/./training/warping_loss.py", line 79, in get_features
x1 = layer_list[0](x)
File "/home/ubuntu/anaconda3/envs/latent3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/latent3d/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 446, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/home/ubuntu/anaconda3/envs/latent3d/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 442, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 3, 3, 3], but got 3-dimensional input of size [3, 512, 512] instead
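For reference, a minimal sketch of the shape rule behind this error: the weight [64, 3, 3, 3] is a conv layer with 3 input and 64 output channels (likely the first VGG conv, given `torch_vgg` in the traceback), which expects a batched [N, C, H, W] input rather than a bare [3, 512, 512] image. On older torch versions the 3-D input raises exactly this RuntimeError; adding a batch dimension fixes it.

```python
import torch
import torch.nn as nn

# Same weight shape as in the error message: [64, 3, 3, 3]
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3)

img = torch.randn(3, 512, 512)   # unbatched image, as in the traceback
out = conv(img.unsqueeze(0))     # add the batch dim -> [1, 3, 512, 512]
print(out.shape)                 # torch.Size([1, 64, 510, 510]), no padding
```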