ayushtewari / DFM

Implementation of "Diffusion with Forward Models: Solving Stochastic Inverse Problems Without Direct Supervision"
https://diffusion-with-forward-models.github.io/

Empty sampled points #2

Closed fzy139 closed 1 year ago

fzy139 commented 1 year ago

Hi,

Thanks for your excellent work. I tried to run the inference code on the CO3D hydrant category, but the following issue occurred:

model dit
NOT LOADING DIT WEIGHTS
feats_cond True
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
/home/user/miniconda3/envs/nr/lib/python3.9/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  warnings.warn(
/home/user/miniconda3/envs/nr/lib/python3.9/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
Loading model from: /home/user/miniconda3/envs/nr/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
batch size: 1
checkpoint path: files/co3d_model.pt
step optimizer not found
run dir: /data1/user/codes/DFM/wandb/run-20231019_184837-vnnpa2on/files
wandb: WARNING Symlinked 0 file into the W&B run directory, call wandb.save again to sync new files.
wandb: WARNING Symlinked 0 file into the W&B run directory, call wandb.save again to sync new files.
Setting up [LPIPS] perceptual loss: trunk [vgg], v[0.1], spatial [off]
Loading model from: /home/user/miniconda3/envs/nr/lib/python3.9/site-packages/lpips/weights/v0.1/vgg.pth
video_idx: 0, len: 1
Starting sample 0
Error executing job with overrides: ['dataset=CO3D', 'name=co3d_oneshot_debug_new_branch', 'ngpus=1', 'feats_cond=True', 'wandb=online', 'checkpoint_path=files/co3d_model.pt', 'use_abs_pose=True', 'sampling_type=oneshot', 'use_dataset_pose=True', 'image_size=128']
Traceback (most recent call last):
  File "/data1/user/codes/DFM/experiment_scripts/co3d_results.py", line 414, in train
    out = trainer.ema.ema_model.sample(batch_size=1, inp=inp)
  File "/home/user/miniconda3/envs/nr/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data1/user/codes/DFM/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py", line 576, in sample
    return sample_fn(
  File "/home/user/miniconda3/envs/nr/lib/python3.9/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/data1/user/codes/DFM/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py", line 468, in ddim_sample
    ctxt_rgbd, trgt_rgbd, ctxt_feats = self.model.render_ctxt_from_trgt_cam(
  File "/data1/user/codes/DFM/PixelNeRF/pixelnerf_model_cond.py", line 259, in render_ctxt_from_trgt_cam
    rgb, depth, rendered_feats = self.render_full_in_patches(
  File "/data1/user/codes/DFM/PixelNeRF/pixelnerf_model_cond.py", line 187, in render_full_in_patches
    rgb, depth, misc = self.renderer_coarse(
  File "/home/user/miniconda3/envs/nr/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data1/user/codes/DFM/PixelNeRF/renderer.py", line 301, in forward
    sigma = sigma.view(batch_size, num_rays, self.n_samples, 1)
RuntimeError: shape '[1, 1024, 64, 1]' is invalid for input of size 0

Could there be something wrong with the preprocessed data?
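
For reference, the `view` call fails because `sigma` arrived empty, i.e. zero points were sampled along the rays before the density network ran. A minimal sketch of the failure and of one plausible cause to check, degenerate near/far ray bounds derived from the preprocessed poses (the bounds check is an assumption for illustration, not the repo's API):

```python
import torch

# Minimal reproduction of the error above: the renderer received an empty
# sigma tensor (zero sampled points), so the reshape cannot succeed.
sigma = torch.empty(0)
try:
    sigma.view(1, 1024, 64, 1)
except RuntimeError as e:
    print(e)  # shape '[1, 1024, 64, 1]' is invalid for input of size 0

# Hypothetical sanity check (variable names are illustrative): if the
# near/far bounds computed from the preprocessed CO3D camera poses are
# degenerate, no points can be sampled along the rays.
near = torch.tensor([1.2])
far = torch.tensor([0.8])
if (far <= near).any():
    print("degenerate ray bounds: far <= near, so 0 points are sampled")
```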