MoyGcc / vid2avatar

Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition (CVPR2023)
https://moygcc.github.io/vid2avatar/

test.py not running #42

Open · japji313 opened this issue 11 months ago

japji313 commented 11 months ago
I trained the model with pytorch-lightning==1.9.5.
When I run test.py, it shows this error:

test.py:10: UserWarning: 
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
  @hydra.main(config_path="confs", config_name="base")
/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/hydra.py:119: UserWarning: Future Hydra versions will no longer change working directory at job runtime by default.
See https://hydra.cc/docs/1.2/upgrades/1.1_to_1.2/changes_to_job_working_dir/ for more information.
  ret = run_job(
Global seed set to 42
Working dir: /home/prityush/Desktop/avatar/vid2avatar/outputs/Video/parkinglot
/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/connectors/accelerator_connector.py:478: LightningDeprecationWarning: Setting `Trainer(gpus=1)` is deprecated in v1.7 and will be removed in v2.0. Please use `Trainer(accelerator='gpu', devices=1)` instead.
  rank_zero_deprecation(
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
checkpoints/last.ckpt
Restoring states from the checkpoint path at checkpoints/last.ckpt
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
Loaded model weights from checkpoint at checkpoints/last.ckpt
Testing DataLoader 0:   0%|                                                                                                                                                                                | 0/42 [00:00<?, ?it/s]/home/prityush/Desktop/avatar/vid2avatar/code/v2a_model.py:217: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  num_splits = (total_pixels + pixel_per_batch -
/home/prityush/Desktop/avatar/vid2avatar/code/lib/utils/meshing.py:41: FutureWarning: marching_cubes_lewiner is deprecated in favor of marching_cubes. marching_cubes_lewiner will be removed in version 0.19
  verts, faces, normals, values = measure.marching_cubes_lewiner(
Error executing job with overrides: []
Traceback (most recent call last):
  File "test.py", line 43, in <module>
    main()
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/main.py", line 94, in decorated_main
    _run_hydra(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/utils.py", line 394, in _run_hydra
    _run_app(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/utils.py", line 457, in _run_app
    run_and_report(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/utils.py", line 223, in run_and_report
    raise ex
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/utils.py", line 220, in run_and_report
    return func()
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/utils.py", line 458, in <lambda>
    lambda: hydra.run(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 132, in run
    _ = ret.return_value
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/core/utils.py", line 260, in return_value
    raise self._return_value
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/hydra/core/utils.py", line 186, in run_job
    ret.return_value = task_function(task_cfg)
  File "test.py", line 39, in main
    trainer.test(model, testset, ckpt_path=checkpoint)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 794, in test
    return call._call_and_handle_interrupt(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/call.py", line 38, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in _test_impl
    results = self._run(model, ckpt_path=self.ckpt_path)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1112, in _run
    results = self._run_stage()
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1188, in _run_stage
    return self._run_evaluate()
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1228, in _run_evaluate
    eval_loop_results = self._evaluation_loop.run()
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/loops/dataloader/evaluation_loop.py", line 152, in advance
    dl_outputs = self.epoch_loop.run(self._data_fetcher, dl_max_batches, kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/loops/loop.py", line 199, in run
    self.advance(*args, **kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 137, in advance
    output = self._evaluation_step(**kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/evaluation_epoch_loop.py", line 234, in _evaluation_step
    output = self.trainer._call_strategy_hook(hook_name, *kwargs.values())
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1494, in _call_strategy_hook
    output = fn(*args, **kwargs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/pytorch_lightning/strategies/strategy.py", line 399, in test_step
    return self.model.test_step(*args, **kwargs)
  File "/home/prityush/Desktop/avatar/vid2avatar/code/v2a_model.py", line 268, in test_step
    model_outputs = self.model(batch_inputs)
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/prityush/Desktop/avatar/vid2avatar/code/lib/model/v2a.py", line 158, in forward
    fg_rgb_flat, others = self.get_rbg_value(points_flat, differentiable_points, view,
  File "/home/prityush/Desktop/avatar/vid2avatar/code/lib/model/v2a.py", line 237, in get_rbg_value
    _, gradients, feature_vectors = self.forward_gradient(x, pnts_c, cond, tfs, create_graph=is_training, retain_graph=is_training)
  File "/home/prityush/Desktop/avatar/vid2avatar/code/lib/model/v2a.py", line 258, in forward_gradient
    grad = torch.autograd.grad(
  File "/home/prityush/.pyenv/versions/vid_avatar/lib/python3.8/site-packages/torch/autograd/__init__.py", line 272, in grad
    return Variable._execution_engine.run_backward(
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
Testing DataLoader 0:   0%|          | 0/42 [00:07<?, ?it/s]                         
MoyGcc commented 11 months ago

This issue relates to the pytorch_lightning version; we use v1.5.7. I think that as long as you strictly follow the environment requirements, there shouldn't be a problem running the code (Colab reference: https://github.com/camenduru/vid2avatar-colab). To install PyTorch properly, I would suggest the pre-built wheels, for example for CUDA 11.1: pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 -f https://download.pytorch.org/whl/torch_stable.html. Installing the rest of the dependencies should then be fine.
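
A note on why the pytorch_lightning version matters here (this is an assumption based on general PyTorch/Lightning behavior, not something stated in the thread): newer Lightning releases run test_step under torch.inference_mode() rather than torch.no_grad(), and inference mode, unlike no_grad, cannot be re-enabled locally from inside the model. If that is what happens here, the forward pass that forward_gradient in lib/model/v2a.py differentiates is never recorded, and torch.autograd.grad fails with exactly the RuntimeError in the traceback. A minimal PyTorch sketch of the failure mode, independent of the repository code:

import torch

x = torch.randn(3, requires_grad=True)

# When gradient recording is re-enabled around the forward pass (what the
# model's gradient computation relies on at test time), autograd.grad works:
with torch.no_grad():
    with torch.enable_grad():
        out = (x ** 2).sum()
        print(torch.autograd.grad(out, x)[0])  # fine: out has a grad_fn

# When the forward pass runs with gradient recording disabled, the output has
# no grad_fn, and autograd.grad raises exactly the error seen above:
# "element 0 of tensors does not require grad and does not have a grad_fn"
with torch.no_grad():
    out = (x ** 2).sum()
print(out.requires_grad, out.grad_fn)  # False None
# torch.autograd.grad(out, x)          # would raise the RuntimeError

If this is indeed the cause, pinning pytorch-lightning==1.5.7 as suggested above avoids it, since that release evaluates under a context the test code can still re-enable gradients in.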