tijiang13 / InstantAvatar

InstantAvatar: Learning Avatars from Monocular Video in 60 Seconds (CVPR 2023)
376 stars · 32 forks

AttributeError: 'SMPLOutput' object has no attribute 'A' #23

Closed reaper19991110 closed 1 year ago

reaper19991110 commented 1 year ago

Thanks for your amazing work!!

When I run the demo, `AttributeError: 'SMPLOutput' object has no attribute 'A'` is raised.

I printed the `SMPLOutput` object (screenshot attached).

Print statements I added to both the `__init__` and `forward` functions of the `SMPL` class produce no output, so I cannot determine where the problem lies.

begin train
Global seed set to 42
Switch to /root/InstantAvatar-master/outputs/peoplesnapshot/demo/male-3-casual
[train] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_train.npz
[val] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_val.npz
[test] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_test.npz
model
[2023-05-30 19:09:31,591][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmpcxqy_y55
[2023-05-30 19:09:31,592][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmpcxqy_y55/_remote_module_non_sriptable.py
model2
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
Saving configs.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]

  | Name       | Type       | Params
------------------------------------------
0 | net_coarse | NeRFNGPNet | 13.0 M
1 | loss_fn    | NGPLoss    | 14.7 M
2 | evaluator  | Evaluator  | 2.5 M 
------------------------------------------
13.0 M    Trainable params
17.2 M    Non-trainable params
30.2 M    Total params
120.893   Total estimated model params size (MB)
Epoch 0:   0%|                                                                                                                                                                | 0/114 [00:00<?, ?it/s]Error executing job with overrides: ['dataset=peoplesnapshot/male-3-casual', 'experiment=demo']
Traceback (most recent call last):
  File "train.py", line 57, in <module>
    main()
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/main.py", line 48, in decorated_main
    _run_hydra(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 377, in _run_hydra
    run_and_report(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
    raise ex
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 378, in <lambda>
    lambda: hydra.run(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 111, in run
    _ = ret.return_value
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "train.py", line 51, in main
    trainer.fit(model)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 738, in fit
    self._call_and_handle_interrupt(
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 683, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 773, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1195, in _run
    self._dispatch()
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1275, in _dispatch
    self.training_type_plugin.start_training(self)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 202, in start_training
    self._results = trainer.run_stage()
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1285, in run_stage
    return self._run_train()
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/trainer/trainer.py", line 1315, in _run_train
    self.fit_loop.run()
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/fit_loop.py", line 234, in advance
    self.epoch_loop.run(data_fetcher)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/epoch/training_epoch_loop.py", line 193, in advance
    batch_output = self.batch_loop.run(batch, batch_idx)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/batch/training_batch_loop.py", line 90, in advance
    outputs = self.manual_loop.run(split_batch, batch_idx)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/base.py", line 145, in run
    self.advance(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/loops/optimization/manual_loop.py", line 111, in advance
    training_step_output = self.trainer.accelerator.training_step(step_kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/accelerators/accelerator.py", line 216, in training_step
    return self.training_type_plugin.training_step(*step_kwargs.values())
  File "/root/miniconda3/lib/python3.8/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 213, in training_step
    return self.model.training_step(*args, **kwargs)
  File "/root/InstantAvatar-master/instant_avatar/models/DNeRF.py", line 142, in training_step
    self.deformer.prepare_deformer(batch)
  File "/root/InstantAvatar-master/instant_avatar/deformers/snarf_deformer.py", line 101, in prepare_deformer
    self.initialize(smpl_params["betas"], smpl_params["betas"].device)
  File "/root/InstantAvatar-master/instant_avatar/deformers/snarf_deformer.py", line 77, in initialize
    self.tfs_inv_t = torch.inverse(smpl_outputs.A.float().detach())
AttributeError: 'SMPLOutput' object has no attribute 'A'
Exception ignored in: <function tqdm.__del__ at 0x7f2d35d8cf70>
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1152, in __del__
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1306, in close
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1499, in display
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1155, in __str__
  File "/root/miniconda3/lib/python3.8/site-packages/tqdm/std.py", line 1457, in format_dict
TypeError: cannot unpack non-iterable NoneType object
Global seed set to 42
Switch to /root/InstantAvatar-master/outputs/peoplesnapshot/demo/male-3-casual
[train] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_train.npz
[val] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_val.npz
[test] Loading from /root/InstantAvatar-master/data/PeopleSnapshot/male-3-casual/poses/anim_nerf_test.npz
__init__.py
[2023-05-30 19:09:46,239][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmp262kvcfz
[2023-05-30 19:09:46,239][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmp262kvcfz/_remote_module_non_sriptable.py
Error executing job with overrides: ['dataset=peoplesnapshot/male-3-casual', 'experiment=demo']
Traceback (most recent call last):
  File "animate.py", line 121, in <module>
    main()
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/main.py", line 48, in decorated_main
    _run_hydra(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 377, in _run_hydra
    run_and_report(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
    raise ex
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/utils.py", line 378, in <lambda>
    lambda: hydra.run(
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/_internal/hydra.py", line 111, in run
    _ = ret.return_value
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/root/miniconda3/lib/python3.8/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "animate.py", line 93, in main
    print("Resume from", checkpoints[-1])
IndexError: list index out of range
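For reference, this second `IndexError` is a follow-on failure: training crashed before saving anything, so the checkpoint list that `animate.py` indexes with `checkpoints[-1]` is empty. A minimal sketch of a friendlier guard (the helper name and glob pattern here are hypothetical illustrations, not the repo's actual code):

```python
import glob
import os


def latest_checkpoint(ckpt_dir):
    """Return the newest checkpoint in ckpt_dir, failing with a clear message.

    Hypothetical helper: indexing checkpoints[-1] directly raises a bare
    IndexError when no checkpoint was ever written; raising a descriptive
    error makes the real cause (a failed training run) obvious.
    """
    checkpoints = sorted(glob.glob(os.path.join(ckpt_dir, "*.ckpt")))
    if not checkpoints:
        raise FileNotFoundError(
            f"No checkpoint found in {ckpt_dir!r} - did training finish successfully?"
        )
    return checkpoints[-1]
```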
tijiang13 commented 1 year ago

Hi,

In my code, I have made several modifications to the original SMPL implementation, as demonstrated in this link.

It appears that the code may not be using the correct SMPL package, which would explain why no messages are printed from the constructor and `forward` functions of `SMPL`. I recommend double-checking the import statements: you may have written `from smplx import SMPL` instead of `from .smpl import SMPL` somewhere in your code.
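One quick way to confirm which class Python actually resolved is to inspect the imported symbol. A minimal sketch of the check (a stdlib class stands in below, since `smplx` may not be installed everywhere; apply the same two lines to `SMPL`):

```python
import inspect

# Applied to SMPL, these two lines reveal whether Python resolved the import
# to the repository's modified copy (a path inside InstantAvatar-master) or
# to the installed smplx package (a path inside site-packages/smplx).
from json import JSONDecoder  # stand-in for: from .smpl import SMPL

print(JSONDecoder.__module__)        # dotted module name, e.g. "json.decoder"
print(inspect.getfile(JSONDecoder))  # absolute path of the defining file
```

If the printed path points into `site-packages/smplx`, the unmodified class (without the `A` attribute on its output) is the one being used.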

Best, Tianjian

reaper19991110 commented 1 year ago

Thank you very much for your response. I had previously changed the way I imported SMPL because of other errors, which caused this issue. Now the demo runs successfully. By the way, the results are really fast.