YuliangXiu / ECON

[CVPR'23, Highlight] ECON: Explicit Clothed humans Optimized via Normal integration
https://xiuyuliang.cn/econ

avatarizer.py raises "RuntimeError: einsum(): subscript l has size 250 for operand 1 which does not broadcast with previously seen size 20" when hps_type is pymafx #57

Closed: xkkjiayou closed this issue 1 year ago

xkkjiayou commented 1 year ago

Hi Yuliang, amazing job! avatarizer.py errors out when hps_type is set to pymafx: "RuntimeError: einsum(): subscript l has size 250 for operand 1 which does not broadcast with previously seen size 20"


╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\anaconda3\lib\runpy.py:194 in _run_module_as_main                                             │
│                                                                                                  │
│   191 │   main_globals = sys.modules["__main__"].__dict__                                        │
│   192 │   if alter_argv:                                                                         │
│   193 │   │   sys.argv[0] = mod_spec.origin                                                      │
│ ❱ 194 │   return _run_code(code, main_globals, None,                                             │
│   195 │   │   │   │   │    "__main__", mod_spec)                                                 │
│   196                                                                                            │
│   197 def run_module(mod_name, init_globals=None,                                                │
│                                                                                                  │
│ D:\anaconda3\lib\runpy.py:87 in _run_code                                                        │
│                                                                                                  │
│    84 │   │   │   │   │      __loader__ = loader,                                                │
│    85 │   │   │   │   │      __package__ = pkg_name,                                             │
│    86 │   │   │   │   │      __spec__ = mod_spec)                                                │
│ ❱  87 │   exec(code, run_globals)                                                                │
│    88 │   return run_globals                                                                     │
│    89                                                                                            │
│    90 def _run_module_code(code, init_globals=None,                                              │
│                                                                                                  │
│ D:\xkk\human\ECON-master\ECON-master\apps\avatarizer.py:69 in <module>                           │
│                                                                                                  │
│    66 # obtain the pose params of T-pose, DA-pose, and the original pose                         │
│    67 for pose_type in ["a-pose", "t-pose", "da-pose", "pose"]:                                  │
│    68 │   smpl_out_lst.append(                                                                   │
│ ❱  69 │   │   smpl_model(                                                                        │
│    70 │   │   │   body_pose=smplx_param["body_pose"],                                            │
│    71 │   │   │   global_orient=smplx_param["global_orient"],                                    │
│    72 │   │   │   betas=smplx_param["betas"],                                                    │
│                                                                                                  │
│ D:\anaconda3\lib\site-packages\torch\nn\modules\module.py:1194 in _call_impl                     │
│                                                                                                  │
│   1191 │   │   # this function, and just call forward.                                           │
│   1192 │   │   if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks o  │
│   1193 │   │   │   │   or _global_forward_hooks or _global_forward_pre_hooks):                   │
│ ❱ 1194 │   │   │   return forward_call(*input, **kwargs)                                         │
│   1195 │   │   # Do not call functions when jit is used                                          │
│   1196 │   │   full_backward_hooks, non_full_backward_hooks = [], []                             │
│   1197 │   │   if self._backward_hooks or _global_backward_hooks:                                │
│                                                                                                  │
│ D:\xkk\human\ECON-master\ECON-master\lib\smplx\body_models.py:1316 in forward                    │
│                                                                                                  │
│   1313 │   │   shapedirs = torch.cat([self.shapedirs, self.expr_dirs], dim=-1)                   │
│   1314 │   │                                                                                     │
│   1315 │   │   if return_joint_transformation or return_vertex_transformation:                   │
│ ❱ 1316 │   │   │   vertices, joints, joint_transformation, vertex_transformation = lbs(          │
│   1317 │   │   │   │   shape_components,                                                         │
│   1318 │   │   │   │   full_pose,                                                                │
│   1319 │   │   │   │   self.v_template,                                                          │
│                                                                                                  │
│ D:\xkk\human\ECON-master\ECON-master\lib\smplx\lbs.py:194 in lbs                                 │
│                                                                                                  │
│   191 │   device, dtype = betas.device, betas.dtype                                              │
│   192 │                                                                                          │
│   193 │   # Add shape contribution                                                               │
│ ❱ 194 │   v_shaped = v_template + blend_shapes(betas, shapedirs)                                 │
│   195 │                                                                                          │
│   196 │   # Get the joints                                                                       │
│   197 │   # NxJx3 array                                                                          │
│                                                                                                  │
│ D:\xkk\human\ECON-master\ECON-master\lib\smplx\lbs.py:366 in blend_shapes                        │
│                                                                                                  │
│   363 │   # Displacement[b, m, k] = sum_{l} betas[b, l] * shape_disps[m, k, l]                   │
│   364 │   # i.e. Multiply each shape displacement by its corresponding beta and                  │
│   365 │   # then sum them.                                                                       │
│ ❱ 366 │   blend_shape = torch.einsum("bl,mkl->bmk", [betas, shape_disps])                        │
│   367 │   return blend_shape                                                                     │
│   368                                                                                            │
│   369                                                                                            │
│                                                                                                  │
│ D:\anaconda3\lib\site-packages\torch\functional.py:373 in einsum                                 │
│                                                                                                  │
│    370 │   │   _operands = operands[0]                                                           │
│    371 │   │   # recurse incase operands contains value that has torch function                  │
│    372 │   │   # in the original implementation this line is omitted                             │
│ ❱  373 │   │   return einsum(equation, *_operands)                                               │
│    374 │                                                                                         │
│    375 │   if len(operands) <= 2 or not opt_einsum.enabled:                                      │
│    376 │   │   # the path for contracting 0 or 1 time(s) is already optimized                    │
│                                                                                                  │
│ D:\anaconda3\lib\site-packages\torch\functional.py:378 in einsum                                 │
│                                                                                                  │
│    375 │   if len(operands) <= 2 or not opt_einsum.enabled:                                      │
│    376 │   │   # the path for contracting 0 or 1 time(s) is already optimized                    │
│    377 │   │   # or the user has disabled using opt_einsum                                       │
│ ❱  378 │   │   return _VF.einsum(equation, operands)  # type: ignore[attr-defined]               │
│    379 │                                                                                         │
│    380 │   path = None                                                                           │
│    381 │   if opt_einsum.is_available():                                                         │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯

RuntimeError: einsum(): subscript l has size 250 for operand 1 which does not broadcast with previously seen size 20
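For context on what the traceback shows: blend_shapes contracts betas against the model's shape displacements with torch.einsum("bl,mkl->bmk", ...), so the last dimension of betas must match the last dimension of shape_disps. Here the loaded model carries 250 shape components (shapedirs concatenated with expr_dirs in body_models.py), while PyMAF-X supplies only 20 coefficients. Below is a minimal sketch of the mismatch and of zero-padding as one possible patch; the tensor sizes come from the error message, while the 10475-vertex SMPL-X count and the padding workaround are assumptions on my part, not the project's fix:

import torch

# Sizes taken from the error message: operand 0 (betas) has l = 20,
# operand 1 (shape_disps, i.e. the model's shapedirs) has l = 250.
betas = torch.zeros(1, 20)                 # subscript "bl"
shape_disps = torch.zeros(10475, 3, 250)   # subscript "mkl"; 10475 = SMPL-X vertex count (assumed)

# This line reproduces the reported RuntimeError (l: 20 vs 250):
# torch.einsum("bl,mkl->bmk", betas, shape_disps)

# Zero-padding betas up to the model's shape-space size makes the contraction
# valid; whether padding is semantically correct depends on how the model
# was constructed, so treat this as a diagnostic sketch only.
betas_padded = torch.nn.functional.pad(
    betas, (0, shape_disps.shape[-1] - betas.shape[-1])
)
blend_shape = torch.einsum("bl,mkl->bmk", betas_padded, shape_disps)
print(blend_shape.shape)  # torch.Size([1, 10475, 3])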

YuliangXiu commented 1 year ago

Please check the updated apps/avatarizer.py

BTW, PyMAF-X no longer works well on my machine; it causes serious misalignment, and I am still debugging to locate the problem. Does PyMAF-X work for you? If not, just use PIXIE instead.
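If updating apps/avatarizer.py alone does not resolve it, one way to sidestep the mismatch is to instantiate the body model with a shape/expression space sized to what the HPS backend actually predicts: the 250 on the model side is consistent with a PIXIE-style 200 shape + 50 expression space, and 20 would match 10 + 10 coefficients. A sketch using the upstream smplx package; the model path, gender, and coefficient counts are illustrative assumptions, not ECON's actual configuration:

import torch
import smplx

# Hypothetical setup: size the model to 10 shape + 10 expression
# coefficients so it matches the predicted parameters.
smpl_model = smplx.create(
    "./data/smpl_related/models",   # hypothetical model directory
    model_type="smplx",
    gender="neutral",
    num_betas=10,
    num_expression_coeffs=10,
)

betas = torch.zeros(1, 10)        # shape coefficients from the HPS backend
expression = torch.zeros(1, 10)   # expression coefficients from the HPS backend
out = smpl_model(betas=betas, expression=expression)
print(out.vertices.shape)         # (1, 10475, 3)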