Open TZYSJTU opened 1 day ago
My understanding is that this is a rasterization process, and that the skeleton joints can be bound to SMPL and then rasterized into this feature map.
Which one, the former or the latter?
The rasterization result is the former one, as shown in the pipeline figure.
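As a rough illustration of what rasterizing SMPL joints into a feature map could look like (a minimal sketch, not the paper's actual code: the intrinsics, the per-joint features, and the `rasterize_joints` helper are all hypothetical), here is a NumPy point-splat version that projects camera-space joints and writes each joint's feature vector into the nearest pixel with a simple z-buffer:

```python
import numpy as np

def rasterize_joints(joints_3d, feats, K, H, W):
    """Splat per-joint feature vectors into an (H, W, C) feature map.

    joints_3d : (J, 3) camera-space joint positions (z > 0)
    feats     : (J, C) feature vector attached to each joint
    K         : (3, 3) pinhole intrinsics
    Overlapping joints keep the nearest one (smallest z).
    """
    J, C = feats.shape
    fmap = np.zeros((H, W, C), dtype=np.float32)
    depth = np.full((H, W), np.inf)

    # Project to pixel coordinates: u = fx*x/z + cx, v = fy*y/z + cy
    uvw = (K @ joints_3d.T).T          # (J, 3) homogeneous pixel coords
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    px = np.round(uv).astype(int)

    for j in range(J):
        u, v = px[j]
        z = joints_3d[j, 2]
        if 0 <= u < W and 0 <= v < H and z < depth[v, u]:
            depth[v, u] = z            # simple z-buffer test
            fmap[v, u] = feats[j]
    return fmap

# Toy example: 2 joints carrying one-hot, color-like features
K = np.array([[100., 0., 32.], [0., 100., 32.], [0., 0., 1.]])
joints = np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 2.0]])
feats = np.array([[1., 0., 0.], [0., 1., 0.]])
fmap = rasterize_joints(joints, feats, K, H=64, W=64)
```

A real implementation (e.g. via pytorch3d) would rasterize the full SMPL mesh with interpolation rather than splatting isolated points, but the projection-then-write structure is the same.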
These are the most closely related papers; this one in particular: https://arxiv.org/abs/2408.07481
- DeCo: Decoupled Human-Centered Diffusion Video Editing with Motion Consistency
- ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis
- One-Shot Learning Meets Depth Diffusion in Multi-Object Videos
- AMG: Avatar Motion Guided Video Generation
- Scene123: One Prompt to 3D Scene Generation via Video-Assisted and Consistency-Enhanced MAE
I started this repo; it has some rasterization code using pytorch3d: https://github.com/johndpope/MIMO-hack/blob/main/main.py
The other way is to use mitsuba3; I started playing around with that the other day here: https://github.com/johndpope/DiPIR-hack
This looks (almost) helpful for SMPL-X stuff: https://github.com/RammusLeo/DPMesh
UPDATE: check out https://github.com/zshyang/amg
DreamWaltz-G: https://yukun-huang.github.io/DreamWaltz-G/ (v1 renderer: https://github.com/IDEA-Research/DreamWaltz/blob/main/core/nerf/renderer.py)
My understanding is that this is a rasterization process, and that the skeleton joints can be bound to SMPL and then rasterized into this feature map.
Yes, the pose representation is an interpolated feature map produced via rasterization, visualized as the former one.
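To make "interpolated feature map via rasterization" concrete, here is a hedged sketch (my own toy code, assuming nothing about the paper's renderer) of the barycentric interpolation a mesh rasterizer performs: each pixel inside a projected triangle receives a weighted blend of the per-vertex features, which is how per-vertex SMPL attributes become a smooth 2D map:

```python
import numpy as np

def rasterize_triangle(tri_2d, vfeats, H, W):
    """Fill pixels inside a 2D triangle with barycentric-interpolated features.

    tri_2d : (3, 2) vertex pixel positions
    vfeats : (3, C) per-vertex feature vectors
    """
    C = vfeats.shape[1]
    fmap = np.zeros((H, W, C), dtype=np.float32)
    (x0, y0), (x1, y1), (x2, y2) = tri_2d
    area = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # signed area * 2
    for v in range(H):
        for u in range(W):
            # Barycentric coordinates of pixel (u, v) w.r.t. the triangle
            w1 = ((u - x0) * (y2 - y0) - (x2 - x0) * (v - y0)) / area
            w2 = ((x1 - x0) * (v - y0) - (u - x0) * (y1 - y0)) / area
            w0 = 1.0 - w1 - w2
            if w0 >= 0 and w1 >= 0 and w2 >= 0:  # inside test
                fmap[v, u] = w0 * vfeats[0] + w1 * vfeats[1] + w2 * vfeats[2]
    return fmap

tri = np.array([[2., 2.], [12., 2.], [2., 12.]])
vfeats = np.eye(3)  # one-hot feature per vertex
fmap = rasterize_triangle(tri, vfeats, H=16, W=16)
```

Pixels at the vertices reproduce that vertex's feature exactly, and interior pixels blend the three; a library rasterizer (pytorch3d's `MeshRasterizer`, for instance) does the same per-face interpolation in batch on the GPU.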
I have another question: which pretrained repose model do you use in your work?
What's the actual pose image F_t that you render: the first, colored-image type, or the skeleton-like type? In other words, is the final rendered pose map the colored one in the pipeline figure, or the skeleton in the later figure? This really confuses me!