[Open] zhouhualei opened this issue 8 months ago
Hi! It's a great ComfyUI wrapper for Champ. But for the Semantic_map, we actually use a different ColorMap and a different part segmentation. This "gap" may introduce severe artifacts for diffusion models. I also suspect the predicted normal map and pose for this example are severely degraded; would you like to show them here?
@Leoooo333 OK, the following are the motion sequences:
Hi there, thanks for your detailed description. It's a great ComfyUI wrapper for Champ. But for the Semantic_map, we actually use a different ColorMap and a different part segmentation. This "gap" may introduce severe artifacts for diffusion models. We are going to release our SMPL & Rendering code, which will close this gap.
How did you obtain these condition maps? I think there may be a large domain gap between these maps and ours (which are produced from SMPL).
@suqingkun Shall we just communicate in Chinese? I produced these results by following your video tutorial, so I'm not sure where the problem is: https://www.youtube.com/watch?v=cbElsTBv2-A
Here is a full screenshot of my workflow. Is there any other information I should provide?
@zhouhualei Hello. The README notes that this ComfyUI wrapper is unofficial, so it does have some issues, mainly that its semantic map uses a different color map and part segmentation from ours. In your example the subject is small and the motion is too fast; I can see that several of the generated condition maps are temporally discontinuous, with heavy artifacts. These come from the limitations of the normal and depth prediction networks, not from Champ itself. I suggest preparing your data by following the tutorial in our newly released SMPL & Rendering code.
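To illustrate the color-map gap mentioned above: if the wrapper renders part labels with one palette and Champ was trained on another, the semantic map can be remapped label-by-label before inference. This is only a hypothetical sketch; the RGB values below are placeholders, not the actual palettes used by Champ or the ComfyUI wrapper.

```python
# Hypothetical sketch of recoloring a part-segmentation semantic map
# from a source palette to a target palette. The palettes below are
# illustrative placeholders, NOT Champ's real color scheme.
import numpy as np

SRC_PALETTE = {0: (0, 0, 0), 1: (255, 0, 0), 2: (0, 255, 0)}    # wrapper colors (assumed)
DST_PALETTE = {0: (0, 0, 0), 1: (128, 64, 0), 2: (0, 64, 128)}  # target colors (assumed)

def remap_semantic_map(img: np.ndarray) -> np.ndarray:
    """Replace each source part color with the corresponding target color.

    img: (H, W, 3) uint8 array whose pixels exactly match SRC_PALETTE entries.
    """
    out = img.copy()
    for label, src_rgb in SRC_PALETTE.items():
        # Boolean (H, W) mask of pixels equal to this part's source color.
        mask = np.all(img == np.array(src_rgb, dtype=img.dtype), axis=-1)
        out[mask] = DST_PALETTE[label]
    return out
```

Exact color matching like this only works on clean rendered maps; maps that have been resized or compressed would need nearest-color matching instead.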
@Leoooo333 Have the preprocessing code and tutorial been released yet? It looks like they were delayed on your roadmap.
We released it earlier, but the environment setup wasn't friendly for users in mainland China, so we reverted it. The current version is in a pending PR; you can also fork this branch directly: https://github.com/Leoooo333/champ/tree/feature/data_processors
OK, I'll give it a try. Thanks!
Here are my test image and video:
https://github.com/fudan-generative-vision/champ/assets/968848/86637a99-5e30-45cc-9052-45f7971c53d0
And here is the result video:
https://github.com/fudan-generative-vision/champ/assets/968848/25df2f66-e1eb-4488-82cf-7c8258349f7a
So, what's the problem? Are there any requirements for the test data? Thanks.