MontaEllis opened 1 month ago
Hi, thanks for your message.
We didn't release code to process ActorsHQ "directly", as it is a bit outside this project's scope. But adapting their data to our method should be possible.
Maybe @WenbWa can share some tips on how to process their data and fit it into the pipeline.
Could you explain the process?
I want to know how to obtain the texture of the mesh.
ActorsHQ only released their 4D dataset as sequences of multi-view captured images plus scan meshes, without any texture images or vertex colors. In this case, you can treat their multi-view captured images as the multi-view rendered images described in our paper:

1) Apply SAM and Graphonomy to their multi-view captured images.
2) At the first/current frame, project the obtained multi-view labels onto their scan mesh to get per-vertex scan mesh labels.
3) Render the obtained scan mesh labels back to multi-view label maps, and transfer them to the next frame using optical flow.
4) Repeat steps 2 and 3 for all following frames.
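For concreteness, here is a minimal Python sketch of steps 2 and 3, assuming a pinhole camera model with known `K`, `R`, `t` per view and using OpenCV's Farneback flow as a simple stand-in for whatever optical-flow method you prefer. The function names and the occlusion handling (or lack of it) are my own illustration, not code from this repo.

```python
import numpy as np
import cv2

def project_vertices(vertices, K, R, t):
    """Project Nx3 world-space vertices into one camera view
    (pinhole model: x_cam = R @ x_world + t, then intrinsics K)."""
    cam = vertices @ R.T + t                 # world -> camera space
    pix = cam[:, :2] / cam[:, 2:3]           # perspective divide
    pix = pix @ K[:2, :2].T + K[:2, 2]       # focal length + principal point
    return pix, cam[:, 2]                    # pixel coords and depth

def lift_labels_to_mesh(vertices, views, num_classes):
    """Step 2: vote a per-vertex label from multi-view label maps.
    `views` is a list of (label_map, K, R, t) tuples, where
    label_map is an HxW uint8 array from SAM/Graphonomy."""
    votes = np.zeros((len(vertices), num_classes), dtype=np.int64)
    for label_map, K, R, t in views:
        pix, depth = project_vertices(vertices, K, R, t)
        h, w = label_map.shape
        u = np.round(pix[:, 0]).astype(int)
        v = np.round(pix[:, 1]).astype(int)
        ok = (depth > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # NOTE: a real implementation must also z-test each vertex
        # against a rendered depth map so occluded vertices don't
        # pick up labels from surfaces in front of them.
        idx = np.where(ok)[0]
        votes[idx, label_map[v[idx], u[idx]]] += 1
    return votes.argmax(axis=1)              # per-vertex majority label

def advect_labels(prev_img, next_img, prev_labels):
    """Step 3: carry a rendered label map to the next frame with
    dense optical flow (Farneback here as a simple stand-in)."""
    g0 = cv2.cvtColor(prev_img, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(next_img, cv2.COLOR_BGR2GRAY)
    # Backward flow (next -> prev), so every next-frame pixel knows
    # where to sample its label in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(g1, g0, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    gx, gy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (gx + flow[..., 0]).astype(np.float32)
    map_y = (gy + flow[..., 1]).astype(np.float32)
    # Nearest-neighbor sampling keeps the labels discrete.
    return cv2.remap(prev_labels, map_x, map_y, cv2.INTER_NEAREST)
```

With these two pieces, the per-frame loop is: advect the previous frame's multi-view label maps, re-vote them onto the current scan mesh, and re-render them for the next frame. Re-projecting through the mesh at every frame is what keeps the labels consistent across views over time.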
How can I apply 4D parsing to ActorsHQ? Could you share the code? Thanks a lot!