adrida opened this issue 4 months ago · Open
Hi @adrida, thanks for the interest. Currently, 4K4D focuses on the novel view synthesis side and uses a different rendering pipeline from that of traditional triangle meshes. Thus, for creating 3D assets directly (with such a small view count), you should look into other 3D/4D neural reconstruction methods that focus on surface quality and use human priors (with an SDF field, or by directly optimizing meshes), such as AniSDF or Relightable Avatar.
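As a concrete illustration of the SDF route mentioned above, here is a minimal sketch (not 4K4D code; the analytic sphere SDF stands in for a trained network) of the usual export step such methods rely on: sampling the SDF on a grid and extracting its zero level set as a triangle mesh with marching cubes.

```python
# Hypothetical sketch: SDF -> triangle mesh via marching cubes.
# The sphere SDF below is a stand-in for a learned SDF network.
import numpy as np
from skimage.measure import marching_cubes

def sphere_sdf(pts, radius=0.5):
    # Signed distance to a sphere centered at the origin.
    return np.linalg.norm(pts, axis=-1) - radius

# Sample the SDF on a regular grid covering [-1, 1]^3.
n = 64
xs = np.linspace(-1.0, 1.0, n)
grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid)

# Extract the zero level set as a triangle mesh.
step = xs[1] - xs[0]
verts, faces, normals, _ = marching_cubes(sdf, level=0.0, spacing=(step,) * 3)
verts -= 1.0  # marching_cubes starts at 0; shift back into [-1, 1]^3

# verts/faces can now be written out (OBJ, PLY, ...) for any mesh pipeline.
print(len(verts), "vertices,", len(faces), "triangles")
```

The resulting mesh can then be cleaned up and animated with standard tools, which is exactly what the splat/point-based 4K4D pipeline does not produce.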
I see, thanks a lot for your answer and for the references; I will check them out. Any idea on projects that would export 3D animated assets of large scenes with multiple people? I could provide more video angles as input if needed, and I don't necessarily need to edit the 3D asset. I'm just looking to "replay" the scene from different angles and navigate the 3D space (a bit like static NeRF approaches, where you can export the 3D reconstruction of an apartment and walk through it in VR).
> Any idea on projects that would export 3D animated assets of large scenes with multiple people?
Ah, for multi-person reconstruction, I'd recommend checking out CloseMocap and MultiNB (the first news item of EasyMocap).
> I could provide more video angles as input if needed, and I don't necessarily need to edit the 3D asset. I'm just looking to "replay" the scene from different angles and navigate the 3D space (a bit like static NeRF approaches, where you can export the 3D reconstruction of an apartment and walk through it in VR).
For replaying the reconstruction, there are generally two approaches:
I see, thanks a lot for sharing those projects; I will take a look.
Any way I could help with the VR feature you are planning to add to EasyVolcap? Your work is very inspiring, and I see a lot of great potential applications, especially in AR/MR. If we could reconstruct a dynamic scene in real time and render it through a Meta Quest 3 or an Apple Vision Pro, it would open up limitless possibilities.
I am not an expert in 3D reconstruction, but from the few papers/surveys I have been reading, I feel like the state of the art is very close to achieving such a thing.
Indeed, the field is advancing fast, and I also think that future is not far away.
> Any way I could help with the VR feature you are planning to add to EasyVolcap?
I can't think of a specific problem to solve right now, but we always look forward to PRs of any kind. Feel free to contribute!
Hello, thank you for the great work. I was wondering if it is possible, with the current implementation, to pass 2–3 videos of a new scene as input and get a 3D animation file that could be rendered in Blender, for example?
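For context on what a Blender-friendly export would look like: one common route (not something 4K4D produces; this is a generic sketch) is a per-frame mesh sequence, e.g. one Wavefront OBJ per frame, which Blender can load via OBJ-sequence importer add-ons such as the community "Stop Motion OBJ". The deforming toy triangle below stands in for reconstructed frames.

```python
# Hypothetical sketch: writing an animated reconstruction as a per-frame
# OBJ mesh sequence that Blender-side importers can turn into an animation.
import os
import tempfile

def write_obj(path, verts, faces):
    # Minimal Wavefront OBJ writer (OBJ face indices are 1-based).
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        for face in faces:
            f.write("f " + " ".join(str(i + 1) for i in face) + "\n")

out_dir = tempfile.mkdtemp()
faces = [(0, 1, 2)]
for frame in range(3):
    # Toy deformation: one vertex drifts upward over time.
    verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0 + 0.1 * frame, 0.0)]
    write_obj(os.path.join(out_dir, f"frame_{frame:04d}.obj"), verts, faces)

print(sorted(os.listdir(out_dir)))
# -> ['frame_0000.obj', 'frame_0001.obj', 'frame_0002.obj']
```

Formats like Alembic or USD serve the same role with better tooling, but an OBJ sequence is the simplest thing to generate and inspect by hand.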