mbzuai-metaverse / VOODOO3D-official

Official implementation for the paper "VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment"
https://p0lyfish.github.io/voodoo3d
MIT License

What is the resolution compared to Thin-Plate-Spline-Motion-Model #1

Closed: FurkanGozukara closed this issue 6 months ago

FurkanGozukara commented 6 months ago

I think yours is the same as Thin-Plate-Spline-Motion-Model.

Their resolution was around 256px, which made it nearly useless.

I wonder what your resolution is?

Here is a test I made with Thin Plate Spline:

https://github.com/MBZUAI-Metaverse/VOODOO3D-official/assets/19240467/06144a0c-d064-492d-b799-0879d4d3793b

P0lyFish commented 6 months ago

Hi, thank you for your interest! To answer your question, our model produces 512px resolution images.
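
For reference, here is a minimal sketch (not from this repo) for confirming the resolution of generated frames on your side. It assumes the frames are saved as PNG files; the directory name is just a placeholder.

```python
# Minimal sketch to verify the resolution of generated frames.
# Assumes frames are saved as PNGs; the directory path is a placeholder.
from pathlib import Path
from PIL import Image

def check_frame_resolutions(frame_dir: str) -> None:
    """Print the width x height of each PNG frame in frame_dir."""
    for frame_path in sorted(Path(frame_dir).glob("*.png")):
        with Image.open(frame_path) as img:
            print(f"{frame_path.name}: {img.width}x{img.height}")

# Frames produced by a 512px model should report 512x512.
# check_frame_resolutions("results/reenactment_frames")
```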

While both methods (our method and the one you linked) are designed for the facial reenactment task, there are some key differences in techniques and features.

Check out our project page for more results: https://p0lyfish.github.io/voodoo3d/

[1] Drobyshev, Nikita, et al. "Megaportraits: One-shot megapixel neural head avatars." Proceedings of the 30th ACM International Conference on Multimedia. 2022.

FurkanGozukara commented 6 months ago

@P0lyFish thank you so much for the reply.

Will we be able to use yours with the pretrained model and code?

P0lyFish commented 6 months ago

The paper for this project is under submission. We will release all the code and pretrained models (including our reimplementations of MegaPortraits and Lp3D, which are closed source) after the paper is accepted.

FurkanGozukara commented 6 months ago

The paper for this project is under submission. We will release all the code and pretrained models (including our reimplementations of MegaPortraits and Lp3D, which are closed source) after the paper is accepted.

Sad. I am a computer engineer, and my submissions used to take about a year to get accepted. A year is forever in AI.

P0lyFish commented 6 months ago

It wouldn't take a whole year. I think the paper decision will be released at the end of January 2024, so the code should be available around that time too.

FurkanGozukara commented 6 months ago

It wouldn't take a whole year. I think the paper decision will be released at the end of January 2024, so the code should be available around that time too.

I hope so too; hopefully it will be fast.

FurkanGozukara commented 6 months ago

Alibaba released DreamTalk, and it is really low quality, and they have a forced watermark lol

https://github.com/MBZUAI-Metaverse/VOODOO3D-official/assets/19240467/786f6d66-8193-41b4-9681-0facea342140

JZArray commented 2 weeks ago

The paper for this project is under submission. We will release all the code and pretrained models (including our reimplementations of MegaPortraits and Lp3D, which are closed source) after the paper is accepted.

@P0lyFish could you release the code for MegaPortraits?