NVlabs / few-shot-vid2vid

PyTorch implementation for few-shot photorealistic video-to-video translation.

When will the code be released? #1

Closed MengXinChengXuYuan closed 4 years ago

MengXinChengXuYuan commented 4 years ago

Great work!!!! Any timeline to release the code?

zrhyst23 commented 4 years ago

> Great work!!!! Any timeline to release the code?

Same here; I'm looking forward to this code as well.

tcwang0509 commented 4 years ago

The code is ready for release, but we're still waiting for lawyers to resolve some legal issues. Once it's approved we can release it.

jancen0 commented 4 years ago

Are you planning on releasing a pre-trained model?

jjandnn commented 4 years ago

Immensely excited. Very much looking forward to it!

charlanalves commented 4 years ago

I'm looking forward to it!

mathfinder commented 4 years ago

I'm looking forward to it!

studabyd commented 4 years ago

A question first: what is the minimum GPU memory this work needs?

MengXinChengXuYuan commented 4 years ago

Hi, I'm currently trying to re-implement a simplified version of your work to generate novel views of static faces, as described in the paper "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models".

Instead of using AdaIN as in that paper, I tried the original SPADE first (I planned to extend it as described in your paper, with the learnable mlp_shared/mlp_gamma/mlp_beta weights, if the original SPADE gave good results). However, the results on unseen data couldn't be worse...

My question is: did you use the fully original official implementation of SPADE, or did you modify it? In particular, which normalization did you use in the generator: SyncBatchNorm (the default), BatchNorm, or InstanceNorm?
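For context, here is a minimal sketch of the SPADE block I am experimenting with, modeled on the public NVlabs/SPADE normalization layer; the `norm_type` argument and the `nhidden=128` default are my own choices for switching between the three norms, not anything from this repo:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    """Minimal SPADE block with a configurable parameter-free norm.

    Sketch only; modeled on the public NVlabs/SPADE normalization layer.
    """

    def __init__(self, norm_nc, label_nc, norm_type='instance', nhidden=128):
        super().__init__()
        # Parameter-free normalization (the choice my question is about).
        # Note: nn.SyncBatchNorm needs an initialized process group to train.
        if norm_type == 'batch':
            self.param_free_norm = nn.BatchNorm2d(norm_nc, affine=False)
        elif norm_type == 'instance':
            self.param_free_norm = nn.InstanceNorm2d(norm_nc, affine=False)
        elif norm_type == 'sync_batch':
            self.param_free_norm = nn.SyncBatchNorm(norm_nc, affine=False)
        else:
            raise ValueError(f'unknown norm_type: {norm_type}')

        # Shared conv over the conditioning map, then per-channel
        # gamma/beta maps (the mlp_shared / mlp_gamma / mlp_beta weights).
        self.mlp_shared = nn.Sequential(
            nn.Conv2d(label_nc, nhidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.mlp_gamma = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)
        self.mlp_beta = nn.Conv2d(nhidden, norm_nc, kernel_size=3, padding=1)

    def forward(self, x, segmap):
        normalized = self.param_free_norm(x)
        # Resize the conditioning map to the feature resolution.
        segmap = F.interpolate(segmap, size=x.shape[2:], mode='nearest')
        actv = self.mlp_shared(segmap)
        gamma = self.mlp_gamma(actv)
        beta = self.mlp_beta(actv)
        return normalized * (1 + gamma) + beta


# Quick shape check:
# spade = SPADE(norm_nc=64, label_nc=3)
# out = spade(torch.randn(2, 64, 32, 32), torch.randn(2, 3, 128, 128))
```

The `normalized * (1 + gamma) + beta` modulation follows the official SPADE code.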

AsifBh commented 4 years ago

Do you have an estimated timeline for releasing the code?

tcwang0509 commented 4 years ago

The code is now released.