Hi, thanks for sharing this incredible work with the community. It helps everyone collaborate on and advance lip-sync technology.
It would be great if you could share the pre-trained checkpoint of the discriminator model used in the Video Renderer module. This would help with running fine-tuning experiments.
@Weizhi-Zhong
Thanks.