Markfryazino / wav2lip-hq

Extension of Wav2Lip repository for processing high-quality videos.

Training without pretrained state #17

Open · nirvitarka opened this issue 2 years ago

nirvitarka commented 2 years ago

I removed the line resume_state: checkpoints/pretrained.state from train_basicsr.yml.

I then got an error about 128 x 128 dimensions, so I resized the "hq" images to 128x128 and the "lq" images to 32x32.

Now I am getting the error LQ (32, 32) is smaller than patch size (96, 96) from basicsr/data/transforms.py, line 59, in paired_random_crop.
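For context on why the resize triggers this: BasicSR's paired_random_crop derives the LQ patch size from the GT patch size divided by the upscale factor, so a 32x32 LQ image can only be cropped if that derived size is 32 or less. Below is an illustrative sketch of that relationship, not the library's exact code; the gt_size and scale values are assumptions chosen to reproduce the numbers in the error message.

```python
# Illustrative sketch of the size check behind the error (not the library's
# exact code). gt_size and scale come from train_basicsr.yml; the values
# below are assumptions that reproduce the (96, 96) in the error message.
gt_size = 384                      # assumed GT patch size in the default config
scale = 4                          # assumed upscale factor
lq_patch_size = gt_size // scale   # 96

lq_h, lq_w = 32, 32                # the resized "lq" images
if lq_h < lq_patch_size or lq_w < lq_patch_size:
    raise ValueError(
        f"LQ ({lq_h}, {lq_w}) is smaller than patch size "
        f"({lq_patch_size}, {lq_patch_size})"
    )
```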

Where exactly do I need to set this patch size? Has anyone gotten training working without the pretrained state?
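(For anyone hitting the same error: the crop size is normally controlled by gt_size in the train dataset block of the BasicSR config. A minimal sketch, assuming this repo's train_basicsr.yml follows the standard BasicSR layout; the dataset name and paths here are placeholders, not the repo's actual values.)

```yaml
# train_basicsr.yml (sketch) -- only the fields relevant to the crop size
scale: 4                      # upscale factor; LQ patch size = gt_size // scale

datasets:
  train:
    name: faces               # placeholder name
    type: PairedImageDataset
    dataroot_gt: data/hq      # placeholder path to the 128x128 "hq" images
    dataroot_lq: data/lq      # placeholder path to the 32x32 "lq" images
    gt_size: 128              # with scale 4, LQ crops become 32x32
```

With gt_size: 128 and scale: 4, paired_random_crop asks for 32x32 LQ patches, which matches the resized images; the (96, 96) in the original error suggests the default config used a larger gt_size (384 with scale 4 would give exactly 96x96 LQ patches).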

shehrum commented 2 years ago

Hi, did you manage to solve this?

nirvitarka commented 2 years ago

No, I could not make it work, and I have not tried anything else for this since then.