SaiThejeshwar opened 3 years ago
The checkpoint and the config should have the same number of segments (keypoints); in your case they are 10 and 15, respectively. Either use a different checkpoint or a different config.
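For anyone checking this pairing, here is a minimal diagnostic sketch (not from the repo; the YAML key path and checkpoint layout are assumptions you may need to adjust for your files):

```python
# Sketch: inspect the segment count in a config and the modules stored in a
# checkpoint. Key names are assumptions -- verify them against your files.
import torch
import yaml

with open('config/Retrain_10segments.yaml') as f:
    config = yaml.safe_load(f)
# Guess at where the segment count lives; print the section to check.
print('config model_params:', config.get('model_params'))

checkpoint = torch.load('models/vox-15segments.pth.tar', map_location='cpu')
# fomm-style checkpoints are typically dicts of module name -> state_dict.
for name, value in checkpoint.items():
    if isinstance(value, dict) and value:
        print(name, '->', len(value), 'tensors, e.g.', next(iter(value)))
```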
Thank you for the reply.
Yes, I have already tried a different number of segments; it worked for 10 segments, and the checkpoint is saved in the log folder. But when trying to run inference with the newly fine-tuned checkpoint weights, I get a missing-key error for "blend_downsample.weight" (see image). Please guide me.
The above is thrown when I execute this block during inference:
reconstruction_module, segmentation_module = load_checkpoints(config='config/Retrain_10segments.yaml', checkpoint='log/Retrain_10segments 08-03-21 03:38:02/00000009-checkpoint.pth.tar', blend_scale=0.125, first_order_motion_model=False)
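For diagnosis, a plain-PyTorch sketch to confirm whether the saved checkpoint contains that key at all (the top-level dict layout is an assumption about fomm-style checkpoints):

```python
# Sketch: list any 'blend'-related keys in each sub-state-dict.
import torch

path = 'log/Retrain_10segments 08-03-21 03:38:02/00000009-checkpoint.pth.tar'
checkpoint = torch.load(path, map_location='cpu')

for name, value in checkpoint.items():
    if isinstance(value, dict):
        blend = [k for k in value if 'blend' in k]
        print(name, '->', blend if blend else 'no blend_* keys')
```

If nothing lists blend_downsample.weight, the checkpoint was saved without that layer; one unverified guess is that the layer is only created at inference time when blend_scale differs from 1, so training never saves it.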
The configuration file I have adapted is "vox-256-sem-10segments.yaml", with slight modifications.
Below is my config file (uploaded as a txt file).
Thank you
You are using the --supervised flag? It is not clear to me what you are trying to achieve. If the goal is to use the model in supervised mode, you should fine-tune fomm.
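For reference, the supervised part-swap invocation in this repo looks roughly like the README pattern below (flags are from memory and paths are placeholders; check the repo's README for the exact interface):

```
python part_swap.py --config config/vox-256-sem-10segments.yaml \
  --target_video path/to/target.mp4 --source_image path/to/source.png \
  --checkpoint path/to/vox-10segments.pth.tar --swap_index 2,5 --supervised
```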
My goal is to swap faces in the video. So which one should I fine-tune? There isn't any --supervised flag in the config file; where are you referring to?
Do you mean I need to fine-tune fomm?
To be more specific, I am aiming at this: https://github.com/AliaksandrSiarohin/first-order-model#face-swap
Yes, you need to fine-tune fomm then.
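Fine-tuning fomm happens in the first-order-model repo, not this one; from memory its training entry point is run.py, roughly as below (config name and flags should be checked against the fomm README):

```
CUDA_VISIBLE_DEVICES=0 python run.py --config config/vox-256.yaml \
  --device_ids 0 --checkpoint path/to/vox-cpk.pth.tar
```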
Hi @AliaksandrSiarohin, thanks for the amazing work!
I was trying to fine-tune the motion-cosegmentation model on my dataset, but the same problem occurred.
At first I tried to fine-tune the fomm model with the 10-segments config file; the training ran without problems, but when I load the models I get the same error.
Then I tried to fine-tune the motion co-part checkpoint with 15 segments. The training ran without problems, but when trying to test it I got the same error as above.
My goal is to fine-tune the motion co-parts checkpoint on my dataset and then run it with the supervised option, without fomm and using face parsing. What is missing?
A quick update on my problem:
This error occurs when I run with the --supervised flag. But when I run your pretrained model vox-15segments.pth.tar, it works.
When I fine-tune that model or train it from scratch with the vox-256-sem-15segments.yaml config, it only works without the --supervised flag.
So my question is: why does your checkpoint work with the supervised flag, but mine does not?
Supervised uses the original fomm; you should not fine-tune.
But I can use supervised mode with your vox-15segments.pth.tar, and I guess that is the motion co-part model, not fomm. So you mean I should fine-tune fomm to use it with the supervised flag in motion co-part?
Hi @AliaksandrSiarohin,
This is very interesting work.
I wanted to fine-tune the model with a few more new videos. Though I could do the video-processing part, I am unable to run the training code; I am getting the error below. How should I fine-tune the model? I have followed the steps in the repository. Please help me out!
!python train.py --config "config/Retrain_15segments.yaml" --checkpoint "models/vox-cpk.pth.tar"
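One way to see the mismatch described in the first reply is to count the keypoints the fomm checkpoint was trained with; a hedged sketch (the kp_detector module name is an assumption about fomm-style checkpoints):

```python
# Sketch: print the keypoint detector's tensor shapes; the final layer's
# output channels usually equal the number of keypoints (10 for vox-cpk).
import torch

checkpoint = torch.load('models/vox-cpk.pth.tar', map_location='cpu')
kp_state = checkpoint.get('kp_detector', {})
for key, tensor in kp_state.items():
    if key.endswith('weight') and hasattr(tensor, 'shape'):
        print(key, tuple(tensor.shape))
```

A 15-segments config then expects 15-channel segmentation layers, which cannot be loaded from a 10-keypoint checkpoint.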