vishalathreya opened this issue 8 years ago
Fine-tuning is pretty much the same as training from scratch: you prepare an LMDB. For a new dataset, look into training/genJSON.m
and write your own code on top of it to generate a JSON file with exactly the same fields as those generated for the MPII, LSP, and FLIC datasets. The rest of the LMDB-creation steps are then the same.
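The per-image annotation entries described above can be sketched as follows. This is only an illustration: the field names (`dataset`, `img_paths`, `isValidation`, `joint_self`, `numOtherPeople`) are assumptions about the schema; check what training/genJSON.m actually emits for MPII/LSP/FLIC and match those fields exactly.

```python
import json

# NOTE: field names below are hypothetical -- mirror the exact schema
# produced by training/genJSON.m for MPII/LSP/FLIC.
def make_annotation(img_path, joints, dataset_name, is_validation=False):
    """Build one annotation entry; `joints` is a list of [x, y, visibility] rows."""
    return {
        "dataset": dataset_name,          # e.g. "MyDataset"
        "img_paths": img_path,            # path relative to the image root
        "isValidation": float(is_validation),
        "joint_self": joints,             # one [x, y, visibility] row per joint
        "numOtherPeople": 0,              # this sketch assumes a single person
    }

entries = [
    make_annotation("images/000001.jpg", [[120.0, 80.0, 1.0]] * 14, "MyDataset"),
]
with open("dataset.json", "w") as f:
    json.dump({"root": entries}, f)
```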
Hi @shihenw, I am also trying to fine-tune the latest model you pushed, which has the following deploy prototxt: pose_deploy_resize.prototxt.
genProto.py doesn't seem to produce the same architecture as this model. Could you send me the solver and prototxts used to train it?
Some of my training images don't have coordinates for all 14 joints, and I want the loss function to ignore the missing ones. What should I do?
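One common approach to this (a sketch of the general technique, not the repo's built-in mechanism) is to carry a per-joint visibility flag and zero out the loss contribution of unannotated joints by masking the per-joint heatmaps before computing the L2 loss:

```python
import numpy as np

def masked_l2_loss(pred_maps, gt_maps, joint_visible):
    """L2 heatmap loss that skips unannotated joints.

    pred_maps, gt_maps: arrays of shape (num_joints, H, W)
    joint_visible: boolean array of shape (num_joints,), False where the
    joint has no ground-truth coordinate.
    """
    # Broadcast the per-joint mask over the spatial dimensions.
    mask = joint_visible.astype(pred_maps.dtype)[:, None, None]
    diff = (pred_maps - gt_maps) * mask  # missing joints contribute zero
    return 0.5 * np.sum(diff ** 2)
```

In a Caffe-style pipeline the same effect is usually achieved by zeroing the ground-truth heatmap channels and applying the mask in a custom layer, so that missing joints also contribute zero gradient.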
Thanks for sharing your work. Cheers!
Hi @shihenw, I followed your instructions (your comments on 21 Jul) and started fine-tuning my model from the provided LEEDS model (from get_model.sh). I used 1% of the MPII data (300 images) as training data. However, the performance of the learned model (after 20K iterations) is far worse than that of the original LEEDS model on the MPII validation set.
Your help is appreciated! Many thanks!
@Frandre have you solved the training problem? I am facing the same issue: the model does not converge.
Hi @shihenw, I have the PPSS pedestrian dataset, in which the ground truth is a color label map for each body part. Assuming I extract the exact pixel coordinates of the head, neck, etc., how do I fine-tune the MPII model on this dataset? How does CPM take its input data? How should I represent the ground truth of the pedestrian data so that I can train the CPM net on it?
Thanks a lot!!!
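For the coordinate-extraction step mentioned above, one simple option is to take the centroid of each part's pixels in the label map. This is only a sketch: the RGB palette below is made up (check the PPSS documentation for the actual part colors), and a centroid is only a rough proxy for a joint location such as the head top.

```python
import numpy as np

# Hypothetical RGB palette -- substitute the dataset's actual part colors.
PART_COLORS = {
    "head": (255, 0, 0),
    "torso": (0, 255, 0),
}

def part_centroid(label_map, color):
    """Return the (x, y) centroid of pixels matching `color`, or None.

    label_map: uint8 array of shape (H, W, 3).
    """
    mask = np.all(label_map == np.array(color, dtype=label_map.dtype), axis=-1)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # this part is absent (e.g. occluded) in the image
    return float(xs.mean()), float(ys.mean())

def extract_joints(label_map):
    """Map each part name to its centroid (or None if absent)."""
    return {name: part_centroid(label_map, c) for name, c in PART_COLORS.items()}
```

The resulting (x, y) coordinates can then be written into the same JSON annotation format used for the other datasets, with absent parts marked as not visible.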