lymanblue opened this issue 5 years ago
@lymanblue All the training and validation/test data have been aligned by MTCNN to the size of 112×112.
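In case it helps, here is a rough sketch of that kind of crop-and-resize alignment using the facenet-pytorch MTCNN wrapper; the package choice, paths, and settings below are only assumptions for illustration, not necessarily the exact pipeline used for this repo.

```python
# Hypothetical alignment sketch with facenet-pytorch's MTCNN
# (package, paths, and settings are assumptions, not this repo's exact pipeline).
from facenet_pytorch import MTCNN
from PIL import Image

# Detect the face, then crop and resize it to 112x112 as mentioned above.
mtcnn = MTCNN(image_size=112, margin=0, post_process=False)

img = Image.open("raw/00000/00000-00001.jpg")                    # placeholder input path
aligned = mtcnn(img, save_path="aligned/00000/00000-00001.jpg")  # placeholder output path
if aligned is None:
    print("no face detected, skipping this image")
```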
Thanks.
Also, there is no file like ms1m.py in the dataset directory. Will you provide it in the future, or is the data loader for MS1M the same as the other dataset loaders?
Are the training results on the cleaned-MS1M the same as with the MS1M-V2 provided by InsightFace?
Thank you.
@lymanblue The cleaned-MS1M I used is provided by DeepGlint; it only has 3.9M images, while InsightFace's cleaned version has 5.8M.
The dataloader for MS1M is the same as the CASIA-WebFace one.
Therefore, we have to preprocess the cleaned-MS1M from DeepGlint ourselves (e.g., face alignment) with the aid of the msra_lmk file.
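Something along these lines is what I have in mind, assuming each msra_lmk line gives an image path followed by five (x, y) landmark points; that file format and the helper below are my own assumptions, and the reference template is just the commonly used 112×112 five-point one.

```python
# Hypothetical landmark-based alignment sketch; the msra_lmk line format
# assumed here (image path + five x,y points) is an assumption, not confirmed.
import cv2
import numpy as np
from skimage.transform import SimilarityTransform

# Widely used five-point reference template for 112x112 face crops.
REF_PTS = np.array([[38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
                    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32)

def align_to_112(img_path, landmarks, out_path):
    """Warp a face so its five landmarks match the 112x112 template."""
    img = cv2.imread(img_path)
    src = np.asarray(landmarks, dtype=np.float32).reshape(5, 2)
    tform = SimilarityTransform()
    tform.estimate(src, REF_PTS)
    warped = cv2.warpAffine(img, tform.params[:2], (112, 112))
    cv2.imwrite(out_path, warped)
```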
On the other hand, if we use the MS1M-V2 (already aligned?) from InsightFace, can we use the CASIA-WebFace loader directly for training?
Thank you.
Is the MS1M-IBUG from InsightFace the cropped and aligned result of the cleaned-MS1M?
You can use your own data (MS1M-V2) to train the models directly.
For training the model directly from the MS1M-V2 from InsightFace, do you mean the following steps? (e.g., LFW for validation)
Thank you.
No, train.rec and train.idx are MXNet's data format; the dataloader I provided works on image folders, like this:
```
00000
--- 00000-00001.jpg
--- 00000-00002.jpg
--- 00000-00003.jpg
00001
--- 00001-00001.jpg
--- 00001-00002.jpg
--- 00001-00003.jpg
```
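A folder layout like that can be loaded with a minimal sketch along these lines, using torchvision's ImageFolder; the root path and transform values below are just placeholders.

```python
# Minimal loading sketch for the identity-per-folder layout above
# (root path, batch size, and normalization values are placeholders).
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((112, 112)),   # no-op if images are already aligned to 112x112
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

dataset = datasets.ImageFolder(root="ms1m_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)
print(len(dataset.classes), "identities,", len(dataset), "images")
```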
Thank you~!
Could we use the prepare_data.py from https://github.com/TreB1eN/InsightFace_Pytorch to convert the MXNet format to the specified format? The data formats look similar (identical would be even better).
Well, you can use it to parse the train.rec file to get original images.
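For reference, here is a minimal sketch of dumping a train.rec/train.idx pair into per-identity folders with MXNet's recordio reader; it assumes the usual InsightFace-style packing where record 0 stores the index range, and the paths are placeholders.

```python
# Sketch: unpack an InsightFace-style train.rec/train.idx into per-identity folders.
# Assumes the common packing where record 0 holds the last image index in header.label[0].
import os
import cv2
import mxnet as mx

imgrec = mx.recordio.MXIndexedRecordIO("train.idx", "train.rec", "r")
header0, _ = mx.recordio.unpack(imgrec.read_idx(0))
last_idx = int(header0.label[0])                     # records 1..last_idx-1 are images

for idx in range(1, last_idx):
    header, img = mx.recordio.unpack_img(imgrec.read_idx(idx))   # img is a BGR array
    label = header.label
    label = int(label[0]) if hasattr(label, "__len__") else int(label)
    out_dir = os.path.join("ms1m_images", "%05d" % label)
    os.makedirs(out_dir, exist_ok=True)
    cv2.imwrite(os.path.join(out_dir, "%d.jpg" % idx), img)
```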
Have you used the pre-trained model?
Hi~
Thank you for your great work.
Do the reported accuracy results on the validation data (e.g., LFW, MegaFace) involve a face alignment step (e.g., MTCNN)?
Thank you.