Thiswang opened this issue 6 years ago
You can retrain the model in theory, but it's very difficult since you need so much training data. The original model was trained on ~3 million faces and you'd need around that much data to re-train it. I'm trying to figure out a way to allow people to submit more data and provide re-trained models for everyone, but it's tricky.
The project is excellent! It is amazing and easy to use, but sometimes the code gets the wrong result: person A is in the picture, but the window shows B's name. I am Chinese, and I am curious how many Asian faces the model was trained on in total, and whether it works well on Asian faces.
My English is not very good, so I am not sure if you can understand what I am saying. Thank you so much.
I would love to build up a bigger training set that has a better mix of people from different parts of the world. But it is hard to find datasets with millions of pictures of people from different countries sorted by identity.
If anyone has access to that kind of data, please email me :)
Recently I found a big public dataset. You can take a look at it; I hope it helps :-) Here is the Baidu SkyDrive URL: https://pan.baidu.com/s/1qZ6kM0O. The dataset is quite large.
Please see this link, https://www.kairos.com/blog/60-facial-recognition-databases, which lists more than 60 facial recognition datasets, covering both 2D and 3D data.
Hopefully it helps :)
How about using transfer learning? I guess the reason people want to retrain the model is low accuracy for a specific use case, such as a particular ethnicity. So how about using a smaller, domain-specific dataset for transfer learning? (A rough sketch follows after this comment.)
Though I am not quite sure whether the dlib model can do transfer learning.
If there’s any other idea, please share with us.
Thanks 🙏
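For anyone who wants to try this idea, a lightweight alternative to retraining the dlib network is to keep its 128-d encodings frozen and train a small classifier on top of them. The following is only a rough sketch, not part of this repo: it assumes scikit-learn is installed and a hypothetical folder layout of `train/<person_name>/*.jpg`.

```python
# Rough sketch: "transfer learning" by training a small classifier
# on top of the frozen 128-d encodings from face_recognition.
# Assumes a hypothetical folder layout train/<person_name>/*.jpg.
import os
import face_recognition
from sklearn.svm import SVC

def load_encodings(root="train"):
    encodings, labels = [], []
    for person in os.listdir(root):
        person_dir = os.path.join(root, person)
        if not os.path.isdir(person_dir):
            continue
        for filename in os.listdir(person_dir):
            image = face_recognition.load_image_file(os.path.join(person_dir, filename))
            faces = face_recognition.face_encodings(image)
            if faces:                       # skip images where no face was found
                encodings.append(faces[0])  # use the first detected face
                labels.append(person)
    return encodings, labels

X, y = load_encodings()
classifier = SVC(kernel="linear")  # any simple classifier works here
classifier.fit(X, y)

# Predict the identity of a new face
unknown = face_recognition.load_image_file("unknown.jpg")
unknown_encodings = face_recognition.face_encodings(unknown)
if unknown_encodings:
    print(classifier.predict([unknown_encodings[0]])[0])
```

This does not change the underlying embedding, so it will not fix systematic bias in the encodings themselves, but it is often enough when the goal is recognizing a fixed set of people.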
+1
@cftang0827 Can we do transfer learning? Is there any update on this topic?
I have done this before: I used my own dataset and the face_recognition API to get the face vectors, and then trained another model on those vectors with center loss. However, I got a bad result; I think it may have come from a bad dataset, but I am not sure.
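For reference, here is a minimal sketch of that kind of setup: a small head network trained on the precomputed 128-d face_recognition vectors with softmax plus center loss. PyTorch is assumed, and all layer sizes and the loss weight are illustrative, not the values actually used above.

```python
# Minimal sketch (PyTorch assumed): train a small head on precomputed
# 128-d face_recognition vectors with softmax + center loss.
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Penalizes the distance between each feature and its class center."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        centers_batch = self.centers[labels]              # (batch, feat_dim)
        return ((features - centers_batch) ** 2).sum(dim=1).mean() / 2

num_classes, feat_dim = 100, 64                           # illustrative sizes
head = nn.Sequential(nn.Linear(128, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim))
classifier = nn.Linear(feat_dim, num_classes)
center_loss = CenterLoss(num_classes, feat_dim)
optimizer = torch.optim.Adam(
    list(head.parameters()) + list(classifier.parameters()) + list(center_loss.parameters()),
    lr=1e-3,
)

def training_step(encodings, labels, loss_weight=0.01):
    """encodings: (batch, 128) tensor of face_recognition vectors."""
    features = head(encodings)
    loss = nn.functional.cross_entropy(classifier(features), labels)
    loss = loss + loss_weight * center_loss(features, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

If the new embedding still separates people poorly, the cause is more likely label noise or too few images per identity than the loss function itself.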
What dataset did you use?
I am using your face_distance.py example. The project is excellent! But recently I found that some images of different people can get a very low "distance" score, so I was wondering if I could retrain the model, but I did not find a way to do it. Can you give me some information about that?
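As a stopgap while retraining is not an option, you can at least tighten the matching threshold: compare_faces uses a tolerance of 0.6 by default, and lowering it trades fewer false matches for more false rejections. A small sketch follows; the file names are placeholders.

```python
# Sketch: compare two faces and tighten the matching threshold.
# "person_a.jpg" / "person_b.jpg" are placeholder file names.
import face_recognition

encoding_a = face_recognition.face_encodings(face_recognition.load_image_file("person_a.jpg"))[0]
encoding_b = face_recognition.face_encodings(face_recognition.load_image_file("person_b.jpg"))[0]

distance = face_recognition.face_distance([encoding_a], encoding_b)[0]
print("distance:", distance)

# Default tolerance is 0.6; a stricter value such as 0.5 reduces
# false matches at the cost of more false rejections.
print(face_recognition.compare_faces([encoding_a], encoding_b, tolerance=0.5))
```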