Closed: severin-lemaignan closed this issue 9 years ago
I don't want to put the file into the repo or dlib archive file since it's so big. But clarifying the license is a good idea. I'll put the file into a github repo with an appropriate license statement later today.
There is now a repo with the model in it and an appropriate license: https://github.com/davisking/dlib-models
Great, thank you!
No problem.
Is there a smaller model? Suitable for use on mobile devices?
No. If you want a smaller model you will have to train your own.
What would be really useful in this repo is to add the code and scripts used to reproduce that model file. Hopefully over time people would contribute more complex models, enhance them, etc.
Regarding a smaller model for the mobile use case:
Is it possible to extract the data from the "shape_predictor_68_face_landmarks.dat" file (95 MB) and produce a smaller version (say 50 MB)? A little loss in landmark accuracy is acceptable.
This would let us avoid training our own model.
No. Just train a new model. The data is available and it doesn't take that long.
We would like to reduce the .dat file size while keeping the number of landmarks at 68; we are ready to compromise on landmark detection accuracy.
I experimented with converting the file's deserialized data (forests, anchor_idx, and deltas) from 32 bits to 16 bits and storing it as a smaller .dat (the serialization code was modified accordingly), but was unable to succeed.
So if I retrain a new model with 68 points, will it result in a 95 MB .dat file again? Or are any changes to the shape_predictor_trainer parameters needed/recommended to reduce the size in the 68-point case?
Also, please provide me links to the 68-point training data.
Thanks
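For what it's worth, the precision cost of a 32-to-16-bit conversion like the one described above can be checked with Python's standard struct module (the 'e' format is IEEE 754 half precision). This is a generic illustration of the rounding error, not dlib's actual serialization format:

```python
import struct

def to_half_and_back(value):
    """Round-trip a 32-bit float through IEEE 754 half precision."""
    return struct.unpack('<e', struct.pack('<e', value))[0]

original = 0.123456789
quantized = to_half_and_back(original)
error = abs(original - quantized)
print(f"original={original} half={quantized} error={error:.2e}")
```

Half precision keeps only about three significant decimal digits and cannot represent values above ~65504, so small landmark deltas survive reasonably well but larger values do not; a mismatch like that could explain why a naively modified serializer fails.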
Data is available here: http://dlib.net/files/data/. Just remove the landmarks you don't want and the resulting file will be smaller. Experiment with the training parameters, see what you get, and decide whether you like it for your application.
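For anyone wanting to script that "remove landmarks" step: below is a minimal sketch using only the Python standard library, assuming the imglab-style XML layout used by the files above (each `<box>` contains `<part name='00'/>` through `<part name='67'/>`; the KEEP set is a made-up example):

```python
import xml.etree.ElementTree as ET

# Hypothetical subset to keep; names are zero-padded as in the iBUG XML files.
KEEP = {f"{i:02d}" for i in range(36, 48)}  # e.g. just the 12 eye points

def prune_parts(in_path, out_path, keep=KEEP):
    """Rewrite the training XML, dropping every <part> not listed in `keep`."""
    tree = ET.parse(in_path)
    for box in tree.getroot().iter("box"):
        for part in list(box.findall("part")):
            if part.get("name") not in keep:
                box.remove(part)
    tree.write(out_path)
```

Retraining on the pruned file then produces a model whose size scales with the number of remaining points.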
Yes I get it. You can modify it however you like. The only way to see if the accuracy is still good enough for what you care about is to try it and see.
Thanks for the reply & link :)
I trained with your ibug_300W_large_face_landmark_dataset files and got a .dat file of 63.3 MB (45.5 MB after zip). This size will fit within the mobile app store limits, thanks a lot :)
Note: I commented out the following lines in train_shape_predictor_ex.cpp:
trainer.set_oversampling_amount(300);
trainer.set_nu(0.05);
trainer.set_tree_depth(2);
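A rough back-of-envelope sketch of why those parameters matter for file size: most of the .dat is the leaf matrices of the regression trees, about cascade_depth x trees_per_cascade x 2^tree_depth leaves, each storing a 2-D float delta per landmark. The defaults below are assumptions taken from the shape_predictor_trainer documentation, and the estimate ignores split features and serialization overhead:

```python
def approx_leaf_bytes(cascade_depth=10, trees_per_cascade=500, tree_depth=4,
                      num_parts=68, bytes_per_float=4):
    """Approximate bytes spent on leaf deltas in a trained shape predictor."""
    leaves = cascade_depth * trees_per_cascade * 2 ** tree_depth
    return leaves * num_parts * 2 * bytes_per_float

mb = 2 ** 20
print(f"tree depth 4: {approx_leaf_bytes() / mb:.1f} MB")
print(f"tree depth 2: {approx_leaf_bytes(tree_depth=2) / mb:.1f} MB")
```

Halving tree_depth divides the leaf count by four, which matches the intuition that set_tree_depth(2) shrinks the file substantially at some cost in accuracy, and that dropping landmarks shrinks it linearly in num_parts.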
@gopi77 and what loss of precision do you get?
I didn't measure the loss; visually, the new 68 landmarks appear proper and acceptable.
@gopi77 I also need the minified .dat file for an Android app, but I have a problem training it myself (hardware issue). Can you please share your .dat file with us? Thanks
Hi
Please find the link to the .dat file: https://www.dropbox.com/s/ycbpznbjp165kae/sp.dat?dl=0
Regards Gopi. J
Thanks for the link @gopi77 , you're the best!
Does somebody have a model they could share with fewer than 68 landmarks? Or the code to train such a model? I'm trying to generate a model that will be less than 50 MB.
There is an example program that shows how to train it: http://dlib.net/train_shape_predictor_ex.cpp.html
@davisking I'm also interested in training, say, a 20-point model. How do I achieve that?
Do I have to manually (with some script) adjust the iBUG dataset? What is the general advice for making the model smaller? We are ready to sacrifice some precision, but we would like to minimize the losses.
I would appreciate any hints.
All you need to do is run the training code on training data with just 20 points in it.
As for the other things, either read the paper describing the algorithm to understand how it works or just run it and see what happens with different settings. I'm not going to explain how it works here when there is a very well written paper that explains it in detail.
@davisking Yeah, I understand. Is it possible to specify exactly which 20 points we want?
Is there any info about quantizing the .dat file to 8/16-bit floats?
Thanks for your cooperation
You want the points you want. How am I to know what those points are? I'm not a mind reader.
Anyway, read the paper.
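To make the "pick your own points" answer concrete: choose 20 of the 0-67 iBUG indices that matter for your application and keep only those part names in the training XML. The selection below is purely hypothetical (jaw extremes, brow ends, eye corners, nose, mouth outline); adjust it to your needs:

```python
# Hypothetical 20-point selection in iBUG 68-point numbering.
KEEP_20 = {0, 8, 16,          # jaw: left end, chin, right end
           17, 21, 22, 26,    # eyebrow inner/outer ends
           36, 39, 42, 45,    # eye corners
           30, 31, 35,        # nose tip and nostril edges
           48, 51, 54, 57,    # outer mouth: corners, top, bottom
           60, 64}            # inner mouth corners
KEEP_NAMES = sorted(f"{i:02d}" for i in KEEP_20)
print(len(KEEP_NAMES), KEEP_NAMES)
```

Deleting every other `<part>` element from the training XML and retraining yields a 20-point model.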
I understand the people wanting the trained file, because the same thing happened to me. I don't have C++ compiling experience and don't know how to train it with fewer points. I have also tried quantization, but there's very poor info about it :c Hope you understand.
@davisking May I know the minimum hardware requirements to train 68 landmarks with the data from https://ibug.doc.ic.ac.uk/resources/facial-point-annotations/? I am trying to run it on my laptop, but it seems to stop working after 2 hours. The code I'm using is exactly the same as http://dlib.net/train_shape_predictor_ex.cpp.html; I only changed the XML file name to match their data.
No fancy hardware is required. Any normal laptop or desktop should suffice. http://dlib.net/faq.html#Whyisdlibslow
Can we apply the detected 68 feature points to a facial model and make it move along with them?
Hi friends, I trained dlib myself with the data from http://dlib.net/files/data/ and I set these parameters:
trainer.set_oversampling_amount(300);
trainer.set_nu(0.05);
trainer.set_tree_depth(2);
But I got an accuracy of ~14% on training and a model size of ~17 MB. Please let me know why.
Note: I tried modifying some parameters, like set_nu(0.01), tree_depth(5), cascade depth, etc., but I still got low accuracy. Thanks!
For a few months now, dlib has come with a state-of-the-art face tracker, and provides a usage example that is impressive. This example requires a trained model, and suggests downloading it from here: http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2. The model is, however, not part of the dlib source, and no information regarding its license is provided.
Since this 68-feature face tracker with this model is by itself extremely valuable, I would suggest having it directly bundled with dlib (like OpenCV, which provides trained Haar classifiers for face detection). If the size of the file is a concern, it could possibly belong to a different repo (dlib-models or dlib-data, for instance).
In any case, it would be nice to know the license of the model (at first sight, CC-BY would be a possible match to the Apache 2 license used for the dlib source).