It was running but only printed the options and a line saying "Fitting trees" to the console, without producing any new file, so I didn't know whether it was working or not.
Hi @DoriHp, the training process is a bit heavy: you need at least 16 GB of RAM and a CPU with 4 cores/8 threads. For example, training the full model (68 landmarks) on the iBug 300W dataset takes about 1 hour on an i7 CPU using 4 threads and 13 GB of RAM.
Note that training can only be done on the CPU; the same goes for inference.
@Luca96 Thanks for your fast response. My workstation's CPU is an Intel® Xeon® Processor at 2.20 GHz. RAM usage isn't a problem because it has 16 GB. But with a weak CPU like that, I guess it will take a long time to complete the training process. I will retry and wait for the result.
Another question: I only want to improve the accuracy of the 5-point landmark detector that dlib provides by default. Can I continue training that model, or must I train it from scratch? These are my parameters for the new model:

```python
options.tree_depth = 4
options.nu = 0.1
options.cascade_depth = 15
options.feature_pool_size = 800
options.num_test_splits = 350
options.oversampling_amount = 10
options.oversampling_translation_jitter = 0
options.be_verbose = True  # tells what is happening during the training
options.num_threads = 2
```
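For context, this is how I plug these options into dlib's trainer (a minimal sketch; "training.xml" and "predictor.dat" are placeholder paths):

```python
import dlib

# build the options object and set the parameters listed above
options = dlib.shape_predictor_training_options()
options.tree_depth = 4
options.nu = 0.1
options.cascade_depth = 15
options.feature_pool_size = 800
options.num_test_splits = 350
options.oversampling_amount = 10
options.oversampling_translation_jitter = 0
options.be_verbose = True  # print progress during training
options.num_threads = 2

# train on an iBug-style XML annotation file and save the model to disk
dlib.train_shape_predictor("training.xml", "predictor.dat", options)
```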
Will it create a robust model? At least, I hope it will be more accurate than the default.
Well, with these parameters you should get a good model but I don't know how good compared to the default 5-landmark shape predictor.
To measure the accuracy of your model just use this function:
```python
import dlib

def measure_model_error(model, xml_annotations):
    '''Requires the model path and the XML annotations path.
    Measures the mean error of the model on the given
    XML file of annotations.'''
    error = dlib.test_shape_predictor(xml_annotations, model)
    print("Error of the model: {} is {}".format(model, error))
```
So that you can compare your results with the default model.
If you need more precision, you can try:

```python
feature_pool_size = 1000
oversampling_amount = 20
tree_depth = 5
```

(note: training time will increase)
Here are my results from the measure-error function: ~3.1 and ~8.0. Is this highly accurate? I also want to compare with the default 5-point model, but I'm not sure about the indices of the points it predicts according to this image. They are points 33, 36, 39, 42, and 45, aren't they?
I'll try your parameters later. The only thing I want to get is a robust model <3
Your results are pretty good.
However, it seems that your model overfits a bit: you can try to regularize it by setting the parameter nu to 0.15 or 0.20, and see if the gap between the two errors reduces.
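For example, reusing the `options` object from the training sketch above (the output path is a placeholder):

```python
options.nu = 0.15  # try 0.20 as well
dlib.train_shape_predictor("training.xml", "predictor_nu.dat", options)
```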
If I remember correctly, the 5-landmark model detects the points: 37, 40 (left eye corners), 43, 46 (right eye corners), and 34 (nose tip).
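To double-check, you can run the stock 5-point model on a sample image and print each predicted part (a sketch; the `.dat` file is the pretrained model from the dlib-models repository and "face.jpg" is a placeholder):

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_5_face_landmarks.dat")

img = dlib.load_rgb_image("face.jpg")
for face in detector(img, 1):  # upsample once to find smaller faces
    shape = predictor(img, face)
    for i in range(shape.num_parts):
        p = shape.part(i)
        print("part {}: ({}, {})".format(i, p.x, p.y))
```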
Thanks for your work. I'm going to try it, but I have some questions before starting. First, how long does it take to produce a new model? And is it faster to run on a machine with a strong GPU, or to set the number of threads higher than 1?