Open Nourhan27 opened 5 years ago
Hi,
Apologies for the delayed response.
You're right that for every landmark, scale, and view we need to train a separate patch expert (this can be reduced a bit by using mirrored views: e.g. the left-profile and right-profile patch experts are mirror images of each other, so one can be mirrored rather than retrained).
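To make the combinatorics concrete, here is a minimal sketch of enumerating the training jobs and of the saving from mirroring. The landmark count, scale values, and view names are assumptions for illustration, not OpenFace internals:

```python
from itertools import product

# Hypothetical setup: a 68-point face model, the four scales the shipped
# cen_patches_*.dat models use, and three views (names are assumptions).
landmarks = range(68)
scales = [0.25, 0.35, 0.50, 1.00]
views = ["frontal", "left_profile", "right_profile"]

# Views that can be obtained by mirroring an already-trained view,
# so they need no training of their own.
mirrored = {"right_profile": "left_profile"}

# One training job per (landmark, scale, view), skipping mirrored views.
jobs = [(lm, s, v)
        for lm, s, v in product(landmarks, scales, views)
        if v not in mirrored]

print(len(jobs))  # 68 landmarks * 4 scales * 2 trained views = 544
```

The point is that mirroring removes an entire view's worth of patch experts (here 68 × 4 = 272 jobs) at the cost of a flip at load time.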
To generate models for C++ detection, you first need to convert the Keras models to MATLAB ones using keras2matlab.py, then follow the instructions in matlab_version\models\cen\readme.txt.
Hello,
For CEN patch detector training we have to train a model for every landmark, scale, and view, so if I use 2 views and 4 scales, we will have 8 combinations to train, right?
1- Should we train all of these combinations?
2- How do we combine the models generated for the different landmarks into the final models ready for use in C++ detection: cen_patches_0.25_of.dat, cen_patches_0.35_of.dat, cen_patches_0.50_of.dat, cen_patches_1.00_of.dat?
Thanks,