Recognition works fine when the lighting conditions are the same as those of the training images.
Recognition does not work well if the lighting conditions at recognition time differ even slightly from those of the training images (assuming all training images are taken at one location).
After going through the code, I find that the RGB values of each pixel of the cropped face are given as input to TensorFlow for feature extraction, which explains why recognition is so dependent on the training images.
Could you please let me know how training and recognition can be made independent of lighting conditions?
I am thinking of the following, based on dlib's 68 facial landmarks (a code sketch follows the example below):

Calculate the distance between adjacent landmark points and store them in a 1-D array. For example:

Arr[0] = distance between (x0, y0) and (x1, y1)
Arr[1] = distance between (x1, y1) and (x2, y2)
Arr[2] = distance between (x2, y2) and (x3, y3)
...
Arr[66] = distance between (x66, y66) and (x67, y67)

(Note: the 68 landmarks are indexed 0 to 67, so there are 67 adjacent distances, not 68, and the array has size 67.)

Src:
https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python
https://hackernoon.com/building-a-facial-recognition-pipeline-with-deep-learning-in-tensorflow-66e7645015b8
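A minimal sketch of this distance computation, assuming dlib and OpenCV as in the pyimagesearch post linked above; the predictor filename and the inter-ocular normalization at the end are my assumptions, not part of the original proposal:

```python
import numpy as np
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
# Assumes dlib's standard 68-point model file is available locally.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_distances(image_path):
    """Return the 67 adjacent-landmark distances, or None if no face is found."""
    gray = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2GRAY)
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in range(68)],
                   dtype=np.float32)                    # 68 (x, y) coordinates
    arr = np.linalg.norm(pts[1:] - pts[:-1], axis=1)    # Arr[i] = dist(p_i, p_{i+1})
    # Assumption: dividing by the inter-ocular distance (outer eye corners are
    # landmarks 36 and 45) makes the features invariant to face size in the frame.
    return arr / np.linalg.norm(pts[45] - pts[36])      # shape (67,)
```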
Pass the array Arr as input to TensorFlow for feature extraction and generate a 128-dimensional feature vector (128 floats, not 128 bits).
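As a rough illustration only (not the FaceNet model from the linked pipeline), a tf.keras network with this reduced input shape might look like the sketch below; the layer sizes are arbitrary assumptions:

```python
import tensorflow as tf

# Hypothetical toy network: 67 distances in, 128-dimensional embedding out.
# It would still need to be trained (e.g. with a triplet loss, as in the
# linked FaceNet-style pipeline) before the embeddings mean anything.
embedding_model = tf.keras.Sequential([
    tf.keras.Input(shape=(67,)),          # the 1x67 distance vector
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(128),           # 128 floats, the "feature vector"
    # L2-normalize so embeddings can be compared by Euclidean/cosine distance.
    tf.keras.layers.Lambda(lambda x: tf.math.l2_normalize(x, axis=1)),
])
```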
The above method would:

- Be independent of lighting, since it does not depend on the RGB values of each pixel.
- Reduce the time taken by TensorFlow, since the input for the 128-dimensional feature extraction would shrink from 160×160×3 values to 1×67.
Could you please share your thoughts on this, or is there another way to make face recognition independent of the RGB values of each pixel?
@ashokbugude When I faced this issue, I randomly varied the gamma values of the frames during training to make the model more robust to lighting, and it works fine for me.
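For reference, a minimal sketch of that random-gamma augmentation, assuming OpenCV and NumPy; the gamma range is an illustrative guess, not a value from the comment above:

```python
import numpy as np
import cv2

def random_gamma(frame, low=0.5, high=2.0):
    """Apply a randomly chosen gamma correction to an 8-bit BGR frame."""
    gamma = np.random.uniform(low, high)
    # 256-entry lookup table: out = 255 * (in / 255) ** (1 / gamma)
    table = (255.0 * (np.arange(256) / 255.0) ** (1.0 / gamma)).astype(np.uint8)
    return cv2.LUT(frame, table)

# Each training frame would pass through random_gamma before being fed to the
# network, so the model sees many lighting variants of each face.
```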