Open quangnguyenbn99 opened 6 years ago
I still have the same issue. Did you fix it? Please tell me, thank you!
The error arises when `i` in the for loop goes beyond the size of `cropped`. Replace with the code below:

```python
if i > len(cropped):
    print('Running')
    break
else:
    cropped.append(frame[bb[i][1]:bb[i][3], bb[i][0]:bb[i][2], :])
    scaled.append(misc.imresize(cropped[i], (image_size, image_size), interp='bilinear'))
    scaled[i] = cv2.resize(scaled[i], (input_image_size, input_image_size),
                           interpolation=cv2.INTER_CUBIC)
    scaled[i] = facenet.prewhiten(scaled[i])
    scaled_reshape.append(scaled[i].reshape(-1, input_image_size, input_image_size, 3))
    feed_dict = {images_placeholder: scaled_reshape[i], phase_train_placeholder: False}
    emb_array[0, :] = sess.run(embeddings, feed_dict=feed_dict)
    predictions = model.predict_proba(emb_array)
    print(predictions)
    best_class_indices = np.argmax(predictions, axis=1)
    print(best_class_indices)
    best_class_probabilities = predictions[np.arange(len(best_class_indices)), best_class_indices]
    print(best_class_probabilities)
    cv2.rectangle(frame, (bb[i][0], bb[i][1]), (bb[i][2], bb[i][3]), (0, 255, 0), 2)  # box the face
```
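The guard above can be sketched in isolation (the list contents here are stand-ins, not real face crops; note that comparing with `>=` rather than `>` is the safer form, since `cropped[len(cropped)]` is already out of range):

```python
# Minimal sketch of the bounds guard: stop touching cropped[i] once the
# loop index has outrun the list.
cropped = ['face0', 'face1']     # only two crops were actually appended
results = []
for i in range(4):               # loop runs longer than len(cropped)
    if i >= len(cropped):        # guard: cropped[i] would raise IndexError
        break
    results.append(cropped[i])
print(results)
```

This stops the crash, but it silently drops the remaining iterations rather than fixing the index mismatch itself.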
The reason this occurs is these lines:

```python
for i in range(nrof_faces):
    emb_array = np.zeros((1, embedding_size))
    ...
    # inner exception
    if bb[i][0] <= 0 or bb[i][1] <= 0 or bb[i][2] >= len(frame[0]) or bb[i][3] >= len(frame):
        continue
    cropped.append(frame[bb[i][1]:bb[i][3], bb[i][0]:bb[i][2], :])
```
When a bounding box fails the check, the loop `continue`s, which means `i` is incremented but nothing is appended to `cropped`. So there is a chance that `i` becomes greater than the length of `cropped`.
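The drift described above can be reproduced with a minimal sketch (the boxes and the validity check are simplified stand-ins for the real bounding-box test):

```python
# Each skipped box still advances i but appends nothing, so on a later
# iteration i can point past the end of cropped.
boxes = [(10, 10, 50, 50), (-5, 0, 30, 30), (20, 20, 60, 60)]  # middle box invalid
cropped = []
bad_indices = []
for i in range(len(boxes)):
    if boxes[i][0] <= 0:       # simplified inner-exception check
        continue               # i keeps growing, cropped does not
    cropped.append(boxes[i])
    if i >= len(cropped):      # here cropped[i] would raise IndexError
        bad_indices.append(i)
print(bad_indices)             # the third box hits the mismatch
```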
What we can do, then, is decouple the `i` of the for loop from the `i` used to index `cropped`, which is easy to implement. I just put a counter `j` before the for loop and use `cropped[j]`, `scaled[j]`, and `scaled_reshape[j]` instead. Don't forget to increment `j` at the end of each loop iteration, like this:
```python
if nrof_faces > 0:
    ......
    j = 0
    for i in range(nrof_faces):
        emb_array = np.zeros((1, embedding_size))
        ......
        if bb[i][0] <= 0 or bb[i][1] <= 0 or bb[i][2] >= len(frame[0]) or bb[i][3] >= len(frame):
            continue
        cropped.append(frame[bb[i][1]:bb[i][3], bb[i][0]:bb[i][2], :])
        cropped[j] = facenet.flip(cropped[j], False)
        scaled.append(misc.imresize(cropped[j], (image_size, image_size), interp='bilinear'))
        scaled[j] = cv2.resize(scaled[j], (input_image_size, input_image_size),
                               interpolation=cv2.INTER_CUBIC)
        scaled[j] = facenet.prewhiten(scaled[j])
        scaled_reshape.append(scaled[j].reshape(-1, input_image_size, input_image_size, 3))
        feed_dict = {images_placeholder: scaled_reshape[j], phase_train_placeholder: False}
        ......
        j += 1
else:
    print('Unable to align')
```
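An equivalent way to avoid the extra counter entirely is to build each crop in a local variable and append only finished results, so no list is ever indexed by the loop variable and skipping an invalid box is harmless. A minimal sketch, with `frame` and `bb` as stand-ins for the originals:

```python
import numpy as np

def collect_valid_crops(frame, bb):
    """Crop every in-bounds box; `continue` is safe because nothing
    is indexed by the loop counter."""
    h, w = frame.shape[:2]
    crops = []
    for x1, y1, x2, y2 in bb:
        if x1 <= 0 or y1 <= 0 or x2 >= w or y2 >= h:
            continue                      # skip invalid box, no bookkeeping needed
        crops.append(frame[y1:y2, x1:x2, :])  # append only finished work
    return crops

frame = np.zeros((100, 100, 3), dtype=np.uint8)
bb = [(10, 10, 50, 50), (-5, 0, 30, 30), (20, 20, 60, 60)]
print(len(collect_valid_crops(frame, bb)))  # the invalid middle box is dropped
```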
I've done this and it solved the problem, but now when there's more than one face, the label is the same for everyone.
Dear Ms./Mrs.,
After applying the fix from the previous topic, I still hit the same issue:

```
cropped[i] = facenet.flip(cropped[i], False)
IndexError: list index out of range
```

It happens when the number of detected faces is greater than the number of recognized faces.
Do you have any solutions for this case? Thank you very much for your attention.