ageitgey / face_recognition

The world's simplest facial recognition api for Python and the command line
MIT License

IndexError: list index out of range #178

Closed wanesta closed 7 years ago

wanesta commented 7 years ago

IndexError: list index out of range

my code:

```python
import face_recognition

known_image = face_recognition.load_image_file("D:/1.jpg")
unknown_image = face_recognition.load_image_file("D:/2.jpg")

biden_encoding = face_recognition.face_encodings(known_image)[0]
```

wanesta commented 7 years ago

I'm running the code on Windows.

ageitgey commented 7 years ago

If no face is found in the image, the encodings array will be empty. So check the length of the array before you try to access the first element (i.e. element [0]) of the array.

For example:


```python
encodings = face_recognition.face_encodings(known_image)

if len(encodings) > 0:
    biden_encoding = encodings[0]
else:
    print("No faces found in the image!")
    quit()
```
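
For completeness, here is a minimal sketch (untested, reusing the paths from the original snippet) that guards both images and then compares them with `compare_faces`:

```python
import face_recognition

known_image = face_recognition.load_image_file("D:/1.jpg")
unknown_image = face_recognition.load_image_file("D:/2.jpg")

known_encodings = face_recognition.face_encodings(known_image)
unknown_encodings = face_recognition.face_encodings(unknown_image)

# Bail out early if either image contains no detectable face
if not known_encodings or not unknown_encodings:
    print("No faces found in one of the images!")
    quit()

# compare_faces returns one boolean per known encoding
results = face_recognition.compare_faces([known_encodings[0]], unknown_encodings[0])
print("Same person?", results[0])
```
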
wanesta commented 7 years ago

My image is of my dog, and I want to do dog face recognition. How can I do that?

wanesta commented 7 years ago

Thank you very much.

spad commented 6 years ago

Hi @ageitgey, sometimes face_locations finds faces but face_encodings does not. How is that possible? I attached the cropped face from the rectangle returned by face_locations; that same image then fails in face_encodings. I'm using a client/server structure: the client looks for faces in camera frames, crops the images, and sends them to the server, and the server then performs face recognition by encoding the face pictures.

So the face_encodings function receives just the cropped face picture, not the entire frame.

I tried both the hog and cnn models in face_locations, with the same result.

[attached: cropped face image returned by face_locations]

ageitgey commented 6 years ago

@spad if you are only sending the cropped area of the face, you need to tell it that the whole image contains the face. If you don't, it will search for the face within the image and won't know the whole image itself is the face. If it can't find a face inside the image, it won't be able to do anything.

So just tell it that the whole image is the face:

```python
import face_recognition

img = face_recognition.load_image_file("your_cropped_image")

# Assume the whole image is the location of the face
height, width, _ = img.shape

# Locations are in CSS order: top, right, bottom, left
face_location = (0, width, height, 0)

encodings = face_recognition.face_encodings(img, known_face_locations=[face_location])
```

Note: I didn't really test this code, so just double check that the results it gives you are correct.

spad commented 6 years ago

@ageitgey I solved it for now by just oversizing the cropped face before calling face_encodings. But I'm interested in your solution and will try it. Thanks.
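
For reference, one way to read that workaround is padding the crop with an extra border before encoding, so the detector has some context around the face. A minimal sketch, assuming an OpenCV BGR crop; the file name and the 50-pixel black border are arbitrary placeholder choices:

```python
import cv2
import face_recognition

# Placeholder path for the cropped face the server receives
crop = cv2.imread("cropped_face.jpg")

# Pad the crop so the detector sees some margin around the face
pad = 50
padded = cv2.copyMakeBorder(crop, pad, pad, pad, pad,
                            borderType=cv2.BORDER_CONSTANT, value=(0, 0, 0))

# OpenCV loads images as BGR, while face_recognition expects RGB
rgb_padded = cv2.cvtColor(padded, cv2.COLOR_BGR2RGB)

encodings = face_recognition.face_encodings(rgb_padded)
print(f"Found {len(encodings)} face encoding(s)")
```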

ageitgey commented 5 years ago

@SaraHatem check this out - https://github.com/ageitgey/face_recognition/wiki/Common-Errors#list-index-out-of-range-errors

BTW this library won't detect cat faces, only human faces.

SaraHatem commented 5 years ago

@ageitgey OK, thanks a lot. But do you have any recommendations for an animal (cat and dog) facial recognition library or software I can use in my project? I've been searching a lot but nothing works. Thanks a lot in advance.

ageitgey commented 5 years ago

Unfortunately I don't, sorry!

HAKANMAZI commented 4 years ago

> [quoting @ageitgey's suggestion above about passing the whole image as the face location via `known_face_locations`]

I tested it and it works. Thanks!

Rajatkul1998 commented 4 years ago

Just remove the [0] in `biden_encoding = face_recognition.face_encodings(known_image)[0]` if you are passing more than one image.

panchami88 commented 4 years ago

But if we remove the [0], it gives an error like this: `ValueError: operands could not be broadcast together with shapes (1,2) (128,)`
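
That error appears because face_encodings() returns a list of 128-dimension encodings (one per detected face), while compare_faces()/face_distance() expect the second argument to be a single encoding. Rather than dropping the [0], iterate over the list. A minimal sketch, reusing the paths from the original snippet:

```python
import face_recognition

known_image = face_recognition.load_image_file("D:/1.jpg")    # reference photo
unknown_image = face_recognition.load_image_file("D:/2.jpg")  # may contain several faces

known_encodings = face_recognition.face_encodings(known_image)
if not known_encodings:
    raise SystemExit("No face found in the reference image")

# Compare every face found in the unknown image against the reference face
for i, encoding in enumerate(face_recognition.face_encodings(unknown_image)):
    match = face_recognition.compare_faces([known_encodings[0]], encoding)[0]
    print(f"Face #{i}: {'match' if match else 'no match'}")
```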

amanmishra1321 commented 4 years ago

Sir, I get the error "list index out of range" whenever someone captures an image and it is passed to that API.

```python
def facedect(loc):
    # Here loc is the location of my captured image
    cam = cv2.VideoCapture(0)
    s, img = cam.read()
    if s:
        face_1_image = face_recognition.load_image_file(loc)
        face_1_face_encoding = face_recognition.face_encodings(face_1_image)[0]
        # face_1_face_encoding = face_1_face_encoding[0]

        small_frame = cv2.resize(img, (0, 0), fx=0.25, fy=0.25)
        rgb_small_frame = small_frame[:, :, ::-1]

        face_locations = face_recognition.face_locations(rgb_small_frame)
        face_encodings = face_recognition.face_encodings(rgb_small_frame, face_locations)
        if len(face_encodings) > 0:
            face_encodings = face_encodings[0]
            check = face_recognition.compare_faces([face_1_face_encoding], face_encodings)
            print(check)
            if check[0]:
                return True
            else:
                return False
        else:
            print("No face Found")
            quit()
```

And the code I'm using for the camera:

```python
def camera1():
    # Camera 0 is the integrated web cam on my netbook
    camera_port = 0

    # Number of frames to throw away while the camera adjusts to light levels
    ramp_frames = 30

    # global Name
    # Fetching the name of the person
    Dir = 'E:\Practice\Django\Facebook2\Camera\image.jpg'

    # Now we can initialize the camera capture object with the cv2.VideoCapture class.
    # All it needs is the index to a camera port.
    camera = cv2.VideoCapture(camera_port)

    # Captures a single image from the camera and returns it in PIL format
    def get_image():
        # read is the easiest way to get a full image out of a VideoCapture object.
        retval, im = camera.read()
        return im

    # Ramp the camera - these frames will be discarded and are only used to allow v4l2
    # to adjust light levels, if necessary
    for i in range(ramp_frames):
        temp = get_image()

    print("Taking image...")
    # Take the actual image we want to keep
    camera_capture = get_image()
    file = Dir

    # A nice feature of the imwrite method is that it will automatically choose the
    # correct format based on the file extension you provide. Convenient!
    cv2.imwrite(file, camera_capture)
    # cv2.imwrite(UserProfiles.objects.create(extra_ident=id), camera_capture)

    # You'll want to release the camera, otherwise you won't be able to create a new
    # capture object until your script exits
    del camera
    return file
```

Tareqtaleb45 commented 4 years ago

IndexError: list index out of range. My code is:

```python
import cv2
import dlib
import PIL.Image
import numpy as np
from imutils import face_utils
import argparse
from pathlib import Path
import os
import ntpath

print('[INFO] Starting System...')
print('[INFO] Importing pretrained model..')
pose_predictor_68_point = dlib.shape_predictor("pretrained_model/shape_predictor_68_face_landmarks.dat")
pose_predictor_5_point = dlib.shape_predictor("pretrained_model/shape_predictor_5_face_landmarks.dat")
face_encoder = dlib.face_recognition_model_v1("pretrained_model/dlib_face_recognition_resnet_model_v1.dat")
face_detector = dlib.get_frontal_face_detector()
print('[INFO] Importing pretrained model..')


def transform(image, face_locations):
    coord_faces = []
    for face in face_locations:
        rect = face.top(), face.right(), face.bottom(), face.left()
        coord_face = max(rect[0], 0), min(rect[1], image.shape[1]), min(rect[2], image.shape[0]), max(rect[3], 0)
        coord_faces.append(coord_face)
    return coord_faces


def encode_face(image):
    # DETECT FACES
    face_locations = face_detector(image, 1)
    face_encodings_list = []
    landmarks_list = []
    for face_location in face_locations:
        shape = pose_predictor_68_point(image, face_location)
        face_encodings_list.append(np.array(face_encoder.compute_face_descriptor(image, shape, num_jitters=1)))
        shape = face_utils.shape_to_np(shape)
        landmarks_list.append(shape)
    face_locations = transform(image, face_locations)
    return face_encodings_list, face_locations, landmarks_list


def easy_face_reco(frame, known_face_encodings, known_face_names):
    rgb_small_frame = frame[:, :, ::-1]
    # ENCODING FACE
    face_encodings_list, face_locations_list, landmarks_list = encode_face(rgb_small_frame)
    face_names = []
    for face_encoding in face_encodings_list:
        if len(face_encoding) == 0:
            return np.empty((0))
        # CHECK DISTANCE BETWEEN KNOWN FACES AND FACES DETECTED
        vectors = np.linalg.norm(known_face_encodings - face_encoding, axis=1)
        tolerance = 0.6
        result = []
        for vector in vectors:
            if vector <= tolerance:
                result.append(True)
            else:
                result.append(False)
        if True in result:
            first_match_index = result.index(True)
            name = known_face_names[first_match_index]
        else:
            name = "Unknown"
        face_names.append(name)

    for (top, right, bottom, left), name in zip(face_locations_list, face_names):
        cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
        cv2.rectangle(frame, (left, bottom - 30), (right, bottom), (0, 255, 0), cv2.FILLED)
        cv2.putText(frame, name, (left + 2, bottom - 2), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 0), 1)

    for shape in landmarks_list:
        for (x, y) in shape:
            cv2.circle(frame, (x, y), 1, (255, 0, 255), -1)


if __name__ == '__main__':
    print('[INFO] Importing faces...')
    face_to_encode_path = ['input/ahmed12.png']
    known_face_encodings = []
    for face_to_encode_path in face_to_encode_path:
        image = PIL.Image.open(face_to_encode_path)
        image = np.array(image)
        face_encoded = encode_face(image)[0][0]
        known_face_encodings.append(face_encoded)
    known_face_names = ["taleb"]
    print('[INFO] starting webcam ...')
    video_capture = cv2.VideoCapture(0)
    print('[INFO] Webcam well started ')
    print('[INFO] Detecting...')
    while True:
        ret, frame = video_capture.read()
        easy_face_reco(frame, known_face_encodings, known_face_names)
        cv2.imshow('live Reconnaissance faciale App', frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    print('[INFO] Stopping System')
    video_capture.release()
    cv2.destroyAllWindows()
```

It shows the following error:

```
Traceback (most recent call last):
  File "C:/easy_facial_recognition-master/tareq.py", line 79, in <module>
    face_encoded = encode_face(image)[0][0]
IndexError: list index out of range
```

Dipanshu72 commented 3 years ago

Sir, after removing the [0] from `encode = face_recognition.face_encodings(img)`, I get an error at `matches = face_recognition.compare_faces(encodeListKnown, encodeFace)`:

```
VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  return np.linalg.norm(face_encodings - face_to_compare, axis=1)
Traceback (most recent call last):
  File "d:\face-recognition\AttendanceProject.py", line 46, in <module>
    matches = face_recognition.compare_faces(encodeListKnown, encodeFace)
  File "C:\Python39\lib\site-packages\face_recognition-1.4.0-py3.9.egg\face_recognition\api.py", line 226, in compare_faces
    return list(face_distance(known_face_encodings, face_encoding_to_check) <= tolerance)
  File "C:\Python39\lib\site-packages\face_recognition-1.4.0-py3.9.egg\face_recognition\api.py", line 75, in face_distance
    return np.linalg.norm(face_encodings - face_to_compare, axis=1)
ValueError: operands could not be broadcast together with shapes (4,) (128,)
```
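
That combination of warning and error usually means encodeListKnown is ragged: at least one entry is not a single 128-dimension encoding, for example an empty result from an image with no detectable face, or a whole list of encodings appended instead of its first element. The code that builds encodeListKnown isn't shown here, so this is only a sketch of one way to build it so that every entry has the shape compare_faces expects (the helper name and paths are placeholders):

```python
import face_recognition

def build_known_encodings(image_paths):
    """Hypothetical helper: collect one 128-d encoding per known image,
    skipping images where no face is detected."""
    encodings = []
    for path in image_paths:
        image = face_recognition.load_image_file(path)
        found = face_recognition.face_encodings(image)
        if found:
            encodings.append(found[0])  # a single 128-d vector, not the whole list
        else:
            print(f"Skipping {path}: no face found")
    return encodings

encodeListKnown = build_known_encodings(["known/person1.jpg", "known/person2.jpg"])
```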

Dipanshu72 commented 2 years ago

You have to install some library

On Fri, Jul 15, 2022, 11:46 AM phucsin @.***> wrote:

Hello, I have a problem and need help. My code runs normally, but when scanning a face it does not show the name I need, only the number of the id. Here is my code:

```python
id = 0

# names related to ids: example ==> Marcelo: id=1, etc
names = ['none', 'loc', 'tien']

# Initialize and start realtime video capture
cam = cv2.VideoCapture(0)
cam.set(3, 640)  # set video width
cam.set(4, 480)  # set video height

# Define min window size to be recognized as a face
minW = 0.1 * cam.get(3)
minH = 0.1 * cam.get(4)

while True:
    ret, img = cam.read()
    img = cv2.flip(img, -1)  # Flip vertically
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.5,
        minNeighbors=5,
        minSize=(int(minW), int(minH)),
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        id, confidence = recognizer.predict(gray[y:y + h, x:x + w])

        # Check if confidence is less than 100 ==> "0" is perfect match
        if confidence < 95:
            name = names[id]
            confidence = " {0}%".format(round(100 - confidence))
            GPIO.output(relay, 0)
```
