ageitgey / face_recognition

The world's simplest facial recognition api for Python and the command line
MIT License

Also running out of memory #699

Open pmi123 opened 5 years ago

pmi123 commented 5 years ago

face-recognition==1.2.3, face-recognition-models==0.3.0
Python version: 3.4.3
Operating System: Ubuntu 14.04
Django 2.0.5, Celery 4.2.1, Redis 2.10.6

Description

I am still getting out-of-memory errors after recognizing 2-5 faces per image in multiple images, one after the other. By "one image after the other", I mean it takes about 30-60 seconds for me to select the next image from disk and load it into the program. I was hoping to make this last step automatic by loading multiple images at a time, but this memory error makes that impossible.

The error I get is

[2018-12-08 09:29:15,604: WARNING/ForkPoolWorker-5] Traceback (most recent call last):
[2018-12-08 09:29:15,604: WARNING/ForkPoolWorker-5] File "/home/mark/python-projects/memorabilia-JSON/biometric_identification/tasks.py", line 75, in find_faces_task
    face_locations = face_recognition.face_locations(image, model="cnn", number_of_times_to_upsample=0)
[2018-12-08 09:29:15,604: WARNING/ForkPoolWorker-5] File "/home/mark/.virtualenvs/memorabilia-JSON/lib/python3.6/site-packages/face_recognition/api.py", line 116, in face_locations
    return [_trim_css_to_bounds(_rect_to_css(face.rect), img.shape) for face in _raw_face_locations(img, number_of_times_to_upsample, "cnn")]
[2018-12-08 09:29:15,605: WARNING/ForkPoolWorker-5] File "/home/mark/.virtualenvs/memorabilia-JSON/lib/python3.6/site-packages/face_recognition/api.py", line 100, in _raw_face_locations
    return cnn_face_detector(img, number_of_times_to_upsample)
[2018-12-08 09:29:15,605: WARNING/ForkPoolWorker-5] MemoryError: std::bad_alloc

My system resources from top:

top - 10:59:59 up 3 days, 17:13,  1 user,  load average: 1.40, 1.33, 1.26
Tasks: 368 total,   4 running, 289 sleeping,   0 stopped,   2 zombie
%Cpu(s):  0.2 us, 12.0 sy,  0.0 ni, 87.1 id,  0.0 wa,  0.0 hi,  0.7 si,  0.0 st
KiB Mem : 16343740 total,  4346864 free,  9669116 used,  2327760 buff/cache
KiB Swap:   999420 total,       60 free,   999360 used.  5877912 avail Mem 

I seem to have ~4.3 GB of free memory, so I am not sure if adding more memory is the answer.

What I Did

I am running the face recognition in a Celery task, called from my Django app, where I select the image (stored on disk) to load.

import face_recognition

# image_path points to the image file selected in the Django app
image = face_recognition.load_image_file(image_path)
face_locations = face_recognition.face_locations(image, model="cnn", number_of_times_to_upsample=0)
face_encodings = face_recognition.face_encodings(image, known_face_locations=face_locations)

I then save the face_locations and face_encodings to a MySQL database for later comparison against some known faces.
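For reference, the task is wired up roughly like this (simplified sketch; the save_biometrics helper is a placeholder for my actual MySQL code, and the task name matches the traceback above):

from celery import shared_task

import face_recognition

@shared_task
def find_faces_task(image_path):
    # image_path comes from the Django view that selected the file on disk
    image = face_recognition.load_image_file(image_path)
    face_locations = face_recognition.face_locations(
        image, model="cnn", number_of_times_to_upsample=0)
    face_encodings = face_recognition.face_encodings(
        image, known_face_locations=face_locations)
    # save_biometrics is a stand-in for the real MySQL persistence code
    save_biometrics(image_path, face_locations, face_encodings)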

Each of the images is 3-4 MB in size (see examples in #670).

I am using model="cnn" based on your recommendation in #670, because without the CNN model some faces in my test set were not being found. Now they are found, but I need to reset the Celery application after each image.

I noticed that you mention image size as a possible cause of this problem (#691). Do you have any insights into the relationship between image size and recognition accuracy? I was thinking of using PIL to reduce the size of each image before it is loaded into face_recognition, but I am not sure what an 'optimal' image size would be, or how reducing the size would affect the accuracy of finding and recognizing faces. Do you have any recommendations for an optimal image size, or a range of sizes, for creating face locations and face encodings?
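For example, something like this is what I had in mind (assuming Pillow; the 1024-pixel cap is an arbitrary placeholder, not a tested value):

import numpy as np
import face_recognition
from PIL import Image

def load_and_shrink(image_path, max_side=1024):
    # Shrink the longest side to max_side before detection; 1024 is
    # an arbitrary guess, since I don't know the optimal size.
    pil_image = Image.open(image_path).convert("RGB")
    pil_image.thumbnail((max_side, max_side), Image.LANCZOS)  # preserves aspect ratio
    return np.array(pil_image)

image = load_and_shrink(image_path)
face_locations = face_recognition.face_locations(image, model="cnn")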

Also, when doing facial recognition, does the unknown image need to be close to the resolution of the images used to create the encoding and location? Or, are encoding and location independent of resolution?

I don't see anything in the API docs about "clean-up" commands to run after face_recognition is done, to free up memory. Are there any such commands or options?

Thanks!

Mark

stefiHB commented 5 years ago

I'm having the same problem. Did you find a way to fix it?

tomasn4a commented 5 years ago

Not sure if this is related, but I'm also having problems with GPU memory. I process images one by one, and I'm actually able to get through around 700 before hitting the out-of-memory error. But once that happens, that's it: nvidia-smi shows 100% GPU utilization even though no processes are running, and if I try to run my script again on a new batch of images, it can't process even one.
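One stopgap I'm thinking about (untested) is to run each batch in a throwaway child process, so the driver frees the GPU memory when the process exits. A rough sketch, where "unknown.jpg" is just a stand-in path:

import multiprocessing as mp

import face_recognition

def _detect(image_path, queue):
    image = face_recognition.load_image_file(image_path)
    queue.put(face_recognition.face_locations(image, model="cnn"))

def detect_in_subprocess(image_path):
    # Detection runs in a short-lived child process; when it exits,
    # its CUDA context is torn down and the GPU memory is released.
    ctx = mp.get_context("spawn")
    queue = ctx.Queue()
    proc = ctx.Process(target=_detect, args=(image_path, queue))
    proc.start()
    locations = queue.get()  # read before join() to avoid a full-pipe deadlock
    proc.join()
    return locations

if __name__ == "__main__":
    print(detect_in_subprocess("unknown.jpg"))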

pmi123 commented 5 years ago

It seemed to be related to the use of swap space. My system (Ubuntu 18.10) showed lots of free RAM while processing the images, but ran out of swap after about 3 images. I looked at this Ask Ubuntu post: https://askubuntu.com/questions/1357/how-to-empty-swap-if-there-is-free-ram
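If it helps, swap usage can also be watched from inside Python with psutil (an extra dependency, not part of face_recognition), e.g. before each image:

import psutil

# Report current swap usage, mirroring the "KiB Swap" line from top
swap = psutil.swap_memory()
print("swap used: {:.0f} MiB of {:.0f} MiB ({:.0f}%)".format(
    swap.used / 2**20, swap.total / 2**20, swap.percent))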

Good luck and please post if you find a solution/work around!

Mark
