GilLevi / AgeGenderDeepLearning


Random age and gender predictions #4

Closed vigneshmahalingam closed 6 years ago

vigneshmahalingam commented 8 years ago

Hi,

Thanks for making such an awesome project.

I got everything set up and working on my MacBook Pro running El Capitan. I ran the prediction on the example image, "example_image.jpg", and it gave the correct result (Female, age group 0-2).

But when I tried to predict some other images that I googled, the results were inconsistent. Is there something I am missing? Do the input images need to be of a particular resolution, or are there any other constraints the input image must meet?

I am attaching the image I tried to predict, along with the console log.

oldman

out.txt

vigneshmahalingam commented 8 years ago

This is just one sample. I tried nearly 20 images that I downloaded from the internet, all frontal faces. Most of the time I got the wrong age and gender.

Help greatly appreciated. Thanks!

GilLevi commented 8 years ago

Hi,

Thank you for your interest in our work.

It is disturbing that the models couldn't predict the correct labels for any of the images. Could you please mail me the images and the code you used for prediction? I'll run them through the code I use to predict the labels and try to figure out what's wrong.

My mail is gil.levi100@gmail.com

Best, Gil

vigneshmahalingam commented 8 years ago

Thanks very much, Gil. I have mailed you the script, the sample images used for testing, and the actual and predicted results.

I think I must be doing something terribly wrong in the configuration. :)

vigneshmahalingam commented 8 years ago

By the way, I am running:

* Mac OS X 10.11 (El Capitan)
* OpenCV 3.1.0
* Caffe (the latest and greatest from the Caffe git)
* Python 2.7.11

I also changed io.py as per the instructions given on your site.

Thank you.

vigneshmahalingam commented 8 years ago

A big thanks to Gil.

For other folks who face the same problem, here is the summary of the mail communication between myself and Gil:

The predictions get better if the faces are cropped out of the images and aligned. So instead of feeding raw images to the predictor, we crop the faces and align them.

This is the piece of code which does that:

detector = dlib.get_frontal_face_detector()

start_time = time.time()
im_name = 'teengirl.jpg'
img = io.imread(os.path.join('./', im_name))
faces = detector(img)

input_image = caffe.io.load_image(os.path.join('./', im_name))
cropped_face = input_image[faces[0].top():faces[0].bottom(), faces[0].left():faces[0].right(), :]
h = faces[0].bottom() - faces[0].top()
w = faces[0].right() - faces[0].left()

F = 0.1
cropped_face_big = input_image[faces[0].top() - h_F:faces[0].bottom() + h_F, faces[0].left() - w_F:faces[0].right() + w_F, :]

prediction = age_net.predict([cropped_face_big])
print 'predicted age:', age_list[prediction[0].argmax()]

prediction = gender_net.predict([cropped_face_big])
print 'predicted gender:', gender_list[prediction[0].argmax()]

pythonanonuser commented 8 years ago

Could you explain how the code you posted aligns the images? It seems to me that you may just be expanding the cropped area, since dlib's face detector returns a much narrower crop that doesn't include the entire head.

What's also confusing to me is why you perform the first crop and save it into cropped_face. It doesn't seem like you use that variable anywhere in the code.

GilLevi commented 8 years ago

Hi @12rohanb,

The code posted does not align the images, it only crops the face. And you are right, cropped_face is indeed not used (well, it's just a quick snippet).
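
If actual alignment is needed (rather than just a crop), something along these lines should work with newer dlib versions that expose get_face_chip and the 68-point shape predictor (the paths below are placeholders and not part of this repo):

    import dlib
    import numpy as np
    from skimage import io

    # Placeholder paths -- adjust per your local installation.
    predictor_path = 'shape_predictor_68_face_landmarks.dat'
    im_name = 'example_image.jpg'

    detector = dlib.get_frontal_face_detector()
    shape_predictor = dlib.shape_predictor(predictor_path)

    img = io.imread(im_name)
    faces = detector(img)

    if len(faces) > 0:
        # Locate the 68 facial landmarks, then warp the face to a canonical pose.
        landmarks = shape_predictor(img, faces[0])
        aligned_face = dlib.get_face_chip(img, landmarks, size=256, padding=0.25)
        # get_face_chip returns a uint8 RGB array; scale it to [0, 1] floats to
        # match what caffe.io.load_image produces before calling predict().
        aligned_face = aligned_face.astype(np.float32) / 255.0
        # aligned_face can now be passed to age_net.predict([aligned_face]), etc.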

Best, Gil

gautamsingaraju commented 8 years ago

Gil, did you use the OpenCV Haar cascade or dlib to extract faces? Could you also please talk about your data preparation stage and the face alignment settings? Gautam

GilLevi commented 8 years ago

Hi, I used the aligned faces already provided in the dataset. The details about the alignment are given on the project page: http://www.openu.ac.il/home/hassner/Adience/data.html

and in the original paper: Eidinger, Eran, Roee Enbar, and Tal Hassner. "Age and gender estimation of unfiltered faces." IEEE Transactions on Information Forensics and Security 9.12 (2014): 2170-2179.

victoriastuart commented 7 years ago

@vigneshmahalingam: thank you for sharing but that code is missing import statements and has bugs (e.g. undeclared variable h_F) ... :-(
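
Presumably the intent of those lines was to expand the detected box by a fraction F of its height and width; a guess at what they were meant to be (not confirmed by the original poster):

    F = 0.1
    hF = int(h * F)
    wF = int(w * F)
    cropped_face_big = input_image[faces[0].top() - hF:faces[0].bottom() + hF,
                                   faces[0].left() - wF:faces[0].right() + wF, :]
    # Note: near the image border this can produce negative indices, which
    # numpy interprets as counting from the other end; clamping with max(0, ...)
    # would be safer.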

victoriastuart commented 7 years ago

With regard to my comment to @vigneshmahalingam (above), this script works. Be sure to edit your image and other paths, as indicated, per your local installation.

#!/usr/bin/env  python
# coding: utf-8

# ----------------------------------------------------------------------------
# CROP IMAGES VIA DLIB:
# ---------------------

# Source: vigneshmahalingam via Gil Levi (pers. comm., indicated); posted at
# https://github.com/GilLevi/AgeGenderDeepLearning/issues/4

# Edited by Victoria Stuart
# Environment: Python 2.7 venv: {Caffe '1.0.0-rc3' | Dlib '19.2.99'} installed ...

# ----------------------------------------------------------------------------
# PYTHON 2 FUTURE PRINT FROM PYTHON 3:
# ------------------------------------

# must appear at top of script:
from __future__ import print_function

# ----------------------------------------------------------------------------
# CAFFE VERBOSITY (WARNINGS):
# ---------------------------

# Place near top of script, BEFORE "import caffe" statement:

import os
os.environ['GLOG_minloglevel'] = '2'

# 0: debug; 1:info | 2:warnings; 3: errors
#os.environ['GLOG_minloglevel'] = '2'

# ----------------------------------------------------------------------------
# (REMAINING) IMPORT STATEMENTS:
# ------------------------------

import caffe, dlib, io

import matplotlib.pyplot as plt

from skimage import io

# ----------------------------------------------------------------------------
# LOAD THE MEAN IMAGE:
# --------------------

mean_filename='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/mean.binaryproto'
proto_data = open(mean_filename, "rb").read()
a = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
mean  = caffe.io.blobproto_to_array(a)[0]

# ----------------------------------------------------------------------------
# LOAD THE AGE, GENDER NETWORKS:
# ------------------------------

# NOTE: be sure to make the "/mnt/Vancouver/apps" path section per your local,
#       cloned "AgeGenderDeepLearning" GitHub repo

age_net_pretrained='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/age_net.caffemodel'
age_net_model_file='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/deploy_age.prototxt'
age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                       mean=mean.mean(1).mean(1),
                       channel_swap=(2,1,0),
                       raw_scale=255,
                       image_dims=(256, 256))

gender_net_pretrained='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/gender_net.caffemodel'
gender_net_model_file='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/deploy_gender.prototxt'
gender_net = caffe.Classifier(gender_net_model_file, gender_net_pretrained,
                       mean=mean.mean(1).mean(1),
                       channel_swap=(2,1,0),
                       raw_scale=255,
                       image_dims=(256, 256))

# ----------------------------------------------------------------------------
# LABELS:
# -------

#age_list=['(0, 2)','(4, 6)','(8, 12)','(15, 20)','(25, 32)','(38, 43)','(48, 53)','(60, 100)']
age_list=['00-02','04-06','08-12','15-20','25-32','38-43','48-53','60-100']

gender_list=['Male', 'Female']

# ----------------------------------------------------------------------------
# CROP FACE VIA DLIB:
# -------------------

detector = dlib.get_frontal_face_detector()

# Victoria: "img21" is a file (me!) on my system  ;-)
img21 = '/home/victoria/projects/computer_vision/cnn_age_gender_demo/images_uncropped/victoria-50.jpg'
im_name = img21

img = io.imread(os.path.join('./',im_name))

faces = detector(img)

input_image_cropped = caffe.io.load_image(os.path.join('./', im_name))

cropped_face = input_image_cropped[faces[0].top():faces[0].bottom(), faces[0].left():faces[0].right(), :]

h = faces[0].bottom() - faces[0].top()
w = faces[0].right() - faces[0].left()

age_prediction_cropped = age_net.predict([cropped_face])

print('\n\t   predicted age (Dlib-cropped image):', age_list[age_prediction_cropped[0].argmax()])

gender_prediction_cropped = gender_net.predict([cropped_face])
print('\tpredicted gender (Dlib-cropped image):', gender_list[gender_prediction_cropped[0].argmax()])

plt.figure("cropped_face")
plt.imshow(cropped_face)
plt.imsave('cropped_face.png', cropped_face)

# ----------------------------------------------------------------------------
# Plot figures [via "plt.figure()" statements, above]:

plt.show()

victoriastuart commented 7 years ago

Age and gender classification is understandably very challenging! Just Google [microsoft age gender accuracy] ...

While the script works (unless I'm doing something horribly wrong), the age / gender classifications are truly terrible! :-(

With a different version of the code, above, I've tried / compared:

* raw (unscaled) images

* [96x96] aligned / cropped faces from those images, using the Dlib script
  (.../openface/util/align-dlib.py) provided in my OpenFace install; and

* the Dlib code snippet above (larger images: cropped faces)

... all these images (source; two different crops) give different results -- really, no apparent correlation with one another or ground truth! :-(

Ground truth:

* Victoria: female (transsexual), age 55
* Carmine: female, age 42

Age, gender predictions: uncropped vs cropped [96x96] images
------------------------------------------------------------
IMAGE  |  PREDICTION:   AGE     GENDER
------------------------------------------------------------
example_image           00-02   female
carmine-01              08-12   female
carmine-01 (cropped)    38-43   female
carmine-02              15-20   female
carmine-02 (cropped)    38-43   female
carmine-03              48-53   female
carmine-03 (cropped)    00-02   female
carmine-04              08-12   female
carmine-04 (cropped)    08-12   female
carmine-05              00-02   female
carmine-05 (cropped)    25-32   female
carmine-06              38-43   female
carmine-06 (cropped)    38-43   female
carmine-07              08-12   female
carmine-07 (cropped)    25-32     male [misgendered]
carmine-08              25-32   female
carmine-08 (cropped)    08-12   female
carmine-09              60-100  female
carmine-09 (cropped)    08-12     male [misgendered]
carmine-11              25-32   female
carmine-11 (cropped)    25-32   female
carmine-36              00-02   female
carmine-36 (cropped)    38-43   female
---------------------------------------
victoria-01             08-12   female
victoria-01 (cropped)   08-12   female
victoria-03             00-02   female
victoria-03 (cropped)   08-12   female
victoria-04             25-32   female
victoria-04 (cropped)   38-43   female
victoria-05             25-32   female
victoria-05 (cropped)   08-12   female
victoria-06             60-100  female
victoria-06 (cropped)   48-53   female
victoria-07             25-32   female
victoria-07 (cropped)   00-02   female
victoria-08             08-12   female
victoria-08 (cropped)   08-12   female
victoria-09             60-100  female
victoria-09 (cropped)   38-43   female
victoria-10             25-32   female
victoria-10 (cropped)   25-32   female
victoria-50             38-43     male [misgendered]
victoria-50 (cropped)   08-12   female
victoria-24             08-43     male [misgendered]
victoria-24 (cropped)   38-43   female
victoria-39             00-02   female
victoria-39 (cropped)   00-02   female
victoria-40             38-43     male [misgendered]
victoria-40 (cropped)   60-100  female
------------------------------------------------------------

GilLevi commented 7 years ago

Hi @victoriastuart ,

Thanks for sharing the corrected code!

The results indeed don't look accurate at all.

When you initialize the network, why did you use mean.mean(1).mean(1)? When you try mean=mean, do you get an exception?

Thanks! Gil

victoriastuart commented 7 years ago

Hi Gil! In my home version(s) of the script, I actually modified the Caffe source code (io.py), not the script -- it was just easier, for the online post, to share the 'amended script' without explaining the other option (the source code edit):

    # ----------------------------------------------------------------------------
    # LOAD THE AGE NETWORK:
    # ---------------------

    age_net_pretrained='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/age_net.caffemodel'
    age_net_model_file='/mnt/Vancouver/apps/AgeGenderDeepLearning/cnn_age_gender_models_and_data.0.0.2/deploy_age.prototxt'
    age_net = caffe.Classifier(age_net_model_file, age_net_pretrained,
                           mean=mean,                   ## << throws "ValueError: Mean shape incompatible with input shape."
                           #mean=mean.mean(1).mean(1),    ## << CORRECTED LINE!
                           channel_swap=(2,1,0),
                           raw_scale=255,
                           image_dims=(256, 256))

    """THAT ERROR (above),
        ValueError: Mean shape incompatible with input shape.
    is addressed here:

        https://groups.google.com/forum/#!msg/caffe-users/C1J5cO54oRE/bSOT3EViAgAJ

        SO solution! http://stackoverflow.com/questions/30808735/error-when-using-classify-in-caffe

        Change this:
             mean=np.load(mean_file)
        to this:
             mean=np.load(mean_file).mean(1).mean(1)

    UPDATE -- SEE:
    --------------

        https://github.com/BVLC/caffe/issues/2594
        http://stackoverflow.com/questions/30808735/error-when-using-classify-in-caffe

    Victoria: per that SO article I patched the Caffe file:

        /mnt/Vancouver/apps/caffe/python/caffe/io.py

    Either solution works (identically)! While editing this script (although 'local',
    not 'global' solution) is the simplest 'solution,' I went ahead with the
    '/mnt/Vancouver/apps/caffe/python/caffe/io.py' modification, per SO, above.

    I posted my solution here:

        cnn_age_gender_demo: "ValueError: Mean shape incompatible with input shape."
        https://groups.google.com/forum/#!topic/caffe-users/rzUcMs9Hmsc
    """

... so, I ran

mean=mean

not

mean=mean.mean(1).mean(1)

(although I believe I get identical results with either).

I did get an exception for "mean=mean", until I did either of those two edits (script or source code).
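
For anyone else hitting that error, here is roughly what is going on (a sketch only; the path and shapes are assumed from the script above):

    import caffe

    # Assumed path, as in the script above.
    mean_filename = 'mean.binaryproto'
    proto_data = open(mean_filename, 'rb').read()
    blob = caffe.io.caffe_pb2.BlobProto.FromString(proto_data)
    mean = caffe.io.blobproto_to_array(blob)[0]

    print(mean.shape)                  # (3, 256, 256): a full per-pixel mean image
    print(mean.mean(1).mean(1).shape)  # (3,): one mean value per BGR channel

    # The stock caffe Transformer accepts either a per-channel mean of shape (3,)
    # or a mean image that matches the network's input crop size, so passing the
    # raw (3, 256, 256) array raises the ValueError. Averaging over the two
    # spatial axes with mean.mean(1).mean(1) avoids it, as does the patched io.py.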

Thanks again for sharing your work: v. much appreciated!

victoriastuart commented 7 years ago

Hello again! A few quick questions.

img04a : carmine-03a.jpg : 48-53; female  |  source image [color; 700x750]; 42 y.o female

img04b : carmine-03b.png : 48-53; female  |  saved as PNG image [color; 700x750]

img04c : carmine-03c.jpg : 48-53; female  |  resized 50% [350x375], saved as optimized JPG [default, all others unless o/w specified], 80% compression
img04d : carmine-03d.jpg : 48-53; female  |  resized 50% [350x375], saved as progressive JPG, 80% compression

img04e : carmine-03e.jpg : 48-53; female  |  auto-cropped (gthumb photo editor); no resize [466x500]
img04f : carmine-03f.jpg : 48-53; female  |  auto-cropped (gthumb photo editor); resized [288x309]
img04g : carmine-03g.jpg : 48-53; female  |  auto-cropped (gthumb photo editor); resized [96x103]

img04h : carmine-03h.jpg : 08-12; female  |  orig. grayscaled (brightness); [700x750]
img04i : carmine-03i.jpg : 08-12; female  |  orig. grayscaled (average); [700x750]

img04j : carmine-03j.jpg : 48-53; female  |  source flipped horizontally
img04k : carmine-03k.jpg : 48-53; female  |  source flipped vertically!

img04l : carmine-03l.jpg : 04-06; female  |  manual 'loose' crop (retains much color), [314x310]

img04m : carmine-03m.jpg : 25-32; female  |  increase red channel (0 --> 99 on 0-99 range)
img04n : carmine-03n.jpg : 04-06; female  |  decrease red channel (0 --> -99 on 0 - -99 range)
img04o : carmine-03o.jpg : 48-53; female  |  increase red channel (50%: 0 --> 50 on 0-99 range)
img04p : carmine-03p.jpg : 04-06; female  |  increase green channel (50%: 0 --> 50 on 0-99 scale)

Selected images attached, for your reference ... :-)

carmine-03a

carmine-03e

carmine-03h

carmine-03l

carmine-03n

victoriastuart commented 7 years ago

The order of those images (above) is:

Image                                          Prediction
--------------------------------------------------------------------------------
carmine-03a [source; 700x750]                  48-53 female
carmine-03e [gthumb auto-crop: 466x500]        48-53 female
carmine-03h [grayscaled; 700x750]              08-12 female
carmine-03l [manual crop; 314x310]             04-06 female
carmine-03n [red channel decreased; 700x750]   04-06 female

GilLevi commented 7 years ago

Hi Victoria,

The network was trained on color images. It expects color cropped faces. The size doesn't matter since the images are resized to 256x256. Horizontal flips shouldn't change the prediction if you're running it in "oversample" mode, but vertical flips will confuse the network.
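
For reference, "oversample" mode is the default of caffe.Classifier.predict; a quick sketch, reusing the age_net and cropped_face names from the script above:

    # oversample=True (the default) averages the prediction over the four corner
    # crops, the centre crop and their horizontal mirrors (10 crops in total),
    # which is why a horizontally flipped input should give the same label.
    prediction = age_net.predict([cropped_face], oversample=True)

    # oversample=False uses only the single centre crop.
    prediction_single = age_net.predict([cropped_face], oversample=False)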

Have you tried running it with the modified "io.py" code from the project page?

http://www.openu.ac.il/home/hassner/projects/cnn_agegender/io.py

Best, Gil

victoriastuart commented 7 years ago

Hi Gil! The two io.py versions (my patch of my Caffe installation; your patch, from the web link above) appear to be performing identically:

UPDATED io.py:

IMAGE               OLD io.py       NEW io.py
--------------------------------------------------
carmine-03a.jpg     48-53; female   48-53; female
carmine-03b.png     48-53; female   48-53; female
carmine-03c.jpg     48-53; female   48-53; female
carmine-03d.jpg     48-53; female   48-53; female
carmine-03e.jpg     48-53; female   48-53; female
carmine-03f.jpg     48-53; female   48-53; female
carmine-03g.jpg     48-53; female   48-53; female
carmine-03h.jpg     08-12; female   08-12; female
carmine-03i.jpg     08-12; female   08-12; female
carmine-03j.jpg     48-53; female   48-53; female
carmine-03k.jpg     48-53; female   48-53; female
carmine-03l.jpg     04-06; female   04-06; female
carmine-03m.jpg     25-32; female   25-32; female
carmine-03n.jpg     04-06; female   04-06; female
carmine-03o.jpg     48-53; female   48-53; female
carmine-03p.jpg     04-06; female   04-06; female

carmine-01.jpg      08-12; female   08-12; female
carmine-02.jpg      15-20; female   15-20; female
carmine-04.jpg      08-12; female   08-12; female
carmine-05.jpg      00-02; female   00-02; female

victoria-01.jpg     08-12; female   08-12; female
victoria-03.jpg     00-02; female   00-02; female
victoria-04.jpg     25-32; female   25-32; female
victoria-05.jpg     25-32; female   25-32; female
victoria-06.jpg     60-100; female  60-100; female
--------------------------------------------------

OLD io.py : patched by Victoria per A1. in: http://stackoverflow.com/questions/30808735/error-when-using-classify-in-caffe

NEW io.py : downloaded (per Gil Levi): http://www.openu.ac.il/home/hassner/projects/cnn_agegender/io.py

... Thanks, Victoria :-)

grabya commented 7 years ago

I'm having similar results with the original code. Did the paper mention success rates?

victoriastuart commented 7 years ago

Levi, G., & Hassner, T. (2015). Age and gender classification using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (pp. 34-42).

http://www.openu.ac.il/home/hassner/projects/cnn_agegender/CNN_AgeGenderEstimation.pdf

[screenshot: Levi & Hassner 2015, Tables 2 and 3]

[10] E. Eidinger, R. Enbar, and T. Hassner. Age and gender estimation of unfiltered faces. Trans. on Inform. Forensics and Security, 9(12), 2014. pdf

[23] T. Hassner, S. Harel, E. Paz, and R. Enbar. Effective face frontalization in unconstrained images. Proc. Conf. Comput. Vision Pattern Recognition, 2015. pdf

victoriastuart commented 7 years ago

There clearly is an effect of color (the model was trained on color images). I would love to see/have a model trained on face-cropped, grayscale images. Additionally, could the Dlib face-cropping step also be used to automatically flag images that are excessively blurred?
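
On the blur point, one common quick heuristic (not specific to Dlib or to this repo) is the variance of the Laplacian; a sketch with OpenCV, where the threshold is illustrative only and would need tuning:

    import cv2

    def is_too_blurry(image_path, threshold=100.0):
        # The variance of the Laplacian is low when an image has few sharp
        # edges, i.e. when it is likely out of focus or motion-blurred.
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        return cv2.Laplacian(gray, cv2.CV_64F).var() < threshold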

Also, it might help to train separately on age and on gender (properly: sex), as suggested by the tables above (Tables 2, 3). I'm not sure if this was done.

For simplification (realizing that sex and gender are not binary), classification on sex would be binary (male; female), trained on grayscale images. Sex (again, a simplification) is largely determined by biology, and thus agnostic to color (skin tones excepted, vis-a-vis color; but that introduces another classification: race).

Gender -- again a simplification here -- is more of a personal identity and social construct, layered over sex vis-a-vis facial features: hair length/style, accessories (earrings, jewelry, ...), makeup, clothing, etc. would all be useful features to consider, so in that case I (possibly) see the value of color as an included feature.

victoriastuart commented 7 years ago

Very sad to see/report that these SCUMBAGS have plagiarized your very nice work, virtually verbatim. No novel contributions; apparently just an application of your method with some new images/classifications tacked on. Sleazy bastards!! grrrrrrrr

PDF: http://www.ijarcsms.com/docs/paper/volume4/issue2/V4I2-0029.pdf

That "journal" is listed here: https://scholarlyoa.com/?s=IJARCSMS

These so-called "predatory journals" are a huge issue in academia, including bioscience ... apparently also computer science. :-( ... Parasites!

victoriastuart commented 7 years ago

Hey Gil ! I'm moving on to other work, personal research. This was a wonderful introduction to this area of research, methods and applications. I'd like to take this opportunity to thank you and your colleagues for being so wonderfully open and sharing of your research: very refreshing!

A special note of thanks to you, personally, for being so engaging and gracious. Good luck with your Ph.D. studies ... it's a long haul but I have every confidence in you and your success!

Best wishes,

Victoria :-)

-- Dr. Victoria A. Stuart, Ph.D.

GilLevi commented 7 years ago

Hi Victoria,

Thanks for your notes in your previous response. If you're interested, you could easily pick up and implement those ideas and test their effect (I would be glad to assist you if you want). There is TONS of room for follow-up research and improvements to our method.

Yeah, we are aware of the plagiarism of our work. A bit annoying indeed.

You are too kind:) If you would ever feel like revisiting our project (or our second project on emotion classification) please feel free to contact me:)

Best, Gil

victoriastuart commented 7 years ago

Addendum [2016-Dec-01]:

FYR, I implemented the OpenCV Torch7 age_gender demo, which is based on Gil's and Tal's cnn_agegender work (this repo)!

I'll attach two animated GIFs (again lossy: byzanz animated GIF screen captures of the webcam stream). In this limited, "one-off" demo the predictions -- notably the ages -- are much more stable on the color photo/image than on the grayscaled photo. [I'm without makeup today, so I'm camera-shy! :-p ]

This makes sense as these models were trained on Tal Hassner's Adience dataset, i.e. on color images.

I just thought those following this Issue thread/discussion would be interested, plus a shout-out to Gil and the OpenCV crew for this work/efforts. Very cool! :-)

byzanz capture 2016-12-01 10 37 28

byzanz capture 2016-12-01 10 45 32

I'll do a live capture this weekend when my gf (Carm) visits! :-)

Links:

GilLevi commented 7 years ago

Very cool indeed!! Thanks for sharing :)

victoriastuart commented 7 years ago

For those interested in the topic, recent discussions of color (effects of) and neural nets here:

[D] CNN object recognition: grayscale vs RGB : MachineLearning https://www.reddit.com/r/MachineLearning/comments/5bheof/d_why_is_the_last_layer_in_yolo_a_fully_connected/

... cites:

How does convolutional Neural Network handle color images in object recognition? - Quora https://www.quora.com/How-does-convolutional-Neural-Network-handle-color-images-in-object-recognition?srid=ddVE