# FaReT

GNU General Public License v3.0

Face Research Toolkit: A free and open-source toolkit of three-dimensional models and software to study face perception.

Please use the following reference when you use FaReT in your research:

Hays, J. S., Wong, C., & Soto, F. (2020). FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behavior Research Methods, 52(6), 2604-2622.

If you find FaReT useful for your research, please consider supporting the developers who maintain and develop MakeHuman, the free and open-source 3D computer graphics software that FaReT uses to create face stimuli. You can do that on their Patreon page.

# Table of contents

* Installation
  * Install pre-requisites
  * Download FaReT
  * MakeHuman plugins
  * GIMP Plugin
* "Morphing" in 3D shape space
  * Interpolation (i.e., "morphing")
    * Total Frames
    * Interpolation Frames
    * Interpolation Settings
  * Extrapolation (i.e., "caricaturing")
* Creating average models
* Standardizing models
* Creating dynamic animations from render sequences

# Creating dynamic animations from render sequences

## Using GIMP plugin to create GIF files

### GUI

If you only need to produce one GIF file, this is the easiest way to do it.

### Python-Fu

If you want to produce several GIF files, this is the best way to do so. Call the function:

```python
mass_makegifs("C:/path-to-png-folders/", "C:/path-to-png-folders/MH{0:04d}.gif", 33.33333)
```
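The third argument to mass_makegifs is presumably the per-frame duration in milliseconds (33.33333 ms corresponds to 30 frames per second). A small helper for converting a target frame rate into that value (the name `fps_to_ms` is our own illustration, not part of FaReT):

```python
def fps_to_ms(fps):
    """Convert a frame rate (frames per second) into a per-frame duration in ms."""
    return 1000.0 / fps

# 30 fps -> ~33.33 ms per frame, matching the mass_makegifs example above
print(fps_to_ms(30))
```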


## Using ImageJ to create AVI files
* Sometimes you might want to create AVI files rather than GIFs, because AVIs are easier to display in some experimental software (e.g., PsychoPy) and/or because they allow more control over the number of frames per second in the resulting animation.
* Download ImageJ from [this website](https://imagej.net/Downloads) and run it.
* Drag the folder containing the sequence of images previously created with FaReT and drop it on the ImageJ window.
* A prompt will appear asking if you want to open all the images in the folder as a stack. Click on "Yes".
* Go to **File > Save as > AVI**
* Choose your preferred compression and animation speed (frames per second) and proceed.
* Name your file and save.

# Communicating with PsychoPy to render faces online
The Socket Render plugin lets you control MakeHuman from PsychoPy or any other Python project.

## PsychoPy Installation
Copy these files next to your PsychoPy experiment file (or into the site-packages library that PsychoPy uses):
* communicator.py
  * MakeHuman also needs communicator.py, so do not use Cut to move it out of 4_socket_render: Copy it.
* py_client.py

## Setting up MakeHuman
* In MakeHuman, navigate to the Rendering tab at the top of the window.
* Select the "Socket Render" subtab.
* When you are ready to open a connection to PsychoPy, push the button labeled "Socket Render", which starts a local server that is waiting for the py_client to connect.  
  * MakeHuman will appear to "stall" while it is waiting for input from the Python client: you cannot interact directly with MakeHuman's GUI while it is taking instructions from the Python client.
* Make sure that you start the server before running the PsychoPy experiment.

## Setting up PsychoPy
Within PsychoPy, you need to import the PythonMHC communication class:

```python
from py_client import PythonMHC
makehuman = PythonMHC()
```

If you want to avoid having to restart MakeHuman every time you exit a PsychoPy run, add this to the beginning of the experiment script as well:

```python
import atexit
# When the session ends, close the link but keep the server alive, waiting
# for the next PsychoPy run. At the end of a run, makehuman.close() sends
# the string 'exit' to tell MakeHuman's server to wait for another
# connection from PsychoPy.
atexit.register(makehuman.close)
```

Now you have a connection with MakeHuman from PsychoPy! The most important function in py_client.py is execute_MH(). This is an example of how you could load a model:

```python
filename = "C:/Example/Model.mhm"  # the absolute path
makehuman.execute_MH("gui3d.app.loadHumanMHM", False, False, filename)
```

However, for your convenience, some functions -- like load_model() -- are set up ahead of time:

```python
# makehuman.* functions almost all wrap around execute_MH()
makehuman.load_model(filename)
# make the camera look at the face
makehuman.setFaceCamera()

# zoom out by "10"
makehuman.zoom(10)
# zooming in uses negative numbers:
# makehuman.zoom(-10)

# set the orbit camera 45 degrees to the left
makehuman.setCamera(0, 45)
# retrieve the shape parameter dictionary
params = makehuman.get_model_params()

# alter the params so that the model has a large forehead
params['forehead/forehead-scale-vert-decr|incr'] = 1
# set and update the model's shape parameters
makehuman.set_model_params(params)

emotion_file = "C:/Example/Emotion.mhpose"
# load the expression parameters for neutral and some emotion
# (as specified by an mhpose file)
neutral, emotion = makehuman.load_expression(emotion_file)
# set an emotional expression at a specific percentage:
# 0.0 would be purely neutral, 100.0 would be "fully" expressing the emotion
makehuman.set_expression(neutral, emotion, 50.0)
```

```python
# you can specify how you want MakeHuman to render each stimulus
render_settings = dict()
render_settings['AA'] = True  # anti-aliasing: smoothing by rendering at a larger scale and then downscaling
render_settings['dimensions'] = (256, 256)  # size of the rendered image in pixels
render_settings['lightmapSSS'] = False  # subsurface-scattering lighting effects (nicer, but slower to render)

save_location = "C:/Example/Image_Folder/"
image_number = 0
# ask MakeHuman to render and save the current model to the save location,
# and wait until MakeHuman finishes before moving on
image_path = makehuman.get_render(save_location, render_settings, image_number)
# you only need to increment the image number if you want to preserve the
# previously rendered image for the next render
# (or if you are going to render multiple images in one trial):
# image_number += 1
```

```python
# image_path can be given to ImageStim components as long as the Image is set every repeat.

# expression parameters are separate from shape parameters.
# if you do not want to load mhpose files, a neutral expression always uses 0's
neutral = dict(RightInnerBrowUp=0)
brow_expression = dict(RightInnerBrowUp=1)
# however, you don't _have_ to use neutral as a starting point, so you can
# change what the interpolation percentages mean by altering the starting point:
# other = dict(RightInnerBrowUp=.5)

# the arguments are: starting point, ending point, percentage
makehuman.set_expression(neutral, brow_expression, 75)
# if you just want to set an expression without interpolating,
# you can use the same dictionary twice at 100 percent:
# makehuman.set_expression(brow_expression, brow_expression, 100)
```
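The percentage argument to set_expression appears to act as a linear interpolation between the two parameter dictionaries. A sketch of that arithmetic (our own illustration, not FaReT's internal code):

```python
def interpolate_expression(start, end, percent):
    """Linearly interpolate between two expression-parameter dictionaries.

    Illustrative sketch of what a percentage between a starting and an
    ending expression means; not FaReT's actual implementation.
    Missing keys are treated as 0.
    """
    keys = set(start) | set(end)
    return {k: start.get(k, 0) + (end.get(k, 0) - start.get(k, 0)) * percent / 100.0
            for k in keys}

neutral = dict(RightInnerBrowUp=0)
brow_expression = dict(RightInnerBrowUp=1)
print(interpolate_expression(neutral, brow_expression, 75))  # {'RightInnerBrowUp': 0.75}
```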

About camera controls:

```python
# rotateCamera is relative to the current position, while setCamera is
# absolute (it takes the current position into account and negates it).
# Doing:
makehuman.setCamera(0, 0)
makehuman.rotateCamera(0, 45)
makehuman.rotateCamera(0, 45)
# is the same as:
makehuman.setCamera(0, 90)
```
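The equivalence above can be checked with a toy simulation of the camera state (our own sketch for illustration, not MakeHuman code):

```python
class ToyCamera:
    """Minimal stand-in for the orbit camera, tracking (pitch, yaw) only."""
    def __init__(self):
        self.angles = (0.0, 0.0)

    def setCamera(self, pitch, yaw):
        # absolute: the current position is negated, then the target applied
        self.angles = (float(pitch), float(yaw))

    def rotateCamera(self, pitch, yaw):
        # relative: added to the current position
        self.angles = (self.angles[0] + pitch, self.angles[1] + yaw)

a, b = ToyCamera(), ToyCamera()
a.setCamera(0, 0); a.rotateCamera(0, 45); a.rotateCamera(0, 45)
b.setCamera(0, 90)
print(a.angles == b.angles)  # True
```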

More about shape parameters:

```python
# shape parameter dictionaries do not need to be loaded from .mhm files
params = dict()
params['forehead/forehead-temple-decr|incr'] = 1
# this does not override other, unspecified shape parameters
makehuman.set_model_params(params)
```

Expression parameters:

```python
# expression parameter dictionaries do not need to be loaded from .mhpose files either
brow_expression = dict()
brow_expression['LeftInnerBrowUp'] = 1
# unlike shape parameters, this DOES override other, unspecified expression parameters
makehuman.set_expression(brow_expression, brow_expression, 100)
```

IMPORTANT:

  1. MakeHuman does not recognize numbers created with NumPy (types like np.int64 or np.float64). If you created parameter values using NumPy functions, convert them with float(your_numpy_number) before putting them into the dictionaries you send to MakeHuman.
  2. Use absolute paths for your model files and for the folder where your rendered stimuli will be saved. Relative paths sometimes produce errors.
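A minimal illustration of point 1 (NumPy is assumed to be installed; the parameter name is just the example used earlier):

```python
import numpy as np

value = np.mean([0.8, 1.0])  # an np.float64, not a plain Python float
params = dict()
# convert to a plain Python float before handing the value to MakeHuman
params['forehead/forehead-scale-vert-decr|incr'] = float(value)
print(type(params['forehead/forehead-scale-vert-decr|incr']))  # <class 'float'>
```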

If you want to kill the server without terminating MakeHuman's process, you can send the string 'shutdown' to resume MakeHuman's normal GUI-based operations:

```python
# unlike when MakeHuman receives the 'exit' string (which only indicates that
# the PsychoPy/Python client has left), shutting down the MakeHuman server
# means you will have to click the "Socket Render" button again before
# starting the next PsychoPy run
makehuman.send('shutdown')
```

## Generating a random face model in each trial and rendering it online

The code below uses the random face generator plugin to create a random face model, then opens a connection with MakeHuman to render the created model. It can be used to generate and display a different random face in each trial of an experiment, with control over which features are fixed vs. random. Usually you would generate faces ahead of the experiment using the random face generator plugin within MakeHuman; the example below is useful when there are too many random faces to pre-render (e.g., trial-unique stimuli combined with manipulation of other factors, such as expression).

```python
# the face_generator.py file is from the face generator plugin
import face_generator as fg
import numpy as np
import os
from glob import glob
# the socket plugin comes with py_client.py
from py_client import PythonMHC

# first, set up your directories and load your pre-existing sample models:
# the PsychoPy project directory
path = "C:\\Users\\jason\\Documents\\_Research\\makehuman\\socket_tester"
# where are you saving your mhm files?
out_model_path = os.path.join(path, "generated_models")

# where are the pre-established models (aside from the average identity)?
model_samples = os.path.join(path, "sample_models")
models_list = glob(os.path.join(model_samples, "*.mhm"))

# where are you saving your images?
img_path = os.path.join(path, "renders")

# set the path/name of your average identity
avg_path = os.path.join(path, "identity_average.mhm")
# load the average identity's parameters
avg = fg.read_params(avg_path)

# load the other models' parameters
full_params = [fg.read_params(f) for f in models_list]

# get the keys, the average, and the rest of the models
all_stuff = fg.get_ordered_values(avg, *full_params)
# the keys for the features being changed
keys = all_stuff[0]
# the average face parameters (similar to avg, but it includes 0's for key
# values that exist in the other model files)
avg_face = all_stuff[1]
face_arr = np.array(all_stuff[2:])
radius, face_arr = fg.set_faces_to_radius(avg_face, face_arr)

# get ready to actually make models and render things.
# what model_number are you on now?
model_number = 0
# these are the render settings; you may want a different image size
size = (300, 300)
settings = dict(AA=True, dimensions=size, lightmapSSS=True)

# start the socket client (do this after you have already pressed the
# Socket Render button in MakeHuman)
makehuman = PythonMHC()
```
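A generic sketch of the radius idea above (an assumption based on the names and usage of set_faces_to_radius and make_new_face, not the plugin's actual code): the sample faces are rescaled to a common distance from the average in parameter space, and a new face can then be drawn at that same distance in a random direction.

```python
import numpy as np

def faces_to_radius(avg_face, face_arr):
    """Rescale each face so it lies at the mean distance from the average face."""
    diffs = face_arr - avg_face
    dists = np.linalg.norm(diffs, axis=1)
    radius = dists.mean()
    scaled = avg_face + diffs * (radius / dists)[:, None]
    return radius, scaled

def random_face(avg_face, radius, rng):
    """Draw a random face at the given distance from the average face."""
    direction = rng.normal(size=avg_face.shape)
    direction /= np.linalg.norm(direction)
    return avg_face + radius * direction

rng = np.random.default_rng(0)
avg = np.zeros(5)                      # toy 5-parameter average face
samples = rng.normal(size=(4, 5))      # toy sample faces
radius, scaled = faces_to_radius(avg, samples)
new_face = random_face(avg, radius, rng)
print(np.isclose(np.linalg.norm(new_face - avg), radius))  # True
```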

```python
# make and render 1 face per trial or stimulus
# (you could use this "try-except" in a code component before a presentation
#  trial routine, or put it into a loop to make stimulus models and images
#  ahead of time)
try:
    # make a model (depending on your experiment, you could make a bunch
    #  ahead of time, but then you may as well use the Generate Faces plugin
    #  on the GUI instead)
    faceX = fg.make_new_face(avg_face, face_arr)

    # save the model to model_path
    model_path = os.path.join(out_model_path, "face{0:04d}.mhm".format(model_number))
    fg.write_mimic_file(avg_path, model_path, keys, faceX)

    # load the model into MakeHuman
    makehuman.load_model(model_path)
    # render the model; use the "image_path" variable in a PsychoPy ImageStim
    # image instead of printing it
    image_path = makehuman.get_render(img_path, settings, model_number)
    # print(image_path)

    # get ready for the next model
    model_number += 1

except Exception as e:
    print(e)
    makehuman.send("shutdown")

# at the end of the experiment (or the end of your code), shut down and close
# the socket so that MakeHuman functions normally until you hit the Socket
# Render button again
makehuman.send("shutdown")
```