albertpumarola / GANimation

GANimation: Anatomically-aware Facial Animation from a Single Image (ECCV'18 Oral) [PyTorch]
http://www.albertpumarola.com/research/GANimation/index.html
GNU General Public License v3.0

about the training data #45

Open jackylee1 opened 6 years ago

jackylee1 commented 6 years ago

I pulled OpenFace with Docker and tested it with commands like

build/bin/FaceLandmarkImg -f sample1.jpg
build/bin/FaceLandmarkImg -f sample2.jpg

and so on. This produced a directory named processed containing several files per image: sample1_aligned, sample1_of_details.txt, sample1.csv, sample1.hog, sample1.jpg (with annotations), sample2_aligned, sample2_of_details.txt, sample2.csv, sample2.hog, sample2.jpg (with annotations), etc.

Then I wanted to generate aus_openface.pkl with

python data/prepare_au_annotations.py

but that command needs some arguments, so instead I ran

python data/prepare_au_annotations.py --input_aus_filesdir processed --output_path outpkl

using the processed directory produced above. This gave me a file named aus.pkl, not aus_openface.pkl as you described, so in order to continue I simply renamed it to aus_openface.pkl.

Next I made a directory named mydata containing an imgs folder, aus_openface.pkl, train_ids.csv and test_ids.csv. For the imgs folder I don't know whether to put in the original sample1.jpg, sample2.jpg, ... or the images inside the sampleN_aligned folders; each aligned folder contains a different image but under the same name, face_det_000000.bmp, so I am quite confused.

Which images should I put in the imgs folder, and is it OK to just rename the pkl? Did I do anything wrong in the process? Forgive my poor expression, and thank you for your patient guidance.
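
Since the main uncertainty is whether the renamed pickle, the imgs folder and the id lists line up, here is a minimal sanity-check sketch. It assumes the pickle produced by prepare_au_annotations.py is a dict keyed by image basename (not confirmed by this thread), and the mydata/... paths are just the layout described above:

```python
import os
import pickle

# Placeholder paths matching the layout described above -- adjust to your own.
PKL_PATH = "mydata/aus_openface.pkl"
IMGS_DIR = "mydata/imgs"
TRAIN_IDS = "mydata/train_ids.csv"

# Assumption: the pickle is a dict mapping image basenames (no extension)
# to their AU activation vectors, as produced by prepare_au_annotations.py.
with open(PKL_PATH, "rb") as f:
    aus = pickle.load(f)

print("entries in pkl:", len(aus))
print("example keys:", list(aus.keys())[:5])

# The ids in train_ids.csv and the files in imgs/ should match the pickle keys;
# anything reported missing here will fail at training time.
img_names = {os.path.splitext(n)[0] for n in os.listdir(IMGS_DIR)}
with open(TRAIN_IDS) as f:
    train_ids = {os.path.splitext(line.strip())[0] for line in f if line.strip()}

print("ids missing an image file:", train_ids - img_names)
print("ids missing an AU annotation:", train_ids - set(aus.keys()))
```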

DHPO commented 5 years ago

Use build/bin/FaceLandmarkImg -f sample1.jpg -aus, otherwise it will also output other attributes and prepare_au_annotations.py cannot handle them properly.
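
If you have many images, this is easy to script. A minimal sketch, taking only the -f and -aus flags from the command above and assuming your images sit in a samples/ folder next to the OpenFace binary:

```python
import glob
import subprocess

# Run the command from the comment above over every sample image, adding -aus
# so that only the Action Unit columns end up in the CSVs that
# data/prepare_au_annotations.py will read.
for img in sorted(glob.glob("samples/*.jpg")):
    subprocess.run(["build/bin/FaceLandmarkImg", "-f", img, "-aus"], check=True)
```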

c1a1o1 commented 5 years ago

How long does it take to apply for the EmotioNet dataset?

jangho2001us commented 4 years ago

> How long does it take to apply for the EmotioNet dataset?

Hi, preprocessing EmotioNet with OpenFace is very slow... I strongly recommend writing a multiprocessing script in Python. In addition, OpenFace does not work correctly on some images because they are too small for a face to be detected.