Closed: BruceWeiii closed this issue 2 years ago
Hi, you can follow these steps to process your data:

1. Resize the images (see the resize step in test.py); after processing, you need to resize and pad the mask image back to its original resolution.
2. Check the README in CelebAMask-HQ to see what the labels mean. In the final mask, the parts related to labels 1-17 are set to 1, and everything else to 0.
3. Run the Multi-PIE training set part (process_multipie_train) in process.py. The landmarks and the processed images and masks are then saved in the save_dir.
4. Processing code for the MultiPIE testing images and the LFW testing images is also provided.

You can get MultiPIE from their website. If you want to process other datasets, you should:

- Modify the s2f function, which is used to get the frontal image name.
- Change the image center coordinate in the process_multipie_train function.
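The mask processing described above (labels 1-17 map to 1, everything else to 0, then pad the mask back to the original resolution) can be sketched roughly like this. This is a minimal sketch, not the repo's actual code: the exact placement used when padding back (top-left anchoring here) is an assumption.

```python
import numpy as np

def binarize_mask(label_map):
    """Keep CelebAMask-HQ labels 1-17 (face parts) as 1, everything else as 0."""
    return ((label_map >= 1) & (label_map <= 17)).astype(np.uint8)

def pad_mask(mask, orig_h, orig_w):
    """Zero-pad a (smaller) mask back to the original resolution.
    Anchoring the content at the top-left corner is an assumption here;
    process.py may place it differently."""
    out = np.zeros((orig_h, orig_w), dtype=mask.dtype)
    out[: mask.shape[0], : mask.shape[1]] = mask
    return out
```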
Hi, thank you very much for your reply. When I run get_landmarks.py, I get HTTP Error 403: Forbidden. I have changed the Key and API Secret in get_landmarks.py. Will this have any impact on the result?
Hi, HTTP Error 403: Forbidden is caused by frequent API requests; our code will retry the failed request until all images are processed. It will not impact the result.
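The retry behaviour described here can be sketched as follows. `send` is a placeholder for whatever function issues one Face++ request and reports success; it is not the repo's actual API.

```python
import time

def request_with_retry(send, max_tries=None, delay=1.0):
    """Re-issue a request until it succeeds, sleeping between attempts.
    `send` is any callable returning (ok, payload); on a 403 it would
    return (False, None) and we simply try again."""
    tries = 0
    while True:
        ok, payload = send()
        if ok:
            return payload
        tries += 1
        if max_tries is not None and tries >= max_tries:
            raise RuntimeError("giving up after %d attempts" % tries)
        time.sleep(delay)
```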
Hi, sorry to bother you again. I used the multipie_crop_128 dataset for testing, but I got pictures like this after running process.py.
input:
target:
And the test result is:
Is this a problem with the data? Could you tell me how to fix it? Thank you very much; looking forward to your reply.
Hi, there is something wrong with the aligned image. Could you provide one of your landmark JSON files and your processing code? These can help me find the problem.
By the way, is the resolution of your original MultiPIE images 640x480? I suppose you have re-aligned the already-aligned 128x128 images?
Hi, I don't have the original MultiPIE dataset images; I used aligned 128x128 images shared by others on the Internet. Here are my landmark JSON files and processing code: json and process file.zip
Is there a reason the landmarks cannot simply be uploaded, rather than everyone re-generating them from Face++?
Hi, the black area in the image is caused by the different alignment method. In our alignment method, the final cropped face area is larger than what your aligned 128x128 image contains, so black areas appear when you align it with our method. Using the original MultiPIE dataset for alignment avoids this problem.
Hi, that's my fault. I have uploaded the landmarks and masks to facilitate data processing; you can download them from GoogleDrive or BaiduNetDisk (l98p).
Hi, sorry to bother you again. My Multi-PIE dataset is incomplete, and I wonder if you could share part of the dataset. If this is allowed, could you share the test set of Multi-PIE? Thank you very much; I look forward to your reply.
Sorry, according to the license, I cannot share the dataset with you. You can also use your own aligned dataset to prepare the landmarks and masks, and train FFWM on it.
Hi, I also encountered the same problem; it keeps returning 403 Forbidden. May I ask how you solved it, and whether I need to pay for extra permissions?
Hi,
Face++ provides one free key per user, so you don't need to spend money. 403 Forbidden is caused by frequent API requests, and our code will retry the failed request until all images are processed. It may take many attempts to process one image successfully.
You can check whether there are processed landmarks in your json_path. If there are, your key and skey are correct; just wait patiently. Otherwise, check your key, skey, and request url.
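The check on json_path suggested here can be done with a small helper like the following. It assumes one landmark JSON file is written per processed image, which is an assumption about the output layout, not confirmed by the thread.

```python
from pathlib import Path

def processed_images(json_path):
    """Return the (sorted) stems of images that already have a landmark
    JSON in json_path. A non-empty result means the key/skey work and
    the script is making progress despite intermittent 403s."""
    return sorted(p.stem for p in Path(json_path).glob("*.json"))
```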
Hello, could you give me an example of working code? I have tried many times without success. Thank you very much.
Hi,
I have run the code. Here is my result:
The extracted landmarks are saved in the lm folder.
Maybe you can check the http_utl in face_plus_plus.py, or the key and skey in get_landmarks.py. If you still get the same error after that, you can contact me by email and I will share my code with you.
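For reference, a minimal sketch of what a Face++ Detect request looks like. This is a hypothetical reconstruction based on Face++'s public v3 API documentation, not the repo's actual get_landmarks.py; the key/skey values are placeholders.

```python
# Hypothetical sketch, NOT the repo's code. Endpoint and field names
# come from Face++'s public Detect API v3 docs; KEY/SKEY are placeholders.
HTTP_URL = "https://api-us.faceplusplus.com/facepp/v3/detect"

def build_detect_params(key, skey):
    """Assemble the form fields sent alongside the image file."""
    return {
        "api_key": key,
        "api_secret": skey,
        "return_landmark": 2,  # per Face++ docs: 1 = 83 points, 2 = 106 points
    }

# Sending the request would look roughly like this (needs `requests`):
# resp = requests.post(HTTP_URL,
#                      data=build_detect_params(KEY, SKEY),
#                      files={"image_file": open(img_path, "rb")})
```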
Hello, my QQ number is 840549388. Can you add me? Thank you very much.
Isn't there an easy solution, to just upload the result images? Then everyone wouldn't have to go through this arduous process, and compute wouldn't be wasted generating the results thousands of times.
Hi,
For the MultiPIE dataset, I have uploaded the landmarks and masks to facilitate data processing, and for the LFW dataset, I have uploaded the aligned images. You can download them from GoogleDrive or BaiduNetDisk (l98p).
For a customized dataset, you have to run the steps above to process your own data; this cannot be skipped.
Hi, your work is very good! I encountered some problems during data processing: I don't understand how to get the masks folder and the landmarks.npy file. Could you tell me more details about the data processing? And could you share your MultiPIE dataset? Thank you very much; I look forward to your reply.