JACKYLUO1991 / Face-skin-hair-segmentaiton-and-skin-color-evaluation

segmentation and color classification
Apache License 2.0

data_loader #8

Closed · Echosanmao closed this issue 3 years ago

Echosanmao commented 4 years ago

hello~ Thanks for your reply. Another question I want to ask: you said you "train three models on three different datasets", but the code only covers the CelebA dataset, so I would like to know how you preprocess the LFW and Figaro1k datasets. Could you share the related "data_loader" code? Thank you a lot!

Echosanmao commented 4 years ago

What I want to know is whether there is any difference in how you process the three datasets. At present, I know that the masks of CelebA and LFW are different, so I would like the details of how you process them. In addition, I could not reach you by email; is the email address wrong?

JACKYLUO1991 commented 4 years ago

@Echosanmao I'm sorry, I have been in a hurry to finish my graduation thesis recently and did not reply to you in time. The CelebHair dataset keeps its original scale; the dataset provider had already modified it. The other two follow a top-down method: first extract the face bounding box, then crop to a standard selfie size (the box's width and height are each expanded by 0.3).
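For what it's worth, here is a minimal sketch of that crop step, not the author's actual code: it assumes a face box `(x, y, w, h)` from any detector, and interprets the 0.3 expansion as symmetric padding around the box.

```python
# Sketch of the "expand the face box by 0.3, then crop" step described above.
# The box format, symmetric padding, and file name are assumptions for illustration.
from PIL import Image

def crop_selfie(image: Image.Image, box, expand: float = 0.3) -> Image.Image:
    """Expand a face box by `expand` in width and height, clamp to the image,
    and return the cropped region."""
    x, y, w, h = box
    dx, dy = w * expand / 2, h * expand / 2          # pad half the margin on each side
    left   = max(0, int(x - dx))
    top    = max(0, int(y - dy))
    right  = min(image.width,  int(x + w + dx))
    bottom = min(image.height, int(y + h + dy))
    return image.crop((left, top, right, bottom))

# Hypothetical usage with a detector-provided box:
# img = Image.open("lfw_sample.jpg")
# selfie = crop_selfie(img, box=(60, 40, 120, 150))
```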

Echosanmao commented 4 years ago

OK, thank you very much. Could you tell me how you split the three datasets? LFW: 1500 train, 500 validation, 927 test. How about CelebA and Figaro1k?

JACKYLUO1991 commented 4 years ago

For CelebHair, you can visit https://pan.baidu.com/s/1_bV2wnxnKV6lT7-QpYgYwQ (password: xa5t). Since I am no longer in the internship study group, the data is gone, so for Figaro1k I am very sorry that I cannot provide it. However, you can replace it with another dataset, e.g. CelebAMask-HQ.

JACKYLUO1991 commented 4 years ago

In addition, while revising the paper I reproduced the experiment and found a problem with the accuracy measurements. I have corrected it in the paper and will share the latest results after publication. Overall, the accuracy is still comparable to lightweight networks.

Echosanmao commented 4 years ago

OK, thank you for your advice! Today I am reading your code. In metric.py, I find that the definition of mean_accuracy looks exactly like the definition of recall, which I can't quite understand; could you explain it to me? In addition, in train.py you use the test data to validate the model's performance in every epoch; won't this make the final test metrics a little high?

Echosanmao commented 4 years ago

Another question: I got the CelebHair dataset from the Baidu Web Disk, but it does not contain validation data, so I want to know how much data you set aside for validation.

JACKYLUO1991 commented 4 years ago

@Echosanmao The original data owner uses 20% of the data for validation, and the validation set is also the test set. Maybe he did it because the amount of data was too small? I asked him about it before by email.
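A minimal sketch of that split, assuming a flat list of image paths and using scikit-learn; the 80/20 ratio and the "validation set is also the test set" convention come from the comment above, everything else (directory layout, seed) is illustrative.

```python
# Sketch of an 80/20 split where the validation set doubles as the test set.
# Directory layout and random seed are assumptions for illustration only.
from glob import glob
from sklearn.model_selection import train_test_split

all_images = sorted(glob("CelebHair/images/*.jpg"))  # hypothetical folder name

train_files, val_files = train_test_split(all_images, test_size=0.2, random_state=42)
test_files = val_files  # the original data owner reuses the validation set for testing
```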

Echosanmao commented 4 years ago

> @Echosanmao The original data owner uses 20% of the data for validation, and the validation set is also the test set. Maybe he did it because the amount of data was too small? I asked him about it before by email.

Maybe~ And what about my question on the code? emmmm......

JACKYLUO1991 commented 4 years ago

@Echosanmao mean_accuracy is just the definition of recall? Sorry, I think recall = TP / (TP + FN), which is mainly used in binary segmentation.

Echosanmao commented 4 years ago

emmmm, sorry. In my understanding, n_ii (= TP) and t_i = ground truth = TP + FN in the definition of mean_accuracy; am I wrong?

JACKYLUO1991 commented 4 years ago

@Echosanmao t_i = Σ_j n_ij is the total number of pixels of class i, and mean accuracy = (1 / n_cl) Σ_i n_ii / t_i. You can refer to the paper "Fully Convolutional Networks for Semantic Segmentation" for these metrics.
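For reference, a minimal sketch of that FCN-style mean accuracy computed from a confusion matrix with numpy (not the repo's metric.py); it also makes explicit why mean accuracy coincides with per-class recall averaged over classes, which is the point being discussed.

```python
import numpy as np

def confusion_matrix(pred, gt, n_cl):
    """n[i, j] = number of pixels with ground-truth class i predicted as class j."""
    mask = (gt >= 0) & (gt < n_cl)
    return np.bincount(n_cl * gt[mask] + pred[mask],
                       minlength=n_cl ** 2).reshape(n_cl, n_cl)

def mean_accuracy(pred, gt, n_cl):
    """(1 / n_cl) * sum_i n_ii / t_i, with t_i = sum_j n_ij (FCN paper)."""
    n = confusion_matrix(pred, gt, n_cl)
    n_ii = np.diag(n)                       # correctly classified pixels per class
    t_i = n.sum(axis=1)                     # total ground-truth pixels per class
    per_class = n_ii / np.maximum(t_i, 1)   # exactly per-class recall
    return per_class.mean()

# Toy usage with hypothetical 3-class label maps:
# gt   = np.random.randint(0, 3, size=(256, 256))
# pred = np.random.randint(0, 3, size=(256, 256))
# print(mean_accuracy(pred, gt, n_cl=3))
```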

Echosanmao commented 4 years ago

OK. Thank you!!!

JACKYLUO1991 commented 4 years ago

@Echosanmao Thanks for your support, bro.