Open late347 opened 3 years ago
issue 1
$ cat *.xml | grep "<name>"
<name>invalid_face</name>
<name>masked_face</name>
<name>unmasked_face</name>
In addition to masked_face, unmasked_face, and incorrectly_masked_face, we also have invalid_face.
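If a pipeline only expects the three mask classes, one option (a sketch of my own, not part of the released tooling) is to either add invalid_face to label_map.pbtxt or strip those objects from the Pascal VOC XML annotations before generating TFRecords. A minimal version of the second approach:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# The three classes the TensorFlow label map is assumed to contain.
KEEP = {"masked_face", "unmasked_face", "incorrectly_masked_face"}

def strip_invalid_faces(xml_dir: str) -> None:
    """Remove <object> entries whose <name> is not in KEEP (e.g. invalid_face)."""
    for xml_file in Path(xml_dir).glob("*.xml"):
        tree = ET.parse(xml_file)
        root = tree.getroot()
        # Iterate over a copy so removal is safe while looping.
        for obj in list(root.findall("object")):
            if obj.findtext("name") not in KEEP:
                root.remove(obj)
        tree.write(xml_file)
```

Run once over the annotations folder before invoking generate_tfrecord.py and the KeyError: 'invalid_face' should no longer occur.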
issue 2 The bounding box coordinates can be taken as integer values.
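Since the annotations store pixel coordinates as whole-valued floats (e.g. 919.0), a plain int() cast after parsing is enough; no rounding scheme is needed. A minimal sketch (the helper name is my own):

```python
import xml.etree.ElementTree as ET

def parse_box(obj: ET.Element) -> tuple:
    """Read a Pascal VOC <bndbox> and cast the whole-valued floats to int."""
    box = obj.find("bndbox")
    # int(float(...)) handles values serialized as "919.0".
    return tuple(int(float(box.findtext(tag)))
                 for tag in ("xmin", "ymin", "xmax", "ymax"))
```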
issue 3 We apologize for that. We took the WIDERFACE dataset as it is.
issue 1. I'm using the TensorFlow Object Detection API and I receive an error with your dataset because of an unexpected class label in the data. The error occurs when running the TFRecord generation script as follows:
python Tensorflow/scripts/generate_tfrecord.py -x Tensorflow/workspace/images/FMLD_cleaned_dataset/test -l Tensorflow/workspace/annotations/label_map.pbtxt -o Tensorflow/workspace/annotations/test.record -c Tensorflow/workspace/annotations/csv_train.csv
File "Tensorflow/scripts/generate_tfrecord.py", line 101, in class_text_to_int
    return label_map_dict[row_label]
KeyError: 'invalid_face'
I used my own Python script to make a one-to-one mapping from your provided "FMLD_annotations" folder of XML label files onto the full set of images from the combined WIDER and MAFA datasets. The mapping was done by matching the filenames of the XML files to the filenames of the JPG images, since I don't have MATLAB on my computer.
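The filename-matching step described above can be sketched without MATLAB in a few lines of Python (directory arguments and the function name are my own, not from the released scripts):

```python
from pathlib import Path

def match_annotations(xml_dir: str, img_dir: str) -> dict:
    """Map each XML annotation file to the JPG image sharing its basename."""
    # Index images by stem, e.g. "0_Parade_0001" for "0_Parade_0001.jpg".
    images = {p.stem: p for p in Path(img_dir).glob("*.jpg")}
    return {xml_path: images[xml_path.stem]
            for xml_path in Path(xml_dir).glob("*.xml")
            if xml_path.stem in images}
```

Annotations without a matching image are simply dropped from the returned dict, which makes it easy to spot gaps by comparing counts.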
How many classes does the dataset contain? It seems to be more than the three I put into my labelmap.pbtxt in TensorFlow: with mask (name: masked_face), without mask (name: unmasked_face), and with mask worn incorrectly (name: incorrectly_masked_face).
issue 2. Are the bounding box coordinates stored as whole-valued floats? I noticed that the coordinates are given as floats rather than the typical ints, but always with a .0 fraction (many coordinates were given like 919.0), so no banker's rounding should be necessary when converting.
issue 3. This is something I noticed when inspecting the images of the WIDERFACE dataset, which part of your dataset was sourced from. WIDERFACE contains some rather obscene and bizarre image categories, such as "car accident" and "street fight". I just wanted to note that those images could be ethically questionable for research use; I deleted those categories from my copy of the dataset.