NguyenThaiHoc1 opened 4 years ago
First off, post issues with a less demanding tone and with more information about what you're trying to do; for example, say whether you are trying to run train.py. Also, post the stack trace to help the people who try to resolve your issue. Since there's no other info here, I'm going to assume you're trying to train with train.py on a custom dataset.
Back on topic. It seems like one of your annotations' class_id fields is a higher number than the total number of classes in your dataset. Check your .names file in data/classes and see if it only has 77 lines. If your dataset is supposed to have 78 or more classes, then there is an issue in your bounding boxes. Otherwise, is it possible that your bounding box class IDs use a 1-based index (class IDs running from 1 to 77 instead of 0 to 76)?
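If the 1-based indexing turns out to be the problem, here's a minimal sketch of shifting the IDs down by one. The annotation line format is the one described below; the paths and coordinates are made-up illustration values.

```python
def shift_class_ids(line):
    """Rewrite one annotation line, decrementing each bbox's class_id by 1."""
    parts = line.split()
    fixed = [parts[0]]  # image path stays as-is
    for bbox in parts[1:]:
        x1, y1, x2, y2, class_id = bbox.split(',')
        fixed.append(','.join([x1, y1, x2, y2, str(int(class_id) - 1)]))
    return ' '.join(fixed)

print(shift_class_ids('/data/img_1.jpg 10,20,30,40,1 50,60,70,80,77'))
# -> /data/img_1.jpg 10,20,30,40,0 50,60,70,80,76
```

You'd run every line of your train.txt through this once (and only once) before training.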
In your annotation file, it should have the following format for training to work correctly:
/path/to/image_1.jpg bbox_1 bbox_2 ... bbox_n
/path/to/image_2.jpg bbox_1 bbox_2 ... bbox_n
...
/path/to/image_n.jpg bbox_1 bbox_2 ... bbox_n
where each bbox is in the format x1,y1,x2,y2,class_id
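To make the format above concrete, here's a small sketch of parsing one such line back into its parts (the path and coordinates are made-up illustration values):

```python
# One annotation line in the format described above:
# /path/to/image.jpg followed by space-separated x1,y1,x2,y2,class_id boxes.
line = '/path/to/image_1.jpg 48,240,195,371,0 8,12,352,498,14'

img_path, *bboxes = line.split()
boxes = [tuple(int(v) for v in b.split(',')) for b in bboxes]
print(img_path)  # /path/to/image_1.jpg
print(boxes)     # [(48, 240, 195, 371, 0), (8, 12, 352, 498, 14)]
```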
Your data/classes/whatever.names file should contain all of the dataset's class names, one per line. The number of lines in this file is the total number of classes used for training. So if you have 77 total classes, the class IDs will be [0, 1, 2, ..., 76], since this system uses a 0-based index.
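As a made-up example, a 3-class .names file and the 0-based ID mapping it implies:

```python
# Hypothetical contents of a .names file: one class name per line.
names_text = 'person\ncar\ndog\n'

class_names = [n for n in names_text.splitlines() if n]
num_classes = len(class_names)  # 3 classes -> valid IDs are 0..2
class_ids = {name: i for i, name in enumerate(class_names)}
print(num_classes)  # 3
print(class_ids)    # {'person': 0, 'car': 1, 'dog': 2}
```

So the line number (counting from zero) of a class name in the .names file is exactly the class_id that should appear in your bounding boxes.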
@bryangreener How to generate this annotation file?
That will depend entirely on the dataset you're using. I typically write a Python script to handle the conversion. If your input dataset is in VOC format, for example, you'd need to parse the XML annotation files (with a package like xml.etree.ElementTree), convert the bounding box values to ints in the range 0 to the image width or height, and then combine all the bounding boxes for each annotation file into a single line using string joining or something along those lines.
Here's a quick example of creating an annotation text file from a VOC-formatted dataset. It's only a sketch, but it should give you the gist of what needs to be done. Converting a different dataset format will of course require a different parser, but it shouldn't be too hard. This example assumes you already have a dataset with annotated images in VOC format.
import os
import glob
import xml.etree.ElementTree as ET

def read_xml(xml_filepath, image_dir, class_names):
    tree = ET.parse(xml_filepath)
    root = tree.getroot()
    # Prepend the folder where the image files are stored to the filename.
    img_filename = os.path.join(image_dir, root.find('filename').text)
    bboxes = []
    for obj in root.findall('object'):
        # Map the class name to its 0-based line index in the .names file.
        class_id = str(class_names.index(obj.find('name').text))
        bbox = obj.find('bndbox')
        # bbox coords in VOC XML can be float values, so convert to float
        # first, then to int since that is the accepted dtype for this
        # tensorflow implementation, then to string for joining below.
        xmin = str(int(float(bbox.find('xmin').text)))
        ymin = str(int(float(bbox.find('ymin').text)))
        xmax = str(int(float(bbox.find('xmax').text)))
        ymax = str(int(float(bbox.find('ymax').text)))
        # Join the items in a list with a comma as delimiter.
        bboxes.append(','.join([xmin, ymin, xmax, ymax, class_id]))
    # Combine the filename and bboxes into one space-separated string,
    # which becomes a single line in the final annotation .txt.
    return ' '.join([img_filename] + bboxes)

# Class names, one per line; their order defines the class IDs.
with open('data/classes/whatever.names') as f:
    class_names = [line.strip() for line in f if line.strip()]

lines = []
for xml_filepath in glob.glob('annotations/*.xml'):
    lines.append(read_xml(xml_filepath, 'images', class_names))

with open('train.txt', 'w') as f:
    f.write('\n'.join(lines))  # one annotation line per image
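To catch the "index out of bounds" error before training, you could also run a small sanity check over the generated annotation lines. The lines and class count here are stand-ins for your real train.txt and .names file:

```python
def check_class_ids(annotation_lines, num_classes):
    """Return (line_index, class_id) pairs whose class_id is out of range."""
    bad = []
    for i, line in enumerate(annotation_lines):
        for bbox in line.split()[1:]:
            class_id = int(bbox.split(',')[4])
            if not 0 <= class_id < num_classes:
                bad.append((i, class_id))
    return bad

# Made-up example lines; with 77 classes, valid IDs are 0..76.
lines = ['/data/a.jpg 1,2,3,4,0', '/data/b.jpg 5,6,7,8,77']
print(check_class_ids(lines, 77))  # [(1, 77)] -> ID 77 needs 78 classes
```

If this prints anything, those are exactly the annotations that will trigger the out-of-bounds error during training.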
Otherwise if this doesn't answer your question, let me know.
I run the Dataset class but I get an error: index 77 is out of bounds for axis 1 with size 76. Please help me. Thai Hoc