VOC dataset format:
The images folder holds all of the images, which means the training, validation and test splits have to be made by yourself. The Annotations folder holds the xml annotation file for each image in the images folder, with the same filename as the corresponding image. The main subfolder of ImageSets is Main, which contains four text files listing the image filenames of the training set, validation set, test set, and the combined train+val set.
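For orientation, the layout described above (using the paper_data folder names that appear in the scripts later in this post) looks roughly like this; note that the split script below writes its txt files directly under ImageSets/ rather than ImageSets/Main/:

```
paper_data/
├── images/          # all images (.jpg)
├── Annotations/     # one .xml annotation per image, same basename
├── ImageSets/       # train.txt / val.txt / test.txt / trainval.txt (created by split_train_val.py)
└── labels/          # yolo_txt labels (created later by voc_label.py)
```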
Use labelImg to create the label file (.xml) for each image:
The five classes are: 1: Black_footed_Albatross  2: Crested_Auklet  3: White_throated_Sparrow  4: Mockingbird  5: Northern_Flicker
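For reference, a trimmed-down sketch of one such annotation is shown below; the filename, image size and box coordinates are made-up placeholders, and only the fields that voc_label.py actually reads are kept. Note that the `<name>` field holds the numeric class id ('1'–'5'), matching the classes list used later.

```xml
<annotation>
    <filename>bird_0001.jpg</filename>   <!-- placeholder filename -->
    <size>
        <width>640</width>
        <height>480</height>
        <depth>3</depth>
    </size>
    <object>
        <name>2</name>                   <!-- class id: 2 = Crested_Auklet -->
        <difficult>0</difficult>
        <bndbox>
            <xmin>96</xmin>
            <ymin>64</ymin>
            <xmax>352</xmax>
            <ymax>288</ymax>
        </bndbox>
    </object>
</annotation>
```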
Use split_train_val.py to generate the train.txt and val.txt split files:
```python
import os
import random
import argparse

# split Annotations/*.xml into trainval / train / val / test id lists under ImageSets/
parser = argparse.ArgumentParser()
parser.add_argument('--xml_path', default='Annotations', type=str, help='input xml label path')
parser.add_argument('--txt_path', default='ImageSets', type=str, help='output txt label path')
opt = parser.parse_args()

trainval_percent = 1.0   # fraction of the data used for train+val (rest goes to test)
train_percent = 0.8      # fraction of train+val used for train (rest goes to val)
xmlfilepath = opt.xml_path
txtsavepath = opt.txt_path
total_xml = os.listdir(xmlfilepath)
if not os.path.exists(txtsavepath):
    os.makedirs(txtsavepath)

num = len(total_xml)
list_index = range(num)
tv = int(num * trainval_percent)
tr = int(tv * train_percent)
trainval = random.sample(list_index, tv)
train = random.sample(trainval, tr)

file_trainval = open(txtsavepath + '/trainval.txt', 'w')
file_test = open(txtsavepath + '/test.txt', 'w')
file_train = open(txtsavepath + '/train.txt', 'w')
file_val = open(txtsavepath + '/val.txt', 'w')

for i in list_index:
    name = total_xml[i][:-4] + '\n'   # strip the .xml extension
    if i in trainval:
        file_trainval.write(name)
        if i in train:
            file_train.write(name)
        else:
            file_val.write(name)
    else:
        file_test.write(name)

file_trainval.close()
file_train.close()
file_val.close()
file_test.close()
```
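After running this script (for example `python split_train_val.py --xml_path Annotations --txt_path ImageSets`), ImageSets/ should contain trainval.txt, train.txt, val.txt and test.txt. With trainval_percent = 1.0 the test list stays empty, and about 80% / 20% of the image ids end up in train.txt / val.txt. Each line is simply an image id without its extension, e.g. (placeholder ids):

```
bird_0001
bird_0002
bird_0007
```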
Use voc_label.py:
Extract the bbox information from each xml annotation into a txt file (this dataset format is called the yolo_txt format). Each image gets one txt file, and each line of the file describes one object: the class index followed by the normalized box coordinates computed from xmin, xmax, ymin and ymax. The script voc_label.py is used:
```python
# -*- coding: utf-8 -*-
import xml.etree.ElementTree as ET
import os
from os import getcwd

sets = ['train', 'val', 'test']
classes = ['1', '2', '3', '4', '5']
abs_path = os.getcwd()
print(abs_path)

def convert(size, box):
    # convert pixel (xmin, xmax, ymin, ymax) to normalized (x_center, y_center, w, h)
    dw = 1. / (size[0])
    dh = 1. / (size[1])
    x = (box[0] + box[1]) / 2.0 - 1
    y = (box[2] + box[3]) / 2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x * dw
    w = w * dw
    y = y * dh
    h = h * dh
    return x, y, w, h

def convert_annotation(image_id):
    in_file = open('/envs/pytorch/yolov5-master-bird/paper_data/Annotations/%s.xml' % (image_id))
    out_file = open('/envs/pytorch/yolov5-master-bird/paper_data/labels/%s.txt' % (image_id), 'w')
    tree = ET.parse(in_file)
    root = tree.getroot()
    size = root.find('size')
    w = int(size.find('width').text)
    h = int(size.find('height').text)
    for obj in root.iter('object'):
        difficult = obj.find('difficult').text
        cls = obj.find('name').text
        if cls not in classes or int(difficult) == 1:
            continue
        cls_id = classes.index(cls)
        xmlbox = obj.find('bndbox')
        b = (float(xmlbox.find('xmin').text), float(xmlbox.find('xmax').text),
             float(xmlbox.find('ymin').text), float(xmlbox.find('ymax').text))
        b1, b2, b3, b4 = b
        # clip boxes that run past the image border
        if b2 > w:
            b2 = w
        if b4 > h:
            b4 = h
        b = (b1, b2, b3, b4)
        bb = convert((w, h), b)
        out_file.write(str(cls_id) + " " + " ".join([str(a) for a in bb]) + '\n')

wd = getcwd()
for image_set in sets:
    if not os.path.exists('D:/envs/pytorch/yolov5-master-bird/paper_data/labels/'):
        os.makedirs('D:/envs/pytorch/yolov5-master-bird/paper_data/labels/')
    image_ids = open('/envs/pytorch/yolov5-master-bird/paper_data/ImageSets/%s.txt' % (image_set)).read().strip().split()
    list_file = open('D:/envs/pytorch/yolov5-master-bird/paper_data/%s.txt' % (image_set), 'w')
    for image_id in image_ids:
        list_file.write(abs_path + '/envs/pytorch/yolov5-master-bird/paper_data/images/%s.jpg\n' % (image_id))
        convert_annotation(image_id)
    list_file.close()
```
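As a quick sanity check on the conversion, the minimal sketch below (made-up image size and box) reproduces what a single line of a yolo_txt label file looks like, using the same arithmetic as convert() above:

```python
# Mirrors convert() in voc_label.py:
# pixel (xmin, xmax, ymin, ymax) -> normalized (x_center, y_center, w, h)
w_img, h_img = 640, 480                              # placeholder image size
xmin, xmax, ymin, ymax = 96.0, 352.0, 64.0, 288.0    # placeholder box in pixels

x = ((xmin + xmax) / 2.0 - 1) / w_img   # normalized box center x
y = ((ymin + ymax) / 2.0 - 1) / h_img   # normalized box center y
w = (xmax - xmin) / w_img               # normalized box width
h = (ymax - ymin) / h_img               # normalized box height

cls_id = 1   # classes.index('2') for the example annotation above
print(cls_id, x, y, w, h)               # roughly: 1 0.3484 0.3646 0.4000 0.4667
```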
Configuration file:
Create a new bird.yaml:
```yaml
train: /envs/pytorch/yolov5-master-bird/paper_data/images/
val: /envs/pytorch/yolov5-master-bird/paper_data/images/

nc: 5          # number of classes

names: ['1', '2', '3', '4', '5']   # class names, same order as classes in voc_label.py
```
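With bird.yaml in place, training can then be started from the yolov5 repository with something like `python train.py --data bird.yaml --weights yolov5s.pt --img 640 --epochs 100`; the exact flag names may vary slightly between yolov5 versions.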
This is how to make your own yolov5 dataset.