ultralytics / xview-yolov3

xView 2018 Object Detection Challenge: YOLOv3 Training and Inference.
https://docs.ultralytics.com
GNU Affero General Public License v3.0

YOLOv5 Now Open-Sourced 🚀 #22


glenn-jocher commented 4 years ago

👋 Hello! Thanks for visiting! Ultralytics has open-sourced YOLOv5 🚀 at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.


(Figure: YOLOv5-P5 640 models, COCO AP vs. GPU speed, with EfficientDet for comparison)

**Figure Notes**

* GPU Speed measures end-to-end time per image averaged over 5000 COCO val2017 images using a V100 GPU with batch size 32, and includes image preprocessing, PyTorch FP16 inference, postprocessing and NMS.
* EfficientDet data from [google/automl](https://github.com/google/automl) at batch size 8.
* **Reproduce** by `python test.py --task study --data coco.yaml --iou 0.7 --weights yolov5s6.pt yolov5m6.pt yolov5l6.pt yolov5x6.pt`

**Pretrained Checkpoints**

| Model | size (pixels) | mAP<sup>val</sup> 0.5:0.95 | mAP<sup>test</sup> 0.5:0.95 | mAP<sup>val</sup> 0.5 | Speed V100 (ms) | params (M) | FLOPS @640 (B) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| YOLOv5s | 640 | 36.7 | 36.7 | 55.4 | 2.0 | 7.3 | 17.0 |
| YOLOv5m | 640 | 44.5 | 44.5 | 63.1 | 2.7 | 21.4 | 51.3 |
| YOLOv5l | 640 | 48.2 | 48.2 | 66.9 | 3.8 | 47.0 | 115.4 |
| YOLOv5x | 640 | 50.4 | 50.4 | 68.8 | 6.1 | 87.7 | 218.8 |
| YOLOv5s6 | 1280 | 43.3 | 43.3 | 61.9 | 4.3 | 12.7 | 17.4 |
| YOLOv5m6 | 1280 | 50.5 | 50.5 | 68.7 | 8.4 | 35.9 | 52.4 |
| YOLOv5l6 | 1280 | 53.4 | 53.4 | 71.1 | 12.3 | 77.2 | 117.7 |
| YOLOv5x6 | 1280 | 54.4 | 54.4 | 72.0 | 22.4 | 141.8 | 222.9 |
| YOLOv5x6 TTA | 1280 | 55.0 | 55.0 | 72.0 | 70.8 | - | - |
**Table Notes**

* AP<sup>test</sup> denotes COCO [test-dev2017](http://cocodataset.org/#upload) server results; all other AP results denote val2017 accuracy.
* AP values are for single-model single-scale unless otherwise noted. **Reproduce mAP** by `python test.py --data coco.yaml --img 640 --conf 0.001 --iou 0.65`
* Speed<sup>GPU</sup> averaged over 5000 COCO val2017 images using a GCP [n1-standard-16](https://cloud.google.com/compute/docs/machine-types#n1_standard_machine_types) V100 instance, and includes FP16 inference, postprocessing and NMS. **Reproduce speed** by `python test.py --data coco.yaml --img 640 --conf 0.25 --iou 0.45`
* All checkpoints are trained to 300 epochs with default settings and hyperparameters (no autoaugmentation).
* Test Time Augmentation ([TTA](https://github.com/ultralytics/yolov5/issues/303)) includes reflection and scale augmentation. **Reproduce TTA** by `python test.py --data coco.yaml --img 1536 --iou 0.7 --augment`

For more information and to get started with YOLOv5 🚀 please visit https://github.com/ultralytics/yolov5. Thank you!

sramirez commented 4 years ago

Hi! First of all, congratulations on your work. Any plans to release pre-trained weights for YOLOv5 on xView?

glenn-jocher commented 4 years ago

@sramirez no, but you are free to train YOLOv5 on xView yourself :) See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

bartekrdz commented 3 years ago

> @sramirez no, but you are free to train YOLOv5 on xView yourself :) See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data

Is there a way to convert the xView GeoJSON annotation file to YOLO format?

glenn-jocher commented 3 years ago

@bartekrdz yes, of course. You'd probably want to write your own conversion script and then use YOLOv5 to get started. The only thing missing from YOLOv5 that's used here is a sliding-window inference system to run very high-res images at native resolution on smaller graphics cards, and a corresponding chip dataloader to train on chips at native resolution. The YOLO label format is pretty simple; it's described in https://docs.ultralytics.com/yolov5/tutorials/train_custom_data
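For illustration, a minimal conversion sketch (my own, not from the YOLOv5 repo), assuming the standard `xView_train.geojson` layout where `properties.bounds_imcoords` holds `xmin,ymin,xmax,ymax` pixel coordinates and `properties.type_id` holds the raw xView class id. Remapping the sparse xView ids (11-94) to contiguous 0-59 indices is omitted here; the `xView.yaml` script quoted later in this thread shows how that is done:

```python
import json
from pathlib import Path

from PIL import Image


def geojson_to_yolo(geojson='xView_train.geojson', images='train_images', labels='labels/train'):
    """Write one YOLO .txt per image: 'class x_center y_center width height', normalized to 0-1."""
    out = Path(labels)
    out.mkdir(parents=True, exist_ok=True)
    data = json.loads(Path(geojson).read_text())
    for feature in data['features']:
        p = feature['properties']
        img = Path(images) / p['image_id']
        if not p['bounds_imcoords'] or not img.exists():
            continue  # skip features with empty boxes or missing images
        x1, y1, x2, y2 = map(float, p['bounds_imcoords'].split(','))
        w, h = Image.open(img).size  # image width/height in pixels
        with open(out / f'{img.stem}.txt', 'a') as f:
            f.write(f"{int(p['type_id'])} {(x1 + x2) / 2 / w:.6f} {(y1 + y2) / 2 / h:.6f} "
                    f"{(x2 - x1) / w:.6f} {(y2 - y1) / h:.6f}\n")
```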

pounde commented 2 years ago

Hello, I'm curious whether anyone has trained an xView model with YOLO. I may go down that path if it hasn't been accomplished yet.

glenn-jocher commented 2 years ago

@pounde we've made it super easy to train YOLOv5 on xView. Instructions are in xView.yaml in the YOLOv5 repo. First download the dataset zips as indicated, then run `python train.py --data xView.yaml`.

https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
# --------  DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command!  --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov5
# └── datasets
#     └── xView  ← downloads here
```

pounde commented 2 years ago

Perfect, thank you. I just wanted to be sure no one had accomplished it before I set down that path. Thanks for all the hard work.

QuentinAndre11 commented 2 years ago

Hello! I was wondering if someone (like @pounde, for example) has achieved good results on the xView dataset or done any kind of hyperparameter optimization. I'm looking for pretrained weights to use for a more specific project on aerial images, and I'm wondering whether I can use transfer learning or whether I should train on xView first.

glenn-jocher commented 2 years ago

@QuentinAndre11 xView is available on YOLOv5 now, I'd recommend just training it directly there:

```
python train.py --data xView.yaml
```

Follow directions in yaml first to download: https://github.com/ultralytics/yolov5/blob/7cef03dddd6fba26fff6748ed1cfdd18208c193e/data/xView.yaml#L1-L9

```yaml
# YOLOv5 🚀 by Ultralytics, GPL-3.0 license
# DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
# --------  DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command!  --------
# Example usage: python train.py --data xView.yaml
# parent
# ├── yolov5
# └── datasets
#     └── xView  ← downloads here (20.7 GB)
```

QuentinAndre11 commented 2 years ago

@glenn-jocher Yes, I followed it and used the script (I can't log in to the xView website, though, so I downloaded the data from Kaggle), but I only reach an mAP@0.5 of 0.026 after 300 epochs, so I'm wondering whether the default settings are poorly suited here... I have an 847/127 train/val split, so I guess it's the same as the original dataset.

glenn-jocher commented 2 years ago

@QuentinAndre11 👋 Hello! Thanks for asking about improving YOLOv5 🚀 training results.

Most of the time good results can be obtained with no changes to the models or training settings, provided your dataset is sufficiently large and well labelled. If at first you don't get good results, there are steps you might be able to take to improve, but we always recommend users first train with all default settings before considering any changes. This helps establish a performance baseline and spot areas for improvement.

If you have questions about your training results we recommend you provide the maximum amount of information possible if you expect a helpful response, including results plots (train losses, val losses, P, R, mAP), PR curve, confusion matrix, training mosaics, test results and dataset statistics images such as labels.png. All of these are located in your project/name directory, typically yolov5/runs/train/exp.
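As an aside, once a run has finished you can regenerate the summary plot from its CSV with YOLOv5's own plotting helper (a minimal sketch, assuming you run it from the YOLOv5 repo root and used the default experiment directory):

```python
from utils.plots import plot_results

# Writes results.png next to the CSV, charting train/val losses, P, R and mAP over epochs
plot_results('runs/train/exp/results.csv')
```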

We've put together a full guide for users looking to get the best results on their YOLOv5 trainings below.

Dataset

(COCO dataset analysis figure)

Model Selection

Larger models like YOLOv5x and YOLOv5x6 will produce better results in nearly all cases, but have more parameters, require more CUDA memory to train, and are slower to run. For mobile deployments we recommend YOLOv5s/m, for cloud deployments we recommend YOLOv5l/x. See our README table for a full comparison of all models.

(YOLOv5 model comparison figure)

Training Settings

Before modifying anything, first train with default settings to establish a performance baseline. A full list of train.py settings can be found in the train.py argparser.

Further Reading

If you'd like to know more, a good place to start is Karpathy's 'Recipe for Training Neural Networks', which has great ideas for training that apply broadly across all ML domains: http://karpathy.github.io/2019/04/25/recipe/

Good luck 🍀 and let us know if you have any other questions!

pounde commented 2 years ago

@QuentinAndre11 I have not set down the path of training xView on YOLO. The weights are available from the DIU S3 bucket. You can also take a look at the repo here for an implementation that may fit your needs.

ShaashvatShetty commented 2 years ago

Hello, I have been trying to train YOLOv5 on the xView dataset and I followed the instructions, but I keep getting an error where it cannot find the labels. It seems to be able to find the images, though. Any ideas?

glenn-jocher commented 2 years ago

@ShaashvatShetty I'd recommend going to the YOLOv5 repo as we have an xView.yaml all set up to start training with instructions on dataset download: https://github.com/ultralytics/yolov5

tanya-suri commented 2 years ago

> @pounde we've made it super easy to train YOLOv5 on xView. Instructions are in xView.yaml in the YOLOv5 repo. First download the dataset zips as indicated, then run `python train.py --data xView.yaml`.
>
> https://github.com/ultralytics/yolov5/blob/master/data/xView.yaml
>
> ```yaml
> # YOLOv5 🚀 by Ultralytics, GPL-3.0 license
> # DIUx xView 2018 Challenge https://challenge.xviewdataset.org by U.S. National Geospatial-Intelligence Agency (NGA)
> # --------  DOWNLOAD DATA MANUALLY and jar xf val_images.zip to 'datasets/xView' before running train command!  --------
> # Example usage: python train.py --data xView.yaml
> # parent
> # ├── yolov5
> # └── datasets
> #     └── xView  ← downloads here
> ```

Can you please share the dataset file in the utils folder as well?

glenn-jocher commented 2 years ago

@tanya-suri I don't quite understand your question, but perhaps you are asking about utils/datasets.py. This file has been renamed to utils/dataloaders.py recently in YOLOv5.
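If you're following an older tutorial or dataset yaml that still imports from `utils.datasets`, a version-tolerant import is a simple workaround (a sketch; `autosplit` is just one example of a symbol that moved with the rename):

```python
try:
    from utils.dataloaders import autosplit  # YOLOv5 after the rename
except ImportError:
    from utils.datasets import autosplit  # older YOLOv5 releases
```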

ShaashvatShetty commented 2 years ago

@glenn-jocher I followed the YOLOv5 repo and modified xView.yaml as shown below, but I keep getting this error: `AssertionError: train: No labels in /content/drive/MyDrive/datasets/labels/train.cache. Can not train without labels. See https://docs.ultralytics.com/yolov5/tutorials/train_custom_data`

 
```yaml
path: ../datasets/xView  # dataset root dir
train: /content/drive/MyDrive/datasets/images/train  # train images (relative to 'path') 90% of 847 train images
val: /content/drive/MyDrive/datasets/images/val  # train images (relative to 'path') 10% of 847 train images

# Classes
nc: 60  # number of classes
names: ['Fixed-wing Aircraft', 'Small Aircraft', 'Cargo Plane', 'Helicopter', 'Passenger Vehicle', 'Small Car', 'Bus', 'Pickup Truck', 'Utility Truck', 'Truck', 'Cargo Truck', 'Truck w/Box', 'Truck Tractor', 'Trailer', 'Truck w/Flatbed', 'Truck w/Liquid', 'Crane Truck', 'Railway Vehicle', 'Passenger Car', 'Cargo Car', 'Flat Car', 'Tank car', 'Locomotive', 'Maritime Vessel', 'Motorboat', 'Sailboat', 'Tugboat', 'Barge', 'Fishing Vessel', 'Ferry', 'Yacht', 'Container Ship', 'Oil Tanker', 'Engineering Vehicle', 'Tower crane', 'Container Crane', 'Reach Stacker', 'Straddle Carrier', 'Mobile Crane', 'Dump Truck', 'Haul Truck', 'Scraper/Tractor', 'Front loader/Bulldozer', 'Excavator', 'Cement Mixer', 'Ground Grader', 'Hut/Tent', 'Shed', 'Building', 'Aircraft Hangar', 'Damaged Building', 'Facility', 'Construction Site', 'Vehicle Lot', 'Helipad', 'Storage Tank', 'Shipping container lot', 'Shipping Container', 'Pylon', 'Tower']  # class names

# Download script/URL (optional) ---------------------------------------------------------------------------------------
download: |
  import json
  import os
  from pathlib import Path

  import numpy as np
  from PIL import Image
  from tqdm import tqdm

  from utils.datasets import autosplit
  from utils.general import download, xyxy2xywhn


  def convert_labels(fname=Path('xView/xView_train.geojson')):
      # Convert xView geoJSON labels to YOLO format
      path = fname.parent
      with open(fname) as f:
          print(f'Loading {fname}...')
          data = json.load(f)

      # Make dirs
      labels = Path(path / 'labels' / 'train')
      os.system(f'rm -rf {labels}')
      labels.mkdir(parents=True, exist_ok=True)

      # xView classes 11-94 to 0-59
      xview_class2index = [-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 0, 1, 2, -1, 3, -1, 4, 5, 6, 7, 8, -1, 9, 10, 11,
                           12, 13, 14, 15, -1, -1, 16, 17, 18, 19, 20, 21, 22, -1, 23, 24, 25, -1, 26, 27, -1, 28, -1,
                           29, 30, 31, 32, 33, 34, 35, 36, 37, -1, 38, 39, 40, 41, 42, 43, 44, 45, -1, -1, -1, -1, 46,
                           47, 48, 49, -1, 50, 51, -1, 52, -1, -1, -1, 53, 54, -1, 55, -1, -1, 56, -1, 57, -1, 58, 59]

      shapes = {}
      for feature in tqdm(data['features'], desc=f'Converting {fname}'):
          p = feature['properties']
          if p['bounds_imcoords']:
              id = p['image_id']
              file = path / 'train_images' / id
              if file.exists():  # 1395.tif missing
                  try:
                      box = np.array([int(num) for num in p['bounds_imcoords'].split(",")])
                      assert box.shape[0] == 4, f'incorrect box shape {box.shape[0]}'
                      cls = p['type_id']
                      cls = xview_class2index[int(cls)]  # xView class to 0-60
                      assert 59 >= cls >= 0, f'incorrect class index {cls}'

                      # Write YOLO label
                      if id not in shapes:
                          shapes[id] = Image.open(file).size
                      box = xyxy2xywhn(box[None].astype(np.float), w=shapes[id][0], h=shapes[id][1], clip=True)
                      with open((labels / id).with_suffix('.txt'), 'a') as f:
                          f.write(f"{cls} {' '.join(f'{x:.6f}' for x in box[0])}\n")  # write label.txt
                  except Exception as e:
                      print(f'WARNING: skipping one label for {file}: {e}')


  # Download manually from https://challenge.xviewdataset.org
  dir = Path(yaml['/content/drive/MyDrive/datasets'])  # dataset root dir
  urls = ['/content/drive/MyDrive/datasets/labels/train_labels.zip',  # train labels
          'https://d307kc0mrhucc3.cloudfront.net/train_images.zip',  # 15G, 847 train images
          '/content/drive/MyDrive/datasets/images/val']  # 5G, 282 val images (no labels)
  download(urls, dir=dir, delete=False)

  # Convert labels
  convert_labels(dir / 'xView_train.geojson')

  # Move images
  images = Path(dir / 'images')
  images.mkdir(parents=True, exist_ok=True)
  Path(dir / 'train_images').rename(dir / 'images' / 'train')
  Path(dir / 'val_images').rename(dir / 'images' / 'val')

  # Split
  autosplit(dir / 'images' / 'train')
```

Godofnothing commented 2 years ago

@ShaashvatShetty

you should have the following structure in your xView directory:

```
# parent
# └── xView
#     ├── train_images
#     ├── val_images  (may be an empty directory)
#     └── xView_train.geojson
```

Godofnothing commented 2 years ago

@QuentinAndre11 300 epochs may not suffice, since the dataset is quite small in terms of the number of images, but the images themselves contain many instances. I think one should train with an image size of at least 1280, or even higher, to obtain good results.
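For example, a plausible starting point (hypothetical settings, not a tuned recipe) would be one of the P6 checkpoints, which were pretrained at 1280:

```
python train.py --data xView.yaml --img 1280 --batch-size 8 --weights yolov5l6.pt
```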