ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Train Custom Data Tutorial ⭐ #12

Open glenn-jocher opened 3 years ago

glenn-jocher commented 3 years ago

📚 This guide explains how to train your own custom dataset with YOLOv5 🚀. See YOLOv5 Docs for additional details. UPDATED 13 April 2023.

Before You Start

Clone repo and install requirements.txt in a Python>=3.7.0 environment, including PyTorch>=1.7. Models and datasets download automatically from the latest YOLOv5 release.

```bash
git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install
```

Train On Custom Data



Creating a custom model to detect your objects is an iterative process of collecting and organizing images, labeling your objects of interest, training a model, deploying it into the wild to make predictions, and then using that deployed model to collect examples of edge cases to repeat and improve.

1. Create Dataset

YOLOv5 models must be trained on labelled data in order to learn classes of objects in that data. There are two options for creating your dataset before you start training:

Use Roboflow to create your dataset in YOLO format ⭐

### 1.1 Collect Images

Your model will learn by example. Training on images similar to the ones it will see in the wild is of the utmost importance. Ideally, you will collect a wide variety of images from the same configuration (camera, angle, lighting, etc.) as you will ultimately deploy your project. If this is not possible, you can start from [a public dataset](https://universe.roboflow.com/?ref=ultralytics) to train your initial model and then [sample images from the wild during inference](https://blog.roboflow.com/computer-vision-active-learning-tips/?ref=ultralytics) to improve your dataset and model iteratively.

### 1.2 Create Labels

Once you have collected images, you will need to annotate the objects of interest to create a ground truth for your model to learn from.

[Roboflow Annotate](https://roboflow.com/annotate?ref=ultralytics) is a simple web-based tool for managing and labeling your images with your team and exporting them in [YOLOv5's annotation format](https://roboflow.com/formats/yolov5-pytorch-txt?ref=ultralytics).

### 1.3 Prepare Dataset for YOLOv5

Whether you [label your images with Roboflow](https://roboflow.com/annotate?ref=ultralytics) or not, you can use it to convert your dataset into YOLO format, create a YOLOv5 YAML configuration file, and host it for importing into your training script. [Create a free Roboflow account](https://app.roboflow.com/?model=yolov5&ref=ultralytics) and upload your dataset to a `Public` workspace, label any unannotated images, then generate and export a version of your dataset in `YOLOv5 PyTorch` format.

Note: YOLOv5 performs online augmentation during training, so we do not recommend applying any augmentation steps in Roboflow for training with YOLOv5. But we do recommend applying the following preprocessing steps:

* **Auto-Orient** - to strip EXIF orientation from your images.
* **Resize (Stretch)** - to the square input size of your model (640x640 is the YOLOv5 default).

Generating a version will give you a point-in-time snapshot of your dataset, so you can always go back and compare your future model training runs against it, even if you add more images or change its configuration later.

Export in `YOLOv5 PyTorch` format, then copy the snippet into your training script or notebook to download your dataset.

Now continue with `2. Select a Model`.
Or manually prepare your dataset

### 1.1 Create dataset.yaml

[COCO128](https://www.kaggle.com/ultralytics/coco128) is an example small tutorial dataset composed of the first 128 images in [COCO](http://cocodataset.org/#home) train2017. These same 128 images are used for both training and validation to verify our training pipeline is capable of overfitting. [data/coco128.yaml](https://github.com/ultralytics/yolov5/blob/master/data/coco128.yaml), shown below, is the dataset config file that defines 1) the dataset root directory `path` and relative paths to `train` / `val` / `test` image directories (or *.txt files with image paths) and 2) a class `names` dictionary:

```yaml
# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco128  # dataset root dir
train: images/train2017  # train images (relative to 'path') 128 images
val: images/train2017  # val images (relative to 'path') 128 images
test:  # test images (optional)

# Classes (80 COCO classes)
names:
  0: person
  1: bicycle
  2: car
  ...
  77: teddy bear
  78: hair drier
  79: toothbrush
```

### 1.2 Create Labels

After using an annotation tool to label your images, export your labels to **YOLO format**, with one `*.txt` file per image (if no objects are in the image, no `*.txt` file is required). The `*.txt` file specifications are:

- One row per object
- Each row is `class x_center y_center width height` format.
- Box coordinates must be in **normalized xywh** format (from 0 to 1). If your boxes are in pixels, divide `x_center` and `width` by image width, and `y_center` and `height` by image height.
- Class numbers are zero-indexed (start from 0).

The label file corresponding to the above image contains 2 persons (class `0`) and a tie (class `27`):
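For illustration, such a label file looks like this, one row per object (these values are illustrative, standing in for the tutorial image):

```
0 0.481719 0.634028 0.690625 0.713278
0 0.741094 0.524306 0.314750 0.933389
27 0.364844 0.795833 0.316406 0.462963
```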

### 1.3 Organize Directories

Organize your train and val images and labels according to the example below. YOLOv5 assumes `/coco128` is inside a `/datasets` directory **next to** the `/yolov5` directory. **YOLOv5 locates labels automatically for each image** by replacing the last instance of `/images/` in each image path with `/labels/`. For example:

```bash
../datasets/coco128/images/im0.jpg  # image
../datasets/coco128/labels/im0.txt  # label
```
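An illustrative directory layout consistent with the above (file names are examples):

```
../datasets/coco128/
├── images/
│   └── train2017/
│       ├── im0.jpg
│       └── ...
└── labels/
    └── train2017/
        ├── im0.txt
        └── ...
```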

2. Select a Model

Select a pretrained model to start training from. Here we select YOLOv5s, the second-smallest and fastest model available. See our README table for a full comparison of all models.

YOLOv5 Models

3. Train

Train a YOLOv5s model on COCO128 by specifying dataset, batch-size, image size and either pretrained --weights yolov5s.pt (recommended), or randomly initialized --weights '' --cfg yolov5s.yaml (not recommended). Pretrained weights are auto-downloaded from the latest YOLOv5 release.

```bash
# Train YOLOv5s on COCO128 for 3 epochs
python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```

💡 ProTip: Add --cache ram or --cache disk to speed up training (requires significant RAM/disk resources).
💡 ProTip: Always train from a local dataset. Mounted or network drives like Google Drive will be very slow.

All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc. For more details see the Training section of our tutorial notebook.

4. Visualize

Comet Logging and Visualization 🌟 NEW

Comet is now fully integrated with YOLOv5. Track and visualize model metrics in real time, save your hyperparameters, datasets, and model checkpoints, and visualize your model predictions with Comet Custom Panels! Comet makes sure you never lose track of your work and makes it easy to share results and collaborate across teams of all sizes!

Getting started is easy:

```bash
pip install comet_ml  # 1. install
export COMET_API_KEY=<Your API Key>  # 2. paste API key
python train.py --img 640 --epochs 3 --data coco128.yaml --weights yolov5s.pt  # 3. train
```

To learn more about all of the supported Comet features for this integration, check out the Comet Tutorial. If you'd like to learn more about Comet, head over to our documentation. Get started by trying out the Comet Colab Notebook.


ClearML Logging and Automation 🌟 NEW

ClearML is completely integrated into YOLOv5 to track your experimentation, manage dataset versions and even remotely execute training runs. To enable ClearML:
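A minimal setup sketch (the `clearml` package and `clearml-init` are standard ClearML tooling; server details depend on your account):

```bash
pip install clearml  # 1. install
clearml-init  # 2. connect the SDK to your ClearML server (hosted or self-hosted)
```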

You'll get all the expected features of an experiment manager: live updates, model upload, experiment comparison, etc., but ClearML also tracks uncommitted changes and installed packages, for example. Thanks to that, ClearML Tasks (which is what we call experiments) are also reproducible on different machines! With only 1 extra line, we can schedule a YOLOv5 training task on a queue to be executed by any number of ClearML Agents (workers).

You can use ClearML Data to version your dataset and then pass it to YOLOv5 simply using its unique ID. This will help you keep track of your data without adding extra hassle. Explore the ClearML Tutorial for details!
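As a sketch of what that looks like (assuming you have already versioned a dataset with `clearml-data`; `<your_dataset_id>` is a placeholder for the ID it prints):

```bash
# the ClearML integration resolves a clearml:// dataset ID in place of a data YAML
python train.py --img 640 --batch 16 --epochs 3 --data clearml://<your_dataset_id> --weights yolov5s.pt
```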

ClearML Experiment Management UI

Local Logging

Training results are automatically logged with Tensorboard and CSV loggers to runs/train, with a new experiment directory created for each new training as runs/train/exp2, runs/train/exp3, etc.

This directory contains train and val statistics, mosaics, labels, predictions and augmented mosaics, as well as metrics and charts including precision-recall (PR) curves and confusion matrices.

Local logging results

The results file `results.csv` is updated after each epoch, and then plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:

```python
from utils.plots import plot_results

plot_results('path/to/results.csv')  # plot 'results.csv' as 'results.png'
```

results.png

Next Steps

Once your model is trained, you can use your best checkpoint `best.pt` to run inference on new images and videos, validate accuracy, and export to deployment formats.
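For example, a minimal Python inference sketch via PyTorch Hub (the checkpoint path assumes the default `runs/train/exp` output directory; the image path is illustrative):

```python
import torch

# load your custom checkpoint through the YOLOv5 hub entry point
model = torch.hub.load('ultralytics/yolov5', 'custom', path='runs/train/exp/weights/best.pt')

results = model('path/to/image.jpg')  # run inference on one image
results.print()  # print detections (class, confidence, box)
```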

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled): free GPU notebooks (Gradient, Colab, Kaggle), Google Cloud Deep Learning VM, Amazon Deep Learning AMI, and the Ultralytics Docker image.

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

shenglih commented 3 years ago

I used a `my_training.txt` file containing a list of training images instead of a path to the folder of images and annotations, but it always returns `AssertionError: No images found in /path_to_my_txt_file/my_training.txt`. Could anyone kindly give some pointers on where it went wrong? Thanks

zuoxiang95 commented 3 years ago

I get the same error @shenglih

synked16 commented 3 years ago

Hey @glenn-jocher, can I train a model on images of size 450x600?

glenn-jocher commented 3 years ago

@justAyaan sure, just use `--img 600`; it will automatically use the nearest correct stride multiple.
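For reference, a sketch of the rounding behavior (YOLOv5 adjusts the image size to a multiple of the model's maximum stride, which is 32 for the standard models):

```python
import math

stride = 32  # max stride of standard YOLOv5 models
img = 600
print(math.ceil(img / stride) * stride)  # 608: the size actually used
```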

synked16 commented 3 years ago

@glenn-jocher ok.. will try.. just one doubt: is the value of `x_center = (x + w) / 2`, or is it `x + w/2`?

glenn-jocher commented 3 years ago

`x_center` is the center of your object in the x dimension: if `x` is the left edge of your box, that is `x + w/2`, normalized by image width.
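A minimal sketch of the pixel-to-YOLO conversion (function and variable names are illustrative):

```python
def to_yolo(x_min, y_min, w, h, img_w, img_h):
    """Convert a pixel-space box (left, top, width, height) to normalized YOLO xywh."""
    x_center = (x_min + w / 2) / img_w  # i.e. x + w/2, then normalize by image width
    y_center = (y_min + h / 2) / img_h
    return x_center, y_center, w / img_w, h / img_h

print(to_yolo(100, 50, 200, 100, 640, 480))  # (0.3125, 0.2083..., 0.3125, 0.2083...)
```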

clxa commented 3 years ago

I used YOLOv5 training on a Kaggle kernel, and this error occurred. I am a novice. How can I solve this problem?

```
Apex recommended for faster mixed precision training: https://github.com/NVIDIA/apex
Using CUDA device0 _CudaDeviceProperties(name='Tesla P100-PCIE-16GB', total_memory=16280MB)

Namespace(batch_size=4, bucket='', cache_images=False, cfg='/kaggle/input/yolov5aconfig/yolov5x.yaml', data='/kaggle/input/yolov5aconfig/wheat0.yaml', device='', epochs=15, evolve=False, hyp='', img_size=[1024, 1024], local_rank=-1, multi_scale=False, name='yolov5x_fold0', noautoanchor=False, nosave=False, notest=False, rect=False, resume=False, single_cls=False, sync_bn=False, total_batch_size=4, weights='', world_size=1)
Start Tensorboard with "tensorboard --logdir=runs", view at http://localhost:6006/
2020-08-01 04:17:02.964977: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1
Hyperparameters {'optimizer': 'SGD', 'lr0': 0.01, 'momentum': 0.937, 'weight_decay': 0.0005, 'giou': 0.05, 'cls': 0.5, 'cls_pw': 1.0, 'obj': 1.0, 'obj_pw': 1.0, 'iou_t': 0.2, 'anchor_t': 4.0, 'fl_gamma': 0.0, 'hsv_h': 0.015, 'hsv_s': 0.7, 'hsv_v': 0.4, 'degrees': 0.0, 'translate': 0.0, 'scale': 0.5, 'shear': 0.0}

     from      n    params   module                                 arguments
  0  -1        1      8800   models.common.Focus                    [3, 80, 3]
  1  -1        1    115520   models.common.Conv                     [80, 160, 3, 2]
  2  -1        1    315680   models.common.BottleneckCSP            [160, 160, 4]
  3  -1        1    461440   models.common.Conv                     [160, 320, 3, 2]
  4  -1        1   3311680   models.common.BottleneckCSP            [320, 320, 12]
  5  -1        1   1844480   models.common.Conv                     [320, 640, 3, 2]
  6  -1        1  13228160   models.common.BottleneckCSP            [640, 640, 12]
  7  -1        1   7375360   models.common.Conv                     [640, 1280, 3, 2]
  8  -1        1   4099840   models.common.SPP                      [1280, 1280, [5, 9, 13]]
  9  -1        1  20087040   models.common.BottleneckCSP            [1280, 1280, 4, False]
 10  -1        1    820480   models.common.Conv                     [1280, 640, 1, 1]
 11  -1        1         0   torch.nn.modules.upsampling.Upsample   [None, 2, 'nearest']
 12  [-1, 6]   1         0   models.common.Concat                   [1]
 13  -1        1   5435520   models.common.BottleneckCSP            [1280, 640, 4, False]
 14  -1        1    205440   models.common.Conv                     [640, 320, 1, 1]
 15  -1        1         0   torch.nn.modules.upsampling.Upsample   [None, 2, 'nearest']
 16  [-1, 4]   1         0   models.common.Concat                   [1]
 17  -1        1   1360960   models.common.BottleneckCSP            [640, 320, 4, False]
 18  -1        1      5778   torch.nn.modules.conv.Conv2d           [320, 18, 1, 1]
 19  -2        1    922240   models.common.Conv                     [320, 320, 3, 2]
 20  [-1, 14]  1         0   models.common.Concat                   [1]
 21  -1        1   5025920   models.common.BottleneckCSP            [640, 640, 4, False]
 22  -1        1     11538   torch.nn.modules.conv.Conv2d           [640, 18, 1, 1]
 23  -2        1   3687680   models.common.Conv                     [640, 640, 3, 2]
 24  [-1, 10]  1         0   models.common.Concat                   [1]
 25  -1        1  20087040   models.common.BottleneckCSP            [1280, 1280, 4, False]
 26  -1        1     23058   torch.nn.modules.conv.Conv2d           [1280, 18, 1, 1]
 27  []        1         0   models.yolo.Detect                     [1, [[116, 90, 156, 198, 373, 326], [30, 61, 62, 45, 59, 119], [10, 13, 16, 30, 33, 23]], []]

Traceback (most recent call last):
  File "/kaggle/input/yolov5/yolov5-master/train.py", line 469, in <module>
    train(hyp, tb_writer, opt, device)
  File "/kaggle/input/yolov5/yolov5-master/train.py", line 80, in train
    model = Model(opt.cfg, nc=nc).to(device)
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 70, in __init__
    m.stride = torch.tensor([s / x.shape[-2] for x in self.forward(torch.zeros(1, ch, s, s))])  # forward
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 100, in forward
    return self.forward_once(x, profile)  # single-scale inference, train
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 120, in forward_once
    x = m(x)  # run
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
    result = self.forward(*input, **kwargs)
  File "/kaggle/input/yolov5/yolov5-master/models/yolo.py", line 27, in forward
    x[i] = self.m[i](x[i])  # conv
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 147, in __getitem__
    return self._modules[self._get_abs_string_index(idx)]
  File "/opt/conda/lib/python3.7/site-packages/torch/nn/modules/container.py", line 137, in _get_abs_string_index
    raise IndexError('index {} is out of range'.format(idx))
IndexError: index 0 is out of range
```

glenn-jocher commented 3 years ago

This is old code. Suggest you git clone the latest. PyTorch 1.6 is a requirement now.

twangnh commented 3 years ago

@glenn-jocher YOLOv5 is really flexible for training on custom datasets! Could you please share the script for processing and converting COCO to the `train2017.txt` file? I already have this file but would like to regenerate it with some modifications.

rwin94 commented 3 years ago

Hi, I have a newbie question... Could you explain the difference between providing pretrained weights vs. not, and when to use each?

Should I expect better results if I use yolov5s.pt as pretrained weights? Thanks

glenn-jocher commented 3 years ago

@twangnh train2017.txt is just a text file with a list of images. You can use the glob package to create this, but it's not necessary, as YOLOv5 data YAMLs will also accept a simple directory of training images. You can see this format in the coco128 dataset: https://github.com/ultralytics/yolov5/blob/728efa6576eae595ddcfdd8b75ab5da40ddfcaf4/data/coco128.yaml#L1-L13
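For example, a minimal sketch using glob (paths are illustrative):

```python
from glob import glob

# collect training image paths and write one per line to train2017.txt
paths = sorted(glob('../coco/images/train2017/*.jpg'))
with open('train2017.txt', 'w') as f:
    f.write('\n'.join(paths))
```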

glenn-jocher commented 3 years ago

@rwin94 for small datasets or for quick results, yes, always start from the pretrained weights:

```bash
python train.py --cfg yolov5s.yaml --weights yolov5s.pt
```

You can see a comparison of pretrained vs. from scratch in the custom data training tutorial: https://docs.ultralytics.com/yolov5/tutorials/train_custom_data#6-visualize

Lg955 commented 3 years ago

I want to ask a question: it shows `File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png'` when training in Google Colab.

But everything is OK on my own laptop, so can it not train with .png?

liumingjune commented 3 years ago

Hello, I'd like to ask you something. You say "if no objects in image, no *.txt file is required": for images without labels in the training data (there is no target in the image), how do they participate in training as negative samples?

glenn-jocher commented 3 years ago

@liumingjune all images are treated equally during training, irrespective of the labels they may or may not have.

liumingjune commented 3 years ago

Thanks for your reply. I have a question after looking at the code: do you only keep the latest and best models under the runs directory when saving during training? Can't you save the model at a specified epoch? Is it OK to choose the latest model directly after training? How can I select a model from another epoch if the best checkpoint does not work well?

glenn-jocher commented 3 years ago

@liumingjune best.pt and last.pt are saved, which are the best performing model across all epochs and the most recent epoch's model. You can customize checkpointing logic here: https://github.com/ultralytics/yolov5/blob/9ae868364a2d98bd03cecb8ba8f6310c0d11b482/train.py#L333-L348
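For example, a hypothetical addition to that save block to also keep periodic snapshots (`ckpt`, `wdir` and `epoch` follow train.py's save logic; this is not the repo's exact code):

```python
# alongside the existing last.pt / best.pt saves in train.py
if epoch % 10 == 0:  # hypothetical: keep a snapshot every 10 epochs
    torch.save(ckpt, wdir / f'epoch{epoch}.pt')
```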

liumingjune commented 3 years ago

Hello, thank you for your reply. I want to know if YOLOv5 has done any work on small object detection.

glenn-jocher commented 3 years ago

@liumingjune yes, of course. All models will work well with small objects without any changes. My only recommendation is to train at the largest --img-size you can, up to native resolution.

We do have custom small object models with extra ops targeted to higher resolution layers (P2 and P3) which we have not open sourced. These custom models outperform our official models on COCO, in particular in the small object class, but also come with a speed penalty, so we decided not to release them to the community at the present time.

liumingjune commented 3 years ago

> @liumingjune all images are treated equally during training, irrespective of the labels they may or may not have.

Hello, I have a question. My test results showed a high false alarm rate, so I want to add some large images with no targets, only background, as negative samples in training. Obviously, in the format you requested, these images are not labeled with *.txt files. I have lots of big images with no targets here, so do you have any suggestions on how many to add? I am concerned that an imbalance in the number of positive and negative samples will affect the effectiveness of training. Awaiting your reply. Thanks!


glenn-jocher commented 3 years ago

@liumingjune I can't advise you on custom dataset training.

liumingjune commented 3 years ago

> @liumingjune I can't advise you on custom dataset training.

Thanks. I understand. I just want to find out whether the ratio of positive to negative samples (the number of images with labels vs. without) has any effect on training.

TianFuKang commented 3 years ago

> I want to ask a question: it shows `File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png'` when training in Google Colab.
>
> But everything is OK on my own laptop, so can it not train with .png?

Delete the `*.cache` file, then run `python3 train.py` again.

1311440131 commented 3 years ago

> I want to ask a question: it shows `File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png'` when training in Google Colab. But everything is OK on my own laptop, so can it not train with .png?
>
> Delete the `*.cache` file, then run `python3 train.py` again.

Thanks!!

PavanproJack commented 3 years ago

Hi, I am very much a beginner with YOLOv5, but I learned it quickly to detect mangoes in digital images with appreciable accuracy. Many thanks to @glenn-jocher for the great contribution.

I am stuck trying to figure out how to plot the mAP and loss vs. number-of-iterations curves, just as in YOLOv4. Also, after training the model, is there any possibility of getting the false positive and false negative counts?

Thanks in advance for the support.

Regards, Pavan

liumingjune commented 3 years ago

Hello, due to the high false alarm rate before, we added negative samples without labels to the training, but the increased number of images makes training slow. So I would like to ask: how does a negative sample without a label participate in training, and does it help training? I am anxious and looking forward to your reply.


glenn-jocher commented 3 years ago

@liumingjune as I said before, all images are treated equally during training, irrespective of the labels they may or may not have.

liumingjune commented 3 years ago

I understand. I want to know whether it is helpful for training and whether it participates in the loss calculation. Adding negative samples makes training slow.


glenn-jocher commented 3 years ago

@liumingjune yes, of course, every image participates in the loss computation equally. Objectness loss is evaluated for every point in the output grid, while GIoU and classification losses are evaluated only for positive object labels.
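Conceptually, a sketch of the weighting described above (names follow the hyp file; not the repo's exact code):

```python
# objectness (lobj) is computed over all grid cells; box/GIoU (lbox) and
# classification (lcls) losses only over cells matched to ground-truth objects
loss = h['obj'] * lobj + h['giou'] * lbox + h['cls'] * lcls
```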

lyuweiwang commented 3 years ago

> I want to ask a question: it shows `File "/content/yolov5/utils/datasets.py", line 344, in <listcomp> labels, shapes = zip(*[cache[x] for x in self.img_files]) KeyError: '../dota_data/images/val/P0800__1__552___0.png'` when training in Google Colab.
>
> But everything is OK on my own laptop, so can it not train with .png?

Same problem. Have you solved it?

glenn-jocher commented 3 years ago

@lyuweiwang all of the most common formats are supported for training (images) and inference (images and videos): https://github.com/ultralytics/yolov5/blob/ffe9eb42389038972d47eecd44c0f0dc9f2cf033/utils/datasets.py#L20-L22

synked16 commented 3 years ago

@glenn-jocher can we train yolov5 on a dataset which has varying image sizes? E.g. Img 1: 256x400, Img 2: 300x300

glenn-jocher commented 3 years ago

@justAyaan suggest you run the tutorial and observe the coco128 dataset.

Alex-afka commented 3 years ago

Hello, I have a question. I want to use hyperparameters in YOLOv5, but I don't know how to use them. I want to use mixup when training my data; how should I set the mixup value?

Thanks in advance for the support.

glenn-jocher commented 3 years ago

@Alex-afka two hyp files are available in data/. To use mixup, for example, set the mixup probability (0 < mixup < 1) in your hyp file: https://github.com/ultralytics/yolov5/blob/c8e51812a527eef8ad34fd3530b4942ad156b71e/data/hyp.scratch.yaml#L29
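For example, an illustrative edit to `data/hyp.scratch.yaml` (the value is an assumption; pass the file to training with `--hyp`):

```yaml
mixup: 0.5  # image mixup (probability); 0.0 disables mixup
```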

synked16 commented 3 years ago

@glenn-jocher I meant this:

> can we train yolov5 on a dataset which has varying image sizes? E.g. Img 1: 256x400, Img 2: 300x300

glenn-jocher commented 3 years ago

@justAyaan yes, you can train on datasets with any image sizes.

synked16 commented 3 years ago

@glenn-jocher so what should the specified `--img-size` be in this kind of scenario?

glenn-jocher commented 3 years ago

@justAyaan I can't advise you on custom training. Experiment if you want to understand the effects of a variable on your results.

Samjith888 commented 3 years ago

Thanks for the great work.

I'm training the model with the default `--img-size 640`, and I can see the autoanchor k-means analysis running and generating new anchors.

But when I train using a higher image size, `--img-size 1024`, I don't see the k-means custom anchor generation or a model with newly generated anchors. Does that mean there is no new k-means custom anchor generation at this higher `--img-size`?

glenn-jocher commented 3 years ago

@Samjith888 autoanchor only runs when the best possible recall is under threshold, so in your second example it's judged that at img size 1024 the best possible recall is sufficiently high to use the default anchors rather than computing new anchors.

Samjith888 commented 3 years ago

I have very small objects in the dataset; is there anything else I should add for small object detection?

Samjith888 commented 3 years ago

`--single-cls`, `--rect`, `--evolve` and `--hyp`

I couldn't find detailed information about the above flags; please explain.

ClassifierPower commented 3 years ago

When I use YOLOv5 to train a custom dataset, I have modified the nc value of my yaml file and the nc value of the yolov5s.yaml file, but it keeps returning `AssertionError: Label class 2 exceeds nc=2 in /content/Garbage_data/Garbage.yaml. Possible class labels are 0-1`. Hope to get your reply.

glenn-jocher commented 3 years ago

@mml438659613 as the message states, your dataset has two classes, and your labels can only show class 0 or 1. You have incorrect labels outside of this permitted range.
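A minimal sketch to locate the offending label files (the labels path and `nc` value are assumptions for your dataset):

```python
from glob import glob

nc = 2  # number of classes declared in your data yaml
for file in glob('/content/Garbage_data/labels/**/*.txt', recursive=True):
    for line in open(file):
        cls = int(line.split()[0])
        if not 0 <= cls < nc:
            print(f'{file}: invalid class {cls}')  # must be in [0, nc-1]
```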

constantinfite commented 3 years ago

Hello, I'm using YOLOv5 on a custom dataset of dimension 1352x760; should I use the `--rect` option? Also, when I train my model the precision and the recall stay at 0; do you have an explanation for this? My objects are very small (example image attached).


glenn-jocher commented 3 years ago

@constantinfite train using all default settings and check your jpgs as stated in the tutorial.

constantinfite commented 3 years ago

@glenn-jocher my command for training is `!python train.py --img 640 --batch 16 --epochs 5 --data ./data.yaml --cfg ./models/yolov5s.yaml --weights ''`

The organization of my folder and my data.yaml file (located in the yolov5 folder) are shown in the attached screenshots.

glenn-jocher commented 3 years ago

@constantinfite train at least 300 epochs, or 1000 epochs if you don't see results after 300.