ultralytics / yolov3

YOLOv3 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Every object detected as one label (person) after training on the 80 COCO classes #1375

Closed lixiaohui2020 closed 4 years ago

lixiaohui2020 commented 4 years ago

🚀 Feature

Hi, after training on the 80 COCO classes, I run detection and every object comes out with the same label, "person". I don't know why.
@glenn-jocher hope your help!

Motivation

I only amended the argument parser; the training code is as follows:

    parser = argparse.ArgumentParser()
    parser.add_argument('--epochs', type=int, default=300)  # 500200 batches at bs 16, 117263 COCO images = 273 epochs
    parser.add_argument('--batch-size', type=int, default=16)  # effective bs = batch_size * accumulate = 16 * 4 = 64
    parser.add_argument('--cfg', type=str, default='cfg/yolo-tiny_v2_c80.cfg', help='*.cfg path')
    parser.add_argument('--data', type=str, default='cfg/coco.data', help='*.data path')
    parser.add_argument('--multi-scale', action='store_true', help='adjust (67%% - 150%%) img_size every 10 batches')
    parser.add_argument('--img-size', nargs='+', type=int, default=[416, 416], help='[min_train, max-train, test]')
    parser.add_argument('--rect', action='store_true', help='rectangular training')
    parser.add_argument('--resume', action='store_true', help='resume training from last.pt')
    parser.add_argument('--nosave', action='store_true', help='only save final checkpoint')
    parser.add_argument('--notest', action='store_true', help='only test final epoch')  # store_false here inverts the flag
    parser.add_argument('--evolve', action='store_true', help='evolve hyperparameters')
    parser.add_argument('--bucket', type=str, default='', help='gsutil bucket')
    parser.add_argument('--cache-images', action='store_true', help='cache images for faster training')
    parser.add_argument('--pretrained_cfg', type=str, default='cfg/yolo-tiny.cfg', help='cfg file')
    parser.add_argument('--weights', type=str, default='weights/yolov4-tiny.weights', help='initial weights path')
    parser.add_argument('--model_flag', type=str, default='valid', help='model flag')
    parser.add_argument('--name', default='yolo-tiny_v2_c80',
                        help='renames results.txt to results_name.txt if supplied')
    parser.add_argument('--device', default='4', help='device id (i.e. 0 or 0,1 or cpu)')
    parser.add_argument('--adam', action='store_true', help='use adam optimizer')
    parser.add_argument('--single-cls', action='store_true', help='train as single-class dataset')
    parser.add_argument('--freeze-layers', action='store_true', help='Freeze non-output layers')
    parser.add_argument("--checkpoints_path", type=str, default='checkpoints/yolo-tiny_v2_c80')

data cfg:

classes= 80
train=data/coco/train2017.txt
train_label=None
valid=data/coco/val2017.txt
valid_label=None
names=data/coco.names
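
A quick sanity check on this cfg is to parse it and confirm that `classes=` agrees with the number of entries in the `names` file. This is a minimal sketch of a darknet-style key=value parser, not the repo's exact `parse_data_cfg`:

```python
def parse_data_cfg(lines):
    """Parse darknet-style key=value lines into a dict (simplified sketch)."""
    opts = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith('#') and '=' in line:
            key, val = line.split('=', 1)
            opts[key.strip()] = val.strip()
    return opts

# The cfg from this issue (paths taken verbatim from the post above)
cfg = parse_data_cfg([
    "classes= 80",
    "train=data/coco/train2017.txt",
    "valid=data/coco/val2017.txt",
    "names=data/coco.names",
])
print(int(cfg["classes"]))  # 80
```

In practice you would then compare `int(cfg["classes"])` against `len(open(cfg["names"]).read().splitlines())`; a mismatch between the cfg and the names file is a common cause of wrong labels at detect time.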

detect codes:

    parser = argparse.ArgumentParser()
    parser.add_argument('--cfg', type=str, default='cfg/yolo-tiny_v2_c80.cfg', help='*.cfg path')
    parser.add_argument('--names', type=str, default='data/coco.names',
                        help='*.names path')  
    parser.add_argument('--weights', type=str, default='checkpoints/yolo-tiny_v2_c80/best.pt',
                        help='weights path') 
    parser.add_argument('--source', type=str, default='data/samples', help='source')  # input file/folder, 0 for webcam
    parser.add_argument('--output', type=str, default='output', help='output folder')  # output folder
    parser.add_argument('--multi_label', action='store_true', help='multi label or single class')
    parser.add_argument('--img-size', type=int, default=416, help='inference size (pixels)')
    parser.add_argument('--conf-thres', type=float, default=0.3, help='object confidence threshold')
    parser.add_argument('--iou-thres', type=float, default=0.6, help='IOU threshold for NMS')
    parser.add_argument('--fourcc', type=str, default='mp4v', help='output video codec (verify ffmpeg support)')
    parser.add_argument('--half', action='store_true', help='half precision FP16 inference')
    parser.add_argument('--device', default='0', help='device id (i.e. 0 or 0,1) or cpu')
    parser.add_argument('--view-img', action='store_true', help='display results')
    parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
    parser.add_argument('--classes', nargs='+', type=int, help='filter by class')
    parser.add_argument('--agnostic-nms', action='store_true', help='class-agnostic NMS')
    parser.add_argument('--augment', action='store_true', help='augmented inference')

detect results:

[screenshots: detections on COCO images 000000006012 and 000000492878, every box labeled "person"]

github-actions[bot] commented 4 years ago

Hello @lixiaohui2020, thank you for your interest in our work! Ultralytics has open-sourced YOLOv5 at https://github.com/ultralytics/yolov5, featuring faster, lighter and more accurate object detection. YOLOv5 is recommended for all new projects.

To continue with this repo, please visit our Custom Training Tutorial to get started, and see our Google Colab Notebook, Docker Image, and GCP Quickstart Guide for example environments.

If this is a bug report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom model or data training question, please note that Ultralytics does not provide free personal support. As a leader in vision ML and AI, we do offer professional consulting, from simple expert advice up to delivery of fully customized, end-to-end production solutions for our clients.

For more information please visit https://www.ultralytics.com.

lixiaohui2020 commented 4 years ago

@glenn-jocher I found the reason: I was loading a cached label file (label.npy) in which every class label was zero, so every detection mapped to class 0 (person).
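
A stale cache like this can be caught before training with a quick check over the cached arrays. This is a hedged sketch: it assumes the cache holds one `(n, 5)` array per image with columns `[class, x, y, w, h]`, mirroring the label format described in this issue, and `label.npy` would be loaded with `np.load(..., allow_pickle=True)`:

```python
import numpy as np

def cache_is_stale(labels):
    """Return True if every cached label row has class index 0.

    `labels`: list of per-image (n, 5) arrays, columns [class, x, y, w, h]
    (format assumed from the issue; adjust to your cache layout).
    """
    classes = np.concatenate([l[:, 0] for l in labels if len(l)])
    return bool(np.all(classes == 0))

# A cache where every class index is 0 -> every box drawn as 'person'
stale = [np.array([[0, .5, .5, .2, .2]]), np.array([[0, .1, .1, .3, .3]])]
ok    = [np.array([[17, .5, .5, .2, .2]]), np.array([[0, .1, .1, .3, .3]])]
print(cache_is_stale(stale))  # True
print(cache_is_stale(ok))     # False
```

If the check fires, deleting the `.npy` cache so it is rebuilt from the `.txt` label files is the straightforward fix.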

glenn-jocher commented 4 years ago

@lixiaohui2020 you should try YOLOv5; it has better data loading and may solve this problem.

github-actions[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.