Megvii-BaseDetection / YOLOX

YOLOX is a high-performance anchor-free YOLO, exceeding yolov3~v5 with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO supported. Documentation: https://yolox.readthedocs.io/
Apache License 2.0

The mAP is always 0 when I train on my custom data in coco format? #504

Closed Hezhexi2002 closed 3 years ago

Hezhexi2002 commented 3 years ago

I tried to train on my custom data in COCO format, but whether I train for 10 epochs or 300 epochs I still get mAP = 0. My custom data is originally in YOLOv5 format, so I used https://github.com/RapidAI/YOLO2COCO to convert it to COCO format, and I also checked the labels, which are correct. Besides, I have already changed the classes in yolox/data/datasets/coco_classes.py to my own classes. However, the result disappointed me again. I then tried the dataset https://drive.google.com/file/d/16N3u36ycNd70m23IM7vMuRQXejAJY9Fs/view?usp=sharing that you suggest in train_custom_data.md, but for well-known reasons it is very hard to download from within China, so I also suggest providing a BaiduYun mirror, which would be friendlier for students like us.

Here is my train_log.txt:

2021-08-15 21:54:11.090 | INFO | yolox.core.trainer:before_train:126 - args: Namespace(experiment_name='606', name=None, dist_backend='nccl', dist_url=None, batch_size=8, devices=0, exp_file='exps/example/custom/606.py', resume=False, ckpt='yolox_s.pth', start_epoch=None, num_machines=1, machine_rank=0, fp16=True, occupy=False, opts=[])
2021-08-15 21:54:11.091 | INFO | yolox.core.trainer:before_train:127 - exp value:

| keys | values |
| --- | --- |
| seed | None |
| output_dir | './YOLOX_outputs' |
| print_interval | 10 |
| eval_interval | 1 |
| num_classes | 8 |
| depth | 0.33 |
| width | 0.5 |
| data_num_workers | 0 |
| input_size | (640, 640) |
| random_size | (14, 26) |
| data_dir | 'datasets/coco128' |
| train_ann | 'instances_train2017.json' |
| val_ann | 'instances_val2017.json' |
| degrees | 10.0 |
| translate | 0.1 |
| scale | (0.1, 2) |
| mscale | (0.8, 1.6) |
| shear | 2.0 |
| perspective | 0.0 |
| enable_mixup | True |
| warmup_epochs | 5 |
| max_epoch | 10 |
| warmup_lr | 0 |
| basic_lr_per_img | 0.00015625 |
| scheduler | 'yoloxwarmcos' |
| no_aug_epochs | 15 |
| min_lr_ratio | 0.05 |
| ema | True |
| weight_decay | 0.0005 |
| momentum | 0.9 |
| exp_name | '606' |
| test_size | (640, 640) |
| test_conf | 0.01 |
| nmsthre | 0.65 |

2021-08-15 21:54:11.239 | INFO | yolox.core.trainer:before_train:132 - Model Summary: Params: 8.94M, Gflops: 26.65
2021-08-15 21:54:13.208 | INFO | apex.amp.frontend:initialize:328 - Selected optimization level O1: Insert automatic casts around Pytorch functions and Tensor methods.
2021-08-15 21:54:13.208 | INFO | apex.amp.frontend:initialize:329 - Defaults for this optimization level are:
2021-08-15 21:54:13.209 | INFO | apex.amp.frontend:initialize:331 - enabled : True
2021-08-15 21:54:13.209 | INFO | apex.amp.frontend:initialize:331 - opt_level : O1
2021-08-15 21:54:13.209 | INFO | apex.amp.frontend:initialize:331 - cast_model_type : None
2021-08-15 21:54:13.210 | INFO | apex.amp.frontend:initialize:331 - patch_torch_functions : True
2021-08-15 21:54:13.210 | INFO | apex.amp.frontend:initialize:331 - keep_batchnorm_fp32 : None
2021-08-15 21:54:13.210 | INFO | apex.amp.frontend:initialize:331 - master_weights : None
2021-08-15 21:54:13.210 | INFO | apex.amp.frontend:initialize:331 - loss_scale : dynamic
2021-08-15 21:54:13.211 | INFO | apex.amp.frontend:initialize:336 - Processing user overrides (additional kwargs that are not None)...
2021-08-15 21:54:13.211 | INFO | apex.amp.frontend:initialize:354 - After processing overrides, optimization options are:
2021-08-15 21:54:13.211 | INFO | apex.amp.frontend:initialize:356 - enabled : True
2021-08-15 21:54:13.212 | INFO | apex.amp.frontend:initialize:356 - opt_level : O1
2021-08-15 21:54:13.213 | INFO | apex.amp.frontend:initialize:356 - cast_model_type : None
2021-08-15 21:54:13.214 | INFO | apex.amp.frontend:initialize:356 - patch_torch_functions : True
2021-08-15 21:54:13.215 | INFO | apex.amp.frontend:initialize:356 - keep_batchnorm_fp32 : None
2021-08-15 21:54:13.216 | INFO | apex.amp.frontend:initialize:356 - master_weights : None
2021-08-15 21:54:13.217 | INFO | apex.amp.frontend:initialize:356 - loss_scale : dynamic
2021-08-15 21:54:13.221 | INFO | apex.amp.scaler:init:64 - Warning: multi_tensor_applier fused unscale kernel is unavailable, possibly because apex was installed without --cuda_ext --cpp_ext. Using Python fallback. Original ImportError was: ModuleNotFoundError("No module named 'amp_C'")
2021-08-15 21:54:13.223 | INFO | yolox.core.trainer:resume_train:292 - loading checkpoint for fine tuning
2021-08-15 21:54:13.351 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.0.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.0.weight in model is torch.Size([8, 128, 1, 1]).
2021-08-15 21:54:13.352 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.0.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.0.bias in model is torch.Size([8]).
2021-08-15 21:54:13.352 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.1.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.1.weight in model is torch.Size([8, 128, 1, 1]).
2021-08-15 21:54:13.352 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.1.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.1.bias in model is torch.Size([8]).
2021-08-15 21:54:13.352 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.2.weight in checkpoint is torch.Size([80, 128, 1, 1]), while shape of head.cls_preds.2.weight in model is torch.Size([8, 128, 1, 1]). 2021-08-15 21:54:13.353 | WARNING | yolox.utils.checkpoint:load_ckpt:24 - Shape of head.cls_preds.2.bias in checkpoint is torch.Size([80]), while shape of head.cls_preds.2.bias in model is torch.Size([8]). 2021-08-15 21:54:13.377 | INFO | yolox.data.datasets.coco:init:43 - loading annotations into memory... 2021-08-15 21:54:13.405 | INFO | yolox.data.datasets.coco:init:43 - Done (t=0.03s) 2021-08-15 21:54:13.406 | INFO | pycocotools.coco:init:89 - creating index... 2021-08-15 21:54:13.407 | INFO | pycocotools.coco:init:89 - index created! 2021-08-15 21:54:13.433 | INFO | yolox.core.trainer:before_train:153 - init prefetcher, this might take one minute or less... 2021-08-15 21:54:13.746 | INFO | yolox.data.datasets.coco:init:43 - loading annotations into memory... 2021-08-15 21:54:13.749 | INFO | yolox.data.datasets.coco:init:43 - Done (t=0.00s) 2021-08-15 21:54:13.749 | INFO | pycocotools.coco:init:89 - creating index... 2021-08-15 21:54:13.749 | INFO | pycocotools.coco:init:89 - index created! 2021-08-15 21:54:13.759 | INFO | yolox.core.trainer:before_train:183 - Training start... 2021-08-15 21:54:13.762 | INFO | yolox.core.trainer:before_train:184 - YOLOX( (backbone): YOLOPAFPN( (backbone): CSPDarknet( (stem): Focus( (conv): BaseConv( (conv): Conv2d(12, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (dark2): Sequential( (0): BaseConv( (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): CSPLayer( (conv1): BaseConv( (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(64, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(32, 32, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(32, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) ) (dark3): Sequential( (0): BaseConv( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): CSPLayer( (conv1): BaseConv( (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 64, kernel_size=(1, 1), stride=(1, 
1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (1): Bottleneck( (conv1): BaseConv( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (2): Bottleneck( (conv1): BaseConv( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) ) (dark4): Sequential( (0): BaseConv( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): CSPLayer( (conv1): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (1): Bottleneck( (conv1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (2): Bottleneck( (conv1): BaseConv( (conv): Conv2d(128, 128, 
kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) ) (dark5): Sequential( (0): BaseConv( (conv): Conv2d(256, 512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): SPPBottleneck( (conv1): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): ModuleList( (0): MaxPool2d(kernel_size=5, stride=1, padding=2, dilation=1, ceil_mode=False) (1): MaxPool2d(kernel_size=9, stride=1, padding=4, dilation=1, ceil_mode=False) (2): MaxPool2d(kernel_size=13, stride=1, padding=6, dilation=1, ceil_mode=False) ) (conv2): BaseConv( (conv): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (2): CSPLayer( (conv1): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) ) ) (upsample): Upsample(scale_factor=2.0, mode=nearest) (lateral_conv0): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (C3_p4): CSPLayer( (conv1): BaseConv( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, 
track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) (reduce_conv1): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (C3_p3): CSPLayer( (conv1): BaseConv( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(64, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) (bu_conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (C3_n3): CSPLayer( (conv1): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) (bu_conv1): BaseConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (C3_n4): CSPLayer( (conv1): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, 
track_running_stats=True) (act): SiLU(inplace=True) ) (conv3): BaseConv( (conv): Conv2d(512, 512, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(512, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (m): Sequential( (0): Bottleneck( (conv1): BaseConv( (conv): Conv2d(256, 256, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (conv2): BaseConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(256, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) ) ) (head): YOLOXHead( (cls_convs): ModuleList( (0): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (1): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (2): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) (reg_convs): ModuleList( (0): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (1): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (2): Sequential( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) ) 
(cls_preds): ModuleList( (0): Conv2d(128, 8, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(128, 8, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(128, 8, kernel_size=(1, 1), stride=(1, 1)) ) (reg_preds): ModuleList( (0): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(128, 4, kernel_size=(1, 1), stride=(1, 1)) ) (obj_preds): ModuleList( (0): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1)) (1): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1)) (2): Conv2d(128, 1, kernel_size=(1, 1), stride=(1, 1)) ) (stems): ModuleList( (0): BaseConv( (conv): Conv2d(128, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (1): BaseConv( (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) (2): BaseConv( (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False) (bn): BatchNorm2d(128, eps=0.001, momentum=0.03, affine=True, track_running_stats=True) (act): SiLU(inplace=True) ) ) (l1_loss): L1Loss() (bcewithlog_loss): BCEWithLogitsLoss() (iou_loss): IOUloss() ) ) 2021-08-15 21:54:13.766 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch1 2021-08-15 21:54:13.766 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:54:13.767 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:54:16.905 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 32768.0 2021-08-15 21:54:17.337 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 16384.0 2021-08-15 21:54:17.771 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 8192.0 2021-08-15 21:54:18.230 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 4096.0 2021-08-15 21:54:18.684 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 2048.0 2021-08-15 21:54:19.135 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. Skipping step, loss scaler 0 reducing loss scale to 1024.0 2021-08-15 21:54:20.592 | INFO | apex.amp.handle:skip_step:138 - Gradient overflow. 
Skipping step, loss scaler 0 reducing loss scale to 512.0 2021-08-15 21:54:21.105 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 10/88, mem: 3017Mb, iter_time: 0.733s, data_time: 0.208s, total_loss: 21.5, iou_loss: 4.2, l1_loss: 3.2, conf_loss: 12.5, cls_loss: 1.5, lr: 6.457e-07, size: 640, ETA: 0:10:38 2021-08-15 21:54:25.868 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 20/88, mem: 3017Mb, iter_time: 0.476s, data_time: 0.201s, total_loss: 18.7, iou_loss: 4.2, l1_loss: 2.9, conf_loss: 10.2, cls_loss: 1.5, lr: 2.583e-06, size: 640, ETA: 0:08:39 2021-08-15 21:54:33.831 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 30/88, mem: 3881Mb, iter_time: 0.796s, data_time: 0.254s, total_loss: 16.9, iou_loss: 4.1, l1_loss: 3.2, conf_loss: 8.0, cls_loss: 1.6, lr: 5.811e-06, size: 704, ETA: 0:09:28 2021-08-15 21:54:41.413 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 40/88, mem: 3881Mb, iter_time: 0.758s, data_time: 0.219s, total_loss: 14.7, iou_loss: 3.9, l1_loss: 2.7, conf_loss: 6.7, cls_loss: 1.3, lr: 1.033e-05, size: 672, ETA: 0:09:40 2021-08-15 21:54:46.458 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 50/88, mem: 3881Mb, iter_time: 0.504s, data_time: 0.219s, total_loss: 13.2, iou_loss: 3.5, l1_loss: 2.6, conf_loss: 6.0, cls_loss: 1.2, lr: 1.614e-05, size: 672, ETA: 0:09:02 2021-08-15 21:54:51.804 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 60/88, mem: 3881Mb, iter_time: 0.534s, data_time: 0.146s, total_loss: 12.2, iou_loss: 3.7, l1_loss: 2.3, conf_loss: 5.3, cls_loss: 0.9, lr: 2.324e-05, size: 512, ETA: 0:08:39 2021-08-15 21:54:55.538 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 70/88, mem: 3881Mb, iter_time: 0.373s, data_time: 0.143s, total_loss: 11.0, iou_loss: 3.5, l1_loss: 1.9, conf_loss: 4.8, cls_loss: 0.8, lr: 3.164e-05, size: 512, ETA: 0:08:02 2021-08-15 21:55:02.818 | INFO | yolox.core.trainer:after_iter:242 - epoch: 1/10, iter: 80/88, mem: 3881Mb, iter_time: 0.728s, data_time: 0.184s, total_loss: 8.8, iou_loss: 2.8, l1_loss: 1.4, conf_loss: 3.9, cls_loss: 0.7, lr: 4.132e-05, size: 576, ETA: 0:08:10 2021-08-15 21:55:10.084 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:55:17.377 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:55:17.451 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:55:17.473 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.02s) 2021-08-15 21:55:17.474 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:55:17.475 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:55:17.546 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.26 ms, Average NMS time: 1.11 ms, Average inference time: 7.37 ms
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:55:17.547 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:55:17.696 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch2 2021-08-15 21:55:17.696 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:55:17.697 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:55:22.867 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 10/88, mem: 3881Mb, iter_time: 0.517s, data_time: 0.223s, total_loss: 7.6, iou_loss: 2.6, l1_loss: 1.3, conf_loss: 3.1, cls_loss: 0.7, lr: 6.201e-05, size: 640, ETA: 0:08:10 2021-08-15 21:55:26.814 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 20/88, mem: 3881Mb, iter_time: 0.394s, data_time: 0.143s, total_loss: 7.0, iou_loss: 2.6, l1_loss: 1.2, conf_loss: 2.5, cls_loss: 0.7, lr: 7.531e-05, size: 512, ETA: 0:07:47 2021-08-15 21:55:32.197 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 30/88, mem: 3881Mb, iter_time: 0.538s, data_time: 0.140s, total_loss: 5.2, iou_loss: 2.0, l1_loss: 0.8, conf_loss: 1.9, cls_loss: 0.6, lr: 8.990e-05, size: 544, ETA: 0:07:36 2021-08-15 21:55:40.492 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 40/88, mem: 3881Mb, iter_time: 0.829s, data_time: 0.271s, total_loss: 6.5, iou_loss: 2.1, l1_loss: 1.2, conf_loss: 2.6, cls_loss: 0.6, lr: 1.058e-04, size: 800, ETA: 0:07:44 2021-08-15 21:55:45.579 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 50/88, mem: 3881Mb, iter_time: 0.508s, data_time: 0.204s, total_loss: 5.5, iou_loss: 2.0, l1_loss: 0.9, conf_loss: 2.1, cls_loss: 0.5, lr: 1.230e-04, size: 640, ETA: 0:07:32 2021-08-15 21:55:53.270 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 60/88, mem: 3881Mb, iter_time: 0.769s, data_time: 0.252s, total_loss: 4.2, iou_loss: 1.6, l1_loss: 0.8, conf_loss: 1.5, cls_loss: 0.5, lr: 1.414e-04, size: 736, ETA: 0:07:34 2021-08-15 21:55:57.769 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 70/88, mem: 3881Mb, iter_time: 0.449s, data_time: 0.175s, total_loss: 5.2, iou_loss: 2.2, l1_loss: 0.9, conf_loss: 1.5, cls_loss: 0.6, lr: 1.612e-04, size: 544, ETA: 0:07:20 2021-08-15 21:56:01.607 | INFO | yolox.core.trainer:after_iter:242 - epoch: 2/10, iter: 80/88, mem: 3881Mb, iter_time: 0.383s, data_time: 0.136s, total_loss: 3.3, iou_loss: 1.3, l1_loss: 0.5, conf_loss: 1.0, cls_loss: 0.4, lr: 1.822e-04, size: 512, ETA: 0:07:04 2021-08-15 21:56:04.585 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:56:10.799 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:56:10.809 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:56:10.818 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 21:56:10.819 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:56:10.819 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:56:10.856 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.30 ms, Average NMS time: 1.09 ms, Average inference time: 7.39 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:56:10.856 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:56:11.022 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch3 2021-08-15 21:56:11.023 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:56:11.023 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:56:16.385 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 10/88, mem: 3881Mb, iter_time: 0.536s, data_time: 0.226s, total_loss: 7.6, iou_loss: 2.2, l1_loss: 1.1, conf_loss: 3.6, cls_loss: 0.7, lr: 2.234e-04, size: 800, ETA: 0:06:44 2021-08-15 21:56:21.538 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 20/88, mem: 3881Mb, iter_time: 0.514s, data_time: 0.203s, total_loss: 5.3, iou_loss: 2.3, l1_loss: 1.0, conf_loss: 1.4, cls_loss: 0.6, lr: 2.480e-04, size: 576, ETA: 0:06:36 2021-08-15 21:56:26.631 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 30/88, mem: 3881Mb, iter_time: 0.509s, data_time: 0.218s, total_loss: 6.6, iou_loss: 2.5, l1_loss: 1.5, conf_loss: 1.9, cls_loss: 0.7, lr: 2.740e-04, size: 736, ETA: 0:06:28 2021-08-15 21:56:31.465 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 40/88, mem: 3881Mb, iter_time: 0.483s, data_time: 0.191s, total_loss: 5.9, iou_loss: 2.5, l1_loss: 1.0, conf_loss: 1.7, cls_loss: 0.8, lr: 3.012e-04, size: 544, ETA: 0:06:19 2021-08-15 21:56:35.555 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 50/88, mem: 3881Mb, iter_time: 0.408s, data_time: 0.150s, total_loss: 3.5, iou_loss: 1.5, l1_loss: 0.5, conf_loss: 1.1, cls_loss: 0.5, lr: 3.298e-04, size: 544, ETA: 0:06:09 2021-08-15 21:56:40.253 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 60/88, mem: 3881Mb, iter_time: 0.469s, data_time: 0.191s, total_loss: 4.6, iou_loss: 2.1, l1_loss: 1.0, conf_loss: 0.8, cls_loss: 0.7, lr: 3.596e-04, size: 640, ETA: 0:06:01 2021-08-15 21:56:44.927 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 70/88, mem: 3881Mb, iter_time: 0.467s, data_time: 0.189s, total_loss: 3.2, iou_loss: 1.6, l1_loss: 0.6, conf_loss: 0.4, cls_loss: 0.5, lr: 3.907e-04, size: 576, ETA: 0:05:53 2021-08-15 21:56:49.526 | INFO | yolox.core.trainer:after_iter:242 - epoch: 3/10, iter: 80/88, mem: 3881Mb, iter_time: 0.459s, data_time: 0.126s, total_loss: 9.3, iou_loss: 3.6, l1_loss: 2.1, conf_loss: 2.8, cls_loss: 0.8, lr: 4.231e-04, size: 448, ETA: 0:05:45 2021-08-15 21:56:52.664 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:56:58.751 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:56:58.770 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:56:58.779 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 21:56:58.779 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:56:58.780 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:56:58.851 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.29 ms, Average NMS time: 1.10 ms, Average inference time: 7.39 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:56:58.852 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:56:59.009 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch4 2021-08-15 21:56:59.010 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:56:59.010 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:57:03.834 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 10/88, mem: 3881Mb, iter_time: 0.482s, data_time: 0.194s, total_loss: 3.5, iou_loss: 1.5, l1_loss: 0.6, conf_loss: 0.9, cls_loss: 0.5, lr: 4.847e-04, size: 640, ETA: 0:05:30 2021-08-15 21:57:08.106 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 20/88, mem: 3881Mb, iter_time: 0.427s, data_time: 0.159s, total_loss: 11.3, iou_loss: 4.4, l1_loss: 3.9, conf_loss: 2.2, cls_loss: 0.8, lr: 5.208e-04, size: 448, ETA: 0:05:22 2021-08-15 21:57:12.441 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 30/88, mem: 3881Mb, iter_time: 0.433s, data_time: 0.108s, total_loss: 5.2, iou_loss: 2.3, l1_loss: 0.9, conf_loss: 1.5, cls_loss: 0.6, lr: 5.581e-04, size: 480, ETA: 0:05:15 2021-08-15 21:57:16.038 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 40/88, mem: 3881Mb, iter_time: 0.359s, data_time: 0.125s, total_loss: 4.3, iou_loss: 2.0, l1_loss: 0.8, conf_loss: 0.8, cls_loss: 0.7, lr: 5.967e-04, size: 544, ETA: 0:05:06 2021-08-15 21:57:19.877 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 50/88, mem: 3881Mb, iter_time: 0.384s, data_time: 0.136s, total_loss: 2.7, iou_loss: 1.3, l1_loss: 0.4, conf_loss: 0.6, cls_loss: 0.4, lr: 6.366e-04, size: 512, ETA: 0:04:58 2021-08-15 21:57:24.603 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 60/88, mem: 3881Mb, iter_time: 0.472s, data_time: 0.196s, total_loss: 11.9, iou_loss: 3.5, l1_loss: 2.2, conf_loss: 5.3, cls_loss: 0.9, lr: 6.778e-04, size: 800, ETA: 0:04:52 2021-08-15 21:57:30.386 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 70/88, mem: 3881Mb, iter_time: 0.578s, data_time: 0.243s, total_loss: 7.8, iou_loss: 2.7, l1_loss: 1.4, conf_loss: 3.1, cls_loss: 0.7, lr: 7.203e-04, size: 576, ETA: 0:04:47 2021-08-15 21:57:34.968 | INFO | yolox.core.trainer:after_iter:242 - epoch: 4/10, iter: 80/88, mem: 3881Mb, iter_time: 0.457s, data_time: 0.187s, total_loss: 5.8, iou_loss: 2.3, l1_loss: 1.2, conf_loss: 1.8, cls_loss: 0.6, lr: 7.640e-04, size: 704, ETA: 0:04:41 2021-08-15 21:57:38.888 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:57:45.180 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:57:45.223 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:57:45.241 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.02s) 2021-08-15 21:57:45.241 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:57:45.242 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:57:45.331 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.29 ms, Average NMS time: 1.14 ms, Average inference time: 7.43 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:57:45.332 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:57:45.498 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch5 2021-08-15 21:57:45.498 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:57:45.498 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:57:50.643 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 10/88, mem: 3881Mb, iter_time: 0.513s, data_time: 0.117s, total_loss: 12.8, iou_loss: 3.9, l1_loss: 2.5, conf_loss: 5.6, cls_loss: 0.8, lr: 8.461e-04, size: 608, ETA: 0:04:31 2021-08-15 21:57:58.209 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 20/88, mem: 3881Mb, iter_time: 0.756s, data_time: 0.216s, total_loss: 16.1, iou_loss: 4.5, l1_loss: 4.9, conf_loss: 5.5, cls_loss: 1.2, lr: 8.935e-04, size: 832, ETA: 0:04:29 2021-08-15 21:58:04.978 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 30/88, mem: 3881Mb, iter_time: 0.676s, data_time: 0.296s, total_loss: 20.1, iou_loss: 4.8, l1_loss: 5.6, conf_loss: 8.5, cls_loss: 1.2, lr: 9.422e-04, size: 448, ETA: 0:04:25 2021-08-15 21:58:08.970 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 40/88, mem: 3881Mb, iter_time: 0.398s, data_time: 0.132s, total_loss: 8.5, iou_loss: 3.6, l1_loss: 2.0, conf_loss: 2.1, cls_loss: 0.8, lr: 9.921e-04, size: 544, ETA: 0:04:18 2021-08-15 21:58:13.747 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 50/88, mem: 3881Mb, iter_time: 0.477s, data_time: 0.198s, total_loss: 18.3, iou_loss: 4.0, l1_loss: 2.4, conf_loss: 10.6, cls_loss: 1.3, lr: 1.043e-03, size: 832, ETA: 0:04:12 2021-08-15 21:58:20.429 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 60/88, mem: 3881Mb, iter_time: 0.667s, data_time: 0.292s, total_loss: 11.5, iou_loss: 4.4, l1_loss: 2.9, conf_loss: 3.1, cls_loss: 1.1, lr: 1.096e-03, size: 544, ETA: 0:04:09 2021-08-15 21:58:24.532 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 70/88, mem: 3881Mb, iter_time: 0.410s, data_time: 0.155s, total_loss: 7.2, iou_loss: 3.2, l1_loss: 1.0, conf_loss: 2.1, cls_loss: 0.7, lr: 1.150e-03, size: 608, ETA: 0:04:02 2021-08-15 21:58:28.869 | INFO | yolox.core.trainer:after_iter:242 - epoch: 5/10, iter: 80/88, mem: 3881Mb, iter_time: 0.433s, data_time: 0.164s, total_loss: 11.0, iou_loss: 4.2, l1_loss: 1.8, conf_loss: 4.1, cls_loss: 1.0, lr: 1.205e-03, size: 480, ETA: 0:03:56 2021-08-15 21:58:31.620 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:58:38.010 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:58:38.093 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:58:38.158 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.06s) 2021-08-15 21:58:38.158 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:58:38.160 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:58:38.224 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.54 ms, Average NMS time: 1.14 ms, Average inference time: 7.68 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:58:38.225 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:58:38.376 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch6 2021-08-15 21:58:38.376 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:58:38.377 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:58:44.452 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 10/88, mem: 3881Mb, iter_time: 0.607s, data_time: 0.277s, total_loss: 10.6, iou_loss: 4.4, l1_loss: 2.5, conf_loss: 3.0, cls_loss: 0.7, lr: 6.250e-05, size: 768, ETA: 0:03:46 2021-08-15 21:58:47.853 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 20/88, mem: 3881Mb, iter_time: 0.340s, data_time: 0.101s, total_loss: 5.3, iou_loss: 2.6, l1_loss: 0.6, conf_loss: 1.4, cls_loss: 0.7, lr: 6.250e-05, size: 448, ETA: 0:03:39 2021-08-15 21:58:54.795 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 30/88, mem: 3881Mb, iter_time: 0.693s, data_time: 0.325s, total_loss: 12.1, iou_loss: 4.3, l1_loss: 2.6, conf_loss: 4.5, cls_loss: 0.7, lr: 6.250e-05, size: 832, ETA: 0:03:35 2021-08-15 21:58:58.460 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 40/88, mem: 3881Mb, iter_time: 0.366s, data_time: 0.117s, total_loss: 5.1, iou_loss: 2.0, l1_loss: 0.4, conf_loss: 2.1, cls_loss: 0.6, lr: 6.250e-05, size: 480, ETA: 0:03:28 2021-08-15 21:59:04.487 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 50/88, mem: 3881Mb, iter_time: 0.602s, data_time: 0.278s, total_loss: 9.1, iou_loss: 3.9, l1_loss: 1.8, conf_loss: 2.7, cls_loss: 0.7, lr: 6.250e-05, size: 768, ETA: 0:03:24 2021-08-15 21:59:10.483 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 60/88, mem: 3881Mb, iter_time: 0.599s, data_time: 0.269s, total_loss: 8.2, iou_loss: 3.2, l1_loss: 1.3, conf_loss: 3.0, cls_loss: 0.7, lr: 6.250e-05, size: 768, ETA: 0:03:19 2021-08-15 21:59:15.368 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 70/88, mem: 3881Mb, iter_time: 0.488s, data_time: 0.196s, total_loss: 4.9, iou_loss: 2.3, l1_loss: 0.6, conf_loss: 1.4, cls_loss: 0.6, lr: 6.250e-05, size: 640, ETA: 0:03:14 2021-08-15 21:59:21.506 | INFO | yolox.core.trainer:after_iter:242 - epoch: 6/10, iter: 80/88, mem: 3881Mb, iter_time: 0.613s, data_time: 0.283s, total_loss: 5.8, iou_loss: 2.4, l1_loss: 0.8, conf_loss: 2.0, cls_loss: 0.6, lr: 6.250e-05, size: 736, ETA: 0:03:09 2021-08-15 21:59:26.364 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:59:33.032 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 21:59:33.053 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 21:59:33.080 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.03s) 2021-08-15 21:59:33.080 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 21:59:33.081 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 21:59:33.134 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.32 ms, Average NMS time: 1.17 ms, Average inference time: 7.49 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 21:59:33.134 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 21:59:33.312 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch7 2021-08-15 21:59:33.313 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 21:59:33.313 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 21:59:38.130 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 10/88, mem: 3881Mb, iter_time: 0.481s, data_time: 0.192s, total_loss: 6.9, iou_loss: 3.5, l1_loss: 1.4, conf_loss: 1.3, cls_loss: 0.8, lr: 6.250e-05, size: 576, ETA: 0:03:00 2021-08-15 21:59:43.472 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 20/88, mem: 3881Mb, iter_time: 0.533s, data_time: 0.231s, total_loss: 4.0, iou_loss: 1.7, l1_loss: 0.5, conf_loss: 1.2, cls_loss: 0.6, lr: 6.250e-05, size: 704, ETA: 0:02:54 2021-08-15 21:59:48.087 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 30/88, mem: 3881Mb, iter_time: 0.461s, data_time: 0.181s, total_loss: 4.7, iou_loss: 2.4, l1_loss: 0.7, conf_loss: 1.1, cls_loss: 0.6, lr: 6.250e-05, size: 576, ETA: 0:02:49 2021-08-15 21:59:53.363 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 40/88, mem: 3881Mb, iter_time: 0.527s, data_time: 0.228s, total_loss: 5.8, iou_loss: 2.7, l1_loss: 0.9, conf_loss: 1.6, cls_loss: 0.6, lr: 6.250e-05, size: 736, ETA: 0:02:44 2021-08-15 21:59:58.929 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 50/88, mem: 3881Mb, iter_time: 0.556s, data_time: 0.244s, total_loss: 4.1, iou_loss: 1.7, l1_loss: 0.5, conf_loss: 1.3, cls_loss: 0.5, lr: 6.250e-05, size: 736, ETA: 0:02:38 2021-08-15 22:00:05.210 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 60/88, mem: 3881Mb, iter_time: 0.628s, data_time: 0.278s, total_loss: 3.8, iou_loss: 1.6, l1_loss: 0.5, conf_loss: 1.3, cls_loss: 0.5, lr: 6.250e-05, size: 800, ETA: 0:02:34 2021-08-15 22:00:09.728 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 70/88, mem: 3881Mb, iter_time: 0.451s, data_time: 0.167s, total_loss: 8.5, iou_loss: 3.8, l1_loss: 1.8, conf_loss: 2.0, cls_loss: 0.8, lr: 6.250e-05, size: 544, ETA: 0:02:28 2021-08-15 22:00:14.975 | INFO | yolox.core.trainer:after_iter:242 - epoch: 7/10, iter: 80/88, mem: 3881Mb, iter_time: 0.524s, data_time: 0.225s, total_loss: 3.0, iou_loss: 1.1, l1_loss: 0.3, conf_loss: 1.0, cls_loss: 0.6, lr: 6.250e-05, size: 736, ETA: 0:02:23 2021-08-15 22:00:19.952 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:00:25.995 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 22:00:26.010 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 22:00:26.018 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 22:00:26.018 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 22:00:26.019 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 22:00:26.058 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.28 ms, Average NMS time: 1.08 ms, Average inference time: 7.36 ms Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000 Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000 Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 22:00:26.058 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:00:26.221 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch8 2021-08-15 22:00:26.221 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 22:00:26.222 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 22:00:31.045 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 10/88, mem: 3881Mb, iter_time: 0.482s, data_time: 0.183s, total_loss: 10.0, iou_loss: 3.8, l1_loss: 2.1, conf_loss: 3.2, cls_loss: 0.9, lr: 6.250e-05, size: 512, ETA: 0:02:13 2021-08-15 22:00:35.180 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 20/88, mem: 3881Mb, iter_time: 0.413s, data_time: 0.155s, total_loss: 5.6, iou_loss: 2.6, l1_loss: 1.0, conf_loss: 1.4, cls_loss: 0.7, lr: 6.250e-05, size: 640, ETA: 0:02:08 2021-08-15 22:00:39.925 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 30/88, mem: 3881Mb, iter_time: 0.474s, data_time: 0.191s, total_loss: 3.9, iou_loss: 1.8, l1_loss: 0.5, conf_loss: 1.0, cls_loss: 0.6, lr: 6.250e-05, size: 672, ETA: 0:02:02 2021-08-15 22:00:44.020 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 40/88, mem: 3881Mb, iter_time: 0.409s, data_time: 0.150s, total_loss: 6.4, iou_loss: 3.3, l1_loss: 1.2, conf_loss: 1.1, cls_loss: 0.8, lr: 6.250e-05, size: 512, ETA: 0:01:57 2021-08-15 22:00:48.270 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 50/88, mem: 3881Mb, iter_time: 0.424s, data_time: 0.167s, total_loss: 4.1, iou_loss: 2.0, l1_loss: 0.6, conf_loss: 1.0, cls_loss: 0.6, lr: 6.250e-05, size: 672, ETA: 0:01:51 2021-08-15 22:00:52.262 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 60/88, mem: 3881Mb, iter_time: 0.398s, data_time: 0.140s, total_loss: 8.9, iou_loss: 3.9, l1_loss: 1.8, conf_loss: 2.4, cls_loss: 0.9, lr: 6.250e-05, size: 448, ETA: 0:01:45 2021-08-15 22:00:57.304 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 70/88, mem: 3881Mb, iter_time: 0.504s, data_time: 0.216s, total_loss: 5.0, iou_loss: 2.3, l1_loss: 0.8, conf_loss: 1.3, cls_loss: 0.7, lr: 6.250e-05, size: 768, ETA: 0:01:40 2021-08-15 22:01:03.148 | INFO | yolox.core.trainer:after_iter:242 - epoch: 8/10, iter: 80/88, mem: 3881Mb, iter_time: 0.584s, data_time: 0.259s, total_loss: 2.8, iou_loss: 1.1, l1_loss: 0.3, conf_loss: 0.9, cls_loss: 0.5, lr: 6.250e-05, size: 736, ETA: 0:01:35 2021-08-15 22:01:07.368 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:01:13.748 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 22:01:13.761 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 22:01:13.771 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 22:01:13.771 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 22:01:13.772 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 22:01:13.809 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.27 ms, Average NMS time: 1.13 ms, Average inference time: 7.40 ms
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 22:01:13.810 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:01:13.969 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch9 2021-08-15 22:01:13.969 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 22:01:13.969 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 22:01:19.724 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 10/88, mem: 3881Mb, iter_time: 0.575s, data_time: 0.261s, total_loss: 5.1, iou_loss: 2.2, l1_loss: 0.7, conf_loss: 1.5, cls_loss: 0.6, lr: 6.250e-05, size: 768, ETA: 0:01:26 2021-08-15 22:01:25.460 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 20/88, mem: 3881Mb, iter_time: 0.573s, data_time: 0.249s, total_loss: 5.0, iou_loss: 2.4, l1_loss: 0.8, conf_loss: 1.1, cls_loss: 0.7, lr: 6.250e-05, size: 640, ETA: 0:01:21 2021-08-15 22:01:30.151 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 30/88, mem: 3881Mb, iter_time: 0.469s, data_time: 0.185s, total_loss: 4.9, iou_loss: 2.4, l1_loss: 0.7, conf_loss: 1.1, cls_loss: 0.6, lr: 6.250e-05, size: 544, ETA: 0:01:16 2021-08-15 22:01:33.860 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 40/88, mem: 3881Mb, iter_time: 0.370s, data_time: 0.127s, total_loss: 5.9, iou_loss: 2.6, l1_loss: 0.7, conf_loss: 2.0, cls_loss: 0.6, lr: 6.250e-05, size: 480, ETA: 0:01:10 2021-08-15 22:01:38.131 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 50/88, mem: 3881Mb, iter_time: 0.427s, data_time: 0.167s, total_loss: 8.6, iou_loss: 3.4, l1_loss: 1.6, conf_loss: 2.9, cls_loss: 0.8, lr: 6.250e-05, size: 768, ETA: 0:01:05 2021-08-15 22:01:43.483 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 60/88, mem: 3881Mb, iter_time: 0.534s, data_time: 0.224s, total_loss: 4.4, iou_loss: 2.3, l1_loss: 0.5, conf_loss: 0.9, cls_loss: 0.6, lr: 6.250e-05, size: 576, ETA: 0:01:00 2021-08-15 22:01:48.007 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 70/88, mem: 3881Mb, iter_time: 0.452s, data_time: 0.178s, total_loss: 3.8, iou_loss: 1.8, l1_loss: 0.5, conf_loss: 1.0, cls_loss: 0.6, lr: 6.250e-05, size: 672, ETA: 0:00:54 2021-08-15 22:01:52.385 | INFO | yolox.core.trainer:after_iter:242 - epoch: 9/10, iter: 80/88, mem: 3881Mb, iter_time: 0.437s, data_time: 0.165s, total_loss: 8.9, iou_loss: 3.7, l1_loss: 1.6, conf_loss: 2.7, cls_loss: 1.0, lr: 6.250e-05, size: 448, ETA: 0:00:49 2021-08-15 22:01:55.374 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:02:01.640 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 22:02:01.653 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 22:02:01.660 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 22:02:01.661 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 22:02:01.661 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 22:02:01.701 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.29 ms, Average NMS time: 1.07 ms, Average inference time: 7.36 ms
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 22:02:01.701 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:02:01.865 | INFO | yolox.core.trainer:before_epoch:192 - ---> start train epoch10 2021-08-15 22:02:01.866 | INFO | yolox.core.trainer:before_epoch:195 - --->No mosaic aug now! 2021-08-15 22:02:01.866 | INFO | yolox.core.trainer:before_epoch:197 - --->Add additional L1 loss now! 2021-08-15 22:02:06.901 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 10/88, mem: 3881Mb, iter_time: 0.503s, data_time: 0.206s, total_loss: 3.8, iou_loss: 1.9, l1_loss: 0.5, conf_loss: 0.8, cls_loss: 0.5, lr: 6.250e-05, size: 608, ETA: 0:00:40 2021-08-15 22:02:11.762 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 20/88, mem: 3881Mb, iter_time: 0.486s, data_time: 0.204s, total_loss: 6.7, iou_loss: 3.0, l1_loss: 1.2, conf_loss: 1.7, cls_loss: 0.8, lr: 6.250e-05, size: 768, ETA: 0:00:34 2021-08-15 22:02:17.996 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 30/88, mem: 3881Mb, iter_time: 0.623s, data_time: 0.285s, total_loss: 7.2, iou_loss: 3.4, l1_loss: 1.5, conf_loss: 1.5, cls_loss: 0.8, lr: 6.250e-05, size: 832, ETA: 0:00:29 2021-08-15 22:02:24.433 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 40/88, mem: 3881Mb, iter_time: 0.643s, data_time: 0.276s, total_loss: 6.7, iou_loss: 3.4, l1_loss: 1.4, conf_loss: 1.2, cls_loss: 0.7, lr: 6.250e-05, size: 576, ETA: 0:00:24 2021-08-15 22:02:28.987 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 50/88, mem: 3881Mb, iter_time: 0.455s, data_time: 0.186s, total_loss: 4.9, iou_loss: 2.3, l1_loss: 0.8, conf_loss: 1.1, cls_loss: 0.6, lr: 6.250e-05, size: 832, ETA: 0:00:19 2021-08-15 22:02:35.567 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 60/88, mem: 3881Mb, iter_time: 0.658s, data_time: 0.286s, total_loss: 4.9, iou_loss: 2.4, l1_loss: 0.8, conf_loss: 1.1, cls_loss: 0.6, lr: 6.250e-05, size: 640, ETA: 0:00:14 2021-08-15 22:02:40.259 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 70/88, mem: 3881Mb, iter_time: 0.469s, data_time: 0.189s, total_loss: 3.3, iou_loss: 1.5, l1_loss: 0.4, conf_loss: 0.9, cls_loss: 0.5, lr: 6.250e-05, size: 640, ETA: 0:00:09 2021-08-15 22:02:45.042 | INFO | yolox.core.trainer:after_iter:242 - epoch: 10/10, iter: 80/88, mem: 3881Mb, iter_time: 0.478s, data_time: 0.196s, total_loss: 3.7, iou_loss: 1.9, l1_loss: 0.6, conf_loss: 0.6, cls_loss: 0.5, lr: 6.250e-05, size: 704, ETA: 0:00:04 2021-08-15 22:02:49.305 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606 2021-08-15 22:02:55.436 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:171 - Evaluate in main process... 2021-08-15 22:02:55.449 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - Loading and preparing results... 2021-08-15 22:02:55.457 | INFO | yolox.evaluators.coco_evaluator:evaluate_prediction:204 - DONE (t=0.01s) 2021-08-15 22:02:55.457 | INFO | pycocotools.coco:loadRes:365 - creating index... 2021-08-15 22:02:55.458 | INFO | pycocotools.coco:loadRes:365 - index created! 
2021-08-15 22:02:55.498 | INFO | yolox.core.trainer:evaluate_and_save_model:315 - Average forward time: 6.26 ms, Average NMS time: 1.10 ms, Average inference time: 7.36 ms
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

2021-08-15 22:02:55.499 | INFO | yolox.core.trainer:save_ckpt:324 - Save weights to ./YOLOX_outputs\606
2021-08-15 22:02:55.667 | INFO | yolox.core.trainer:after_train:187 - Training of experiment is done and the best AP is 0.00

As you can see, I only trained for 10 epochs just for testing, but the mAP is 0.00, and I get the same result even when I train for 300 epochs. This is my own Exp file:

```python
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.
import os

from yolox.exp import Exp as MyExp


class Exp(MyExp):
    def __init__(self):
        super(Exp, self).__init__()
        self.depth = 0.33
        self.width = 0.50
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]

        # Define yourself dataset path
        self.data_dir = "datasets/coco128"
        self.train_ann = "instances_train2017.json"
        self.val_ann = "instances_val2017.json"

        self.num_classes = 8

        self.max_epoch = 10
        self.data_num_workers = 0
        self.eval_interval = 1
```

and this is coco_classes.py:

```python
#!/usr/bin/env python3
# -*- coding:utf-8 -*-
# Copyright (c) Megvii, Inc. and its affiliates.

COCO_CLASSES = (
    "blue1",
    "red1",
    "blue2",
    "red2",
    "blue3",
    "red3",
    "blue4",
    "red4",
)
```

So what's wrong with my setup? I have been struggling with this for a whole day. Honestly, I'm about to go crazy!!!
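For reference, a quick sanity check on the converted annotations could look like the sketch below; the path and the class names are simply the ones from the Exp file and coco_classes.py above:

```python
import json
from pathlib import Path

# Only a sketch: path and class names are taken from the Exp file / coco_classes.py above.
ann_file = Path("datasets/coco128/annotations/instances_train2017.json")
expected = {"blue1", "red1", "blue2", "red2", "blue3", "red3", "blue4", "red4"}

coco = json.loads(ann_file.read_text(encoding="utf-8"))

print("images:", len(coco["images"]))
print("annotations:", len(coco["annotations"]))
print("categories:", [(c["id"], c["name"]) for c in coco["categories"]])

# The category names in the JSON should match COCO_CLASSES / num_classes.
assert {c["name"] for c in coco["categories"]} == expected, "class list mismatch"

# Every annotation should point at an existing image and have a non-empty box.
image_ids = {img["id"] for img in coco["images"]}
for ann in coco["annotations"]:
    assert ann["image_id"] in image_ids, f"annotation {ann['id']} references a missing image"
    assert ann["bbox"][2] > 0 and ann["bbox"][3] > 0, f"annotation {ann['id']} has an empty bbox"

print("basic checks passed")
```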

Bachelorwangwei commented 3 years ago

I ran into this too. The loss decreases normally, but the AP is just 0.

Hezhexi2002 commented 3 years ago

So did you solve it? Are you also training on a COCO-format dataset?

EdisionWew commented 3 years ago

I ran into this as well. The evaluation during training is always 0; my data is my own dataset converted into COCO format.

Hezhexi2002 commented 3 years ago

I'm now planning to try the mini-coco128 dataset from train_custom_data.md, but it's on Google Drive and I can't download it :-(. I did check, though, and the directory structure of mini-coco128 is the same as my converted COCO-format dataset, so I really don't know what's going on. Also, yesterday I tried video inference with the weights from that training run: about 0.3 s per frame with yolox-s, while yolov5s only takes about 7 ms per frame on average, which also surprised me.

Hezhexi2002 commented 3 years ago

I have met the same problem, but my datasets are in VOC format.

2021-08-16 19:13:27 | INFO     | yolox.evaluators.voc_evaluator:160 - Evaluate in main process...
Writing stain VOC results file
Writing flaw VOC results file
Writing burn VOC results file
Eval IoU : 0.50
AP for stain = 0.0000
AP for flaw = 0.0000
AP for burn = 0.0000
Mean AP = 0.0000
~~~~~~~~
Results:
0.000
0.000
0.000
0.000
~~~~~~~~

2021-08-16 19:13:27 | INFO     | yolox.core.trainer:321 - 
Average forward time: 0.00 ms, Average NMS time: 0.00 ms, Average inference time: 0.00 ms

By the way, have you solved this problem? My data trains fine with YOLOv5.

Not yet :-(. But there is already a solution for the VOC-format mAP = 0 problem; you can find it in the issues.

EdisionWew commented 3 years ago

I have tried that mini-coco128 dataset as well, and the evaluation is still 0.

Hezhexi2002 commented 3 years ago

Oh no...

Robert-Hopkins commented 3 years ago

Has this been solved yet?

MangoloD commented 3 years ago

I have tried a COCO dataset and it works normally, so you can check whether your format is really correct. I have also written an article about it, like the one for VOC.

songtf525 commented 3 years ago

After how many epochs should the AP have a value? I trained for 10 epochs and AP = 0. My own dataset was converted with that YOLO2COCO tool, and when I checked it myself it looked fine. Where is the link to the article?

MangoloD commented 3 years ago

Here it is: YOLOX训练COCO数据集 (training YOLOX on a COCO dataset). But I think the problem most likely still lies in the conversion.

MangoloD commented 3 years ago

Please see 解决YOLOX训练时AP为0 (fixing AP = 0 when training YOLOX).

Hezhexi2002 commented 3 years ago

So do I need to convert the XML to JSON? Does that mean I first have to do yolo2voc and then voc2coco?

MangoloD commented 3 years ago

No need. You just have to make sure the COCO-format data produced by the conversion is correct.

MangoloD commented 3 years ago

Also, I have already tested training from the YOLO data format on my side, and that runs through normally as well.


Hezhexi2002 commented 3 years ago

So does that mean converting directly with yolo2coco works now?

songtf525 commented 3 years ago

With the same data, after converting yolo2voc the AP is no longer 0, but with the YOLO2COCO tool linked by the author it still does not work. So is there a problem with the JSON that YOLO2COCO produces? I compared it with the official COCO JSON and the keys look fine. Has anyone tested it and gotten a different result?

Hezhexi2002 commented 3 years ago

I also used the yolo2coco the author linked, and my AP has stayed at 0.

Hezhexi2002 commented 3 years ago

I also used the author's yolo2coco and the AP stays at 0. I saw someone on Bilibili say this is a problem in their code: when the COCO-format dataset is read, things are matched up by id rather than by the image path.

Robert-Hopkins commented 3 years ago

There is a bug in the YOLO2COCO code that causes AP = 0: the ids inside the annotations are wrong and need to be fixed.
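A quick way to check whether that is the case in a generated file is to count the annotation ids. This is only a sketch; the path is an assumption and should point at the instances_*.json written by the converter:

```python
import json
from collections import Counter

# Only a sketch: point the path at the instances_*.json produced by YOLO2COCO.
with open("datasets/coco128/annotations/instances_train2017.json", encoding="utf-8") as f:
    coco = json.load(f)

ids = [ann["id"] for ann in coco["annotations"]]
dup = [i for i, n in Counter(ids).items() if n > 1]
print(f"{len(ids)} annotations, {len(set(ids))} unique ids")
print("duplicated ids (first 10):", dup[:10])
```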

Hezhexi2002 commented 3 years ago


```python
# !/usr/bin/env python
# -*- encoding: utf-8 -*-
# @File: yolov5_2_coco.py
# @Author: SWHL
# @Contact: liekkaskono@163.com
import argparse
from pathlib import Path
import json
import shutil

import cv2 as cv


def read_txt(txt_path):
    with open(str(txt_path), 'r', encoding='utf-8') as f:
        data = f.readlines()
    data = list(map(lambda x: x.rstrip('\n'), data))
    return data


def mkdir(dir_path):
    Path(dir_path).mkdir(parents=True, exist_ok=True)


class YOLOV5ToCOCO(object):
    def __init__(self, dir_path):
        self.src_data = Path(dir_path)
        self.src = self.src_data.parent
        self.train_txt_path = self.src_data / 'train.txt'
        self.val_txt_path = self.src_data / 'val.txt'

        # Build the COCO-format directory layout
        self.dst = Path(self.src) / f"{Path(self.src_data).name}_COCO_format"
        self.coco_train = "train2017"
        self.coco_val = "val2017"
        self.coco_annotation = "annotations"
        self.coco_train_json = self.dst / self.coco_annotation \
            / f'instances_{self.coco_train}.json'
        self.coco_val_json = self.dst / self.coco_annotation \
            / f'instances_{self.coco_val}.json'

        mkdir(self.dst)
        mkdir(self.dst / self.coco_train)
        mkdir(self.dst / self.coco_val)
        mkdir(self.dst / self.coco_annotation)

        # Skeleton of the json content
        self.type = 'instances'
        self.categories = []

        # Read the class list
        self._get_category()

        self.info = {
            'year': 2021,
            'version': '1.0',
            'description': 'For object detection',
            'date_created': '2021',
        }

        self.licenses = [{
            'id': 1,
            'name': 'GNU General Public License v3.0',
            'url': 'https://github.com/zhiqwang/yolov5-rt-stack/blob/master/LICENSE',
        }]

    def _get_category(self):
        class_list = read_txt(self.src_data / 'classes.txt')
        for i, category in enumerate(class_list, 1):
            self.categories.append({
                'id': i,
                'name': category,
                'supercategory': category,
            })

    def generate(self):
        self.train_files = read_txt(self.train_txt_path)
        if Path(self.val_txt_path).exists():
            self.valid_files = read_txt(self.val_txt_path)

        train_dest_dir = Path(self.dst) / self.coco_train
        self.gen_dataset(self.train_files, train_dest_dir,
                         self.coco_train_json)

        val_dest_dir = Path(self.dst) / self.coco_val
        if Path(self.val_txt_path).exists():
            self.gen_dataset(self.valid_files, val_dest_dir,
                             self.coco_val_json)

        print(f"The output directory is: {str(self.dst)}")

    def gen_dataset(self, img_paths, target_img_path, target_json):
        """
        https://cocodataset.org/#format-data
        """
        images = []
        annotations = []
        annotation_id = 1
        for img_id, img_path in enumerate(img_paths, 1):
            img_path = Path(img_path)

            if not img_path.exists():
                continue

            label_path = str(img_path.parent.parent
                             / 'labels' / f'{img_path.stem}.txt')

            imgsrc = cv.imread(str(img_path))
            height, width = imgsrc.shape[:2]

            dest_file_name = f'{img_id:012d}.jpg'
            save_img_path = target_img_path / dest_file_name

            if img_path.suffix.lower() == ".jpg":
                shutil.copyfile(img_path, save_img_path)
            else:
                cv.imwrite(str(save_img_path), imgsrc)

            images.append({
                'date_captured': '2021',
                'file_name': dest_file_name,
                'id': img_id,
                'height': height,
                'width': width,
            })

            if Path(label_path).exists():
                new_anno = self.read_annotation(label_path, img_id,
                                                height, width,
                                                annotation_id)
                if len(new_anno) > 0:
                    annotations.extend(new_anno)
                else:
                    raise ValueError(f'{label_path} is empty')
            else:
                raise FileExistsError(f'{label_path} not exists')

        json_data = {
            'info': self.info,
            'images': images,
            'licenses': self.licenses,
            'type': self.type,
            'annotations': annotations,
            'categories': self.categories,
        }
        with open(target_json, 'w', encoding='utf-8') as f:
            json.dump(json_data, f, ensure_ascii=False)

    def read_annotation(self, txtfile, img_id,
                        height, width, annotation_id):
        annotation = []
        allinfo = read_txt(txtfile)
        for label_info in allinfo:
            label_info = label_info.split(" ")
            if len(label_info) < 5:
                continue

            category_id, vertex_info = label_info[0], label_info[1:]
            segmentation, bbox, area = self._get_annotation(vertex_info,
                                                            height, width)
            annotation.append({
                'segmentation': segmentation,
                'area': area,
                'iscrowd': 0,
                'image_id': img_id,
                'bbox': bbox,
                'category_id': int(category_id) + 1,
                'id': annotation_id,
            })
            annotation_id += 1
        return annotation

    @staticmethod
    def _get_annotation(vertex_info, height, width):
        cx, cy, w, h = [float(i) for i in vertex_info]

        cx = cx * width
        cy = cy * height
        box_w = w * width
        box_h = h * height

        # left top
        x0 = max(cx - box_w / 2, 0)
        y0 = max(cy - box_h / 2, 0)

        # right bottom
        x1 = min(x0 + box_w, width)
        y1 = min(y0 + box_h, height)

        segmentation = [[x0, y0, x1, y0, x1, y1, x0, y1]]
        bbox = [x0, y0, box_w, box_h]
        area = box_w * box_h
        return segmentation, bbox, area


if __name__ == "__main__":
    parser = argparse.ArgumentParser('Datasets converter from YOLOV5 to COCO')
    parser.add_argument('--dir_path', type=str,
                        default='datasets/tmp/YOLOV5',
                        help='Dataset root path')
    args = parser.parse_args()

    converter = YOLOV5ToCOCO(args.dir_path)
    converter.generate()
```

Where exactly is the problem?

Is it that the ids should start from 0?

Robert-Hopkins commented 3 years ago

There is a bug in the YOLO2COCO code that causes AP = 0: the annotation ids it writes are wrong and need to be fixed.


Robert-Hopkins commented 3 years ago


No, that's not it. The ids are not unique: the annotation id restarts from 1 for every image, so the same ids appear again and again.
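A minimal, self-contained illustration of that pattern (file handling stripped out; the structure mirrors gen_dataset and read_annotation in the script above):

```python
# Toy reproduction of the id collision: same structure as the converter above.
def read_annotation(labels, img_id, annotation_id):
    out = []
    for _ in labels:
        out.append({"image_id": img_id, "id": annotation_id})
        annotation_id += 1          # only this local copy advances
    return out

annotations = []
annotation_id = 1                   # initialised once, but never updated below
for img_id, labels in enumerate([["a", "b"], ["c"], ["d", "e"]], 1):
    annotations.extend(read_annotation(labels, img_id, annotation_id))

print([a["id"] for a in annotations])   # [1, 2, 1, 1, 2]: ids collide across images
```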

Hezhexi2002 commented 3 years ago


So how exactly did you modify it?

songtf525 commented 3 years ago

For YOLO2COCO: to make the ids unique, promote annotation_id = 1 to a global variable, change the function to def read_annotation(self, txtfile, img_id, height, width) (i.e. drop the annotation_id parameter), and drop it at the call site as well. I made this change and tried it on a dataset, and it works, very good, thank you!
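A minimal sketch of that change, on the same toy structure as the illustration above (an instance-level counter rather than a literal global variable, but the effect is the same):

```python
# Sketch of the fix described above: keep one id counter for the whole dataset.
class Converter:
    def __init__(self):
        self.annotation_id = 1                   # counter now outlives a single image

    def read_annotation(self, labels, img_id):   # annotation_id parameter removed
        out = []
        for _ in labels:
            out.append({"image_id": img_id, "id": self.annotation_id})
            self.annotation_id += 1              # advances across images, so ids stay unique
        return out

    def gen_dataset(self, img_lists):
        annotations = []
        for img_id, labels in enumerate(img_lists, 1):
            annotations.extend(self.read_annotation(labels, img_id))  # no id passed in
        return annotations


conv = Converter()
print([a["id"] for a in conv.gen_dataset([["a", "b"], ["c"], ["d", "e"]])])
# [1, 2, 3, 4, 5]: every annotation id is unique, so the COCO evaluation no longer breaks
```

Applied to yolov5_2_coco.py, that means initialising the counter once, removing the annotation_id argument from read_annotation and from its call in gen_dataset, and incrementing the shared counter for every annotation that gets written.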

nTnZone commented 3 years ago


I solved it; it turned out my paths were not set correctly. Compared with yolov5, this needs configuration in quite a few places, especially changing the custom dataset paths and deleting redundant fields such as self.year.


Hezhexi2002 commented 3 years ago


(screenshot: mmexport1629283400046.png) I tried it too and it really works. Thank you so much!!! 😭