YuHengsss / YOLOV

This repo is a PyTorch implementation of the YOLOV series.
Apache License 2.0

Pre-training with DET and VID dataset #13

Closed mukundkhanna123 closed 1 year ago

mukundkhanna123 commented 1 year ago

Hey, I had a question about the training methodology used to pre-train the YOLOX baseline models. I added the images from the DET dataset with the same classes as the VID dataset and trained a YOLOX-S model, but I was not able to replicate your results. Could you elaborate on how you pre-trained the YOLOX model?

YuHengsss commented 1 year ago

> Hey, I had a question about the training methodology used to pre-train the YOLOX baseline models. I added the images from the DET dataset with the same classes as the VID dataset and trained a YOLOX-S model, but I was not able to replicate your results. Could you elaborate on how you pre-trained the YOLOX model?

Thanks for your attention. May I ask how many images you sampled from the VID dataset, and could you share the training log for debugging? We released the training annotations for the baseline model (linked in the README.md); they contain 1/10 of the VID images plus all DET images with the same classes. Please try that file, and we look forward to your response.
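For reference, building such a mixed annotation file could look like the sketch below. The input file names, the every-10th-image sampling rule, and a shared category-id scheme are all assumptions here, not the repo's released script:

```python
# Minimal sketch: merge 1/10 of VID images with all DET images whose
# boxes use the shared VID classes, in COCO annotation format.
# File names are assumptions; the released vid_det_train.json may have
# been built differently.
import json

with open("vid_train.json") as f:   # assumed name for full VID annotations
    vid = json.load(f)
with open("det_train.json") as f:   # assumed name for full DET annotations
    det = json.load(f)

# Keep roughly 1/10 of the VID frames (every 10th image).
kept = {img["id"] for i, img in enumerate(vid["images"]) if i % 10 == 0}
vid_images = [im for im in vid["images"] if im["id"] in kept]
vid_anns = [a for a in vid["annotations"] if a["image_id"] in kept]

# Keep DET images containing at least one box from the 30 VID classes
# (assumes both files share one category-id scheme).
vid_cats = {c["id"] for c in vid["categories"]}
det_anns = [a for a in det["annotations"] if a["category_id"] in vid_cats]
det_ids = {a["image_id"] for a in det_anns}
det_images = [im for im in det["images"] if im["id"] in det_ids]

# Offset DET ids so they cannot collide with VID ids after merging.
img_off = max(im["id"] for im in vid["images"]) + 1
ann_off = max(a["id"] for a in vid["annotations"]) + 1
for im in det_images:
    im["id"] += img_off
for a in det_anns:
    a["image_id"] += img_off
    a["id"] += ann_off

merged = {
    "images": vid_images + det_images,
    "annotations": vid_anns + det_anns,
    "categories": vid["categories"],
}
with open("vid_det_train.json", "w") as f:
    json.dump(merged, f)
```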

YuHengsss commented 1 year ago

Besides, did you use the COCO pre-trained model for fine-tuning?

mukundkhanna123 commented 1 year ago

I sampled all the images from the VID dataset and all the images from the DET dataset that have the same classes. Yes, I used the COCO pre-trained model for fine-tuning. I am getting an mAP of around 0.56 when validating on the entire VID dataset. These are the settings from the training log:

| keys | values |
| ---------------- | ----------------------------- |
| seed | None |
| output_dir | './imagenet_det_vid_baseline' |
| print_interval | 20 |
| eval_interval | 1 |
| num_classes | 30 |
| depth | 0.33 |
| width | 0.5 |
| data_num_workers | 4 |
| input_size | (576, 576) |
| random_size | (18, 32) |
| train_ann | 'vid_det_train.json' |
| val_ann | 'vid_det_val.json' |
| degrees | 10.0 |
| translate | 0.1 |
| scale | (0.1, 2) |
| mscale | (0.8, 1.6) |
| shear | 2.0 |
| perspective | 0.0 |
| enable_mixup | True |
| warmup_epochs | 1 |
| max_epoch | 30 |
| warmup_lr | 0 |
| basic_lr_per_img | 1.5625e-05 |
| scheduler | 'yoloxwarmcos' |
| no_aug_epochs | 2 |
| min_lr_ratio | 0.05 |
| ema | True |
| weight_decay | 0.0005 |
| momentum | 0.9 |
| exp_name | 'yolox_s_mix_det' |
| test_size | (576, 576) |
| test_conf | 0.001 |
| nmsthre | 0.6 |
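For reference, these logged values map directly onto attributes of a YOLOX `Exp` file. Below is a partial sketch reconstructed from the log above, in the old-style YOLOX layout this repo inherits; it is an illustration, not the exact exp file used for the run:

```python
# Illustrative Exp reconstructed from the logged settings; fields not
# shown in the log are left at their base-class defaults.
import os
from yolox.exp import Exp as BaseExp

class Exp(BaseExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 30                 # VID classes
        self.depth, self.width = 0.33, 0.5    # YOLOX-S
        self.input_size = (576, 576)
        self.test_size = (576, 576)
        self.random_size = (18, 32)           # multi-scale range, in 32-px steps
        self.train_ann = "vid_det_train.json"
        self.val_ann = "vid_det_val.json"
        self.warmup_epochs = 1
        self.max_epoch = 30
        self.no_aug_epochs = 2
        self.basic_lr_per_img = 0.01 / 640.0  # == 1.5625e-05, as logged
        self.scheduler = "yoloxwarmcos"
        self.min_lr_ratio = 0.05
        self.weight_decay = 5e-4
        self.momentum = 0.9
        self.test_conf = 0.001
        self.nmsthre = 0.6
        self.exp_name = os.path.split(os.path.realpath(__file__))[1].split(".")[0]
```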

YuHengsss commented 1 year ago

> I sampled all the images from the VID dataset

Please follow our instructions and try it again.

mukundkhanna123 commented 1 year ago

My bad, I used 1/10 of the VID dataset and all images from the DET dataset. Sorry, that was a typo.

YuHengsss commented 1 year ago

> My bad, I used 1/10 of the VID dataset and all images from the DET dataset. Sorry, that was a typo.

Could you please use our annotations, try again, and keep all other settings the same as in this repo? We noticed that the augmentation settings in this log have been changed (e.g., 30 epochs is too many for the small model; we use 7 by default). Hopefully we can then find out the reason.
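A quick way to verify that everything else really matches is to diff the two `Exp` configs field by field. A minimal sketch, assuming the fork keeps YOLOX's `get_exp` helper; both exp-file paths below are placeholders:

```python
# Sketch: print every attribute that differs between your Exp and the
# repo baseline, to catch unintended config changes.
from yolox.exp import get_exp

def exp_diff(mine, baseline):
    """Compare two Exp instances attribute by attribute."""
    for k in sorted(set(vars(mine)) | set(vars(baseline))):
        a, b = getattr(mine, k, None), getattr(baseline, k, None)
        if a != b:
            print(f"{k}: yours={a!r}  baseline={b!r}")

if __name__ == "__main__":
    exp_diff(
        get_exp("exps/yolox_s_mix_det.py", None),      # placeholder path
        get_exp("exps/yoloxs_vid_baseline.py", None),  # placeholder path
    )
```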

Yipzcc commented 1 year ago

@mukundkhanna123 Could you share your sampled subset of the VID dataset? The full VID dataset is hard for us to download. Thank you!