Closed: augmentedstartups closed this issue 2 years ago
For YOLOR, I couldn't find a clear counterpart for a fair comparison. Maybe the following models are the ones to compare with YOLOX-L (50.0% AP)?
Model | Test Size | APval | AP50val | AP75val | APSval | APMval | APLval | batch1 throughput |
---|---|---|---|---|---|---|---|---|
YOLOv4-CSP | 640 | 49.1% | 67.7% | 53.8% | 32.1% | 54.4% | 63.2% | 76 fps |
YOLOR-CSP | 640 | 49.2% | 67.6% | 53.7% | 32.9% | 54.4% | 63.0% | - |
But I also notice that its training scripts seem to train for 1200 epochs in total (300 + 450 + 450):
```shell
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights '' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6 --hyp hyp.scratch.1280.yaml --epochs 300
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 tune.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights 'runs/train/yolor_p6/weights/last_298.pt' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6-tune --hyp hyp.finetune.1280.yaml --epochs 450
python -m torch.distributed.launch --nproc_per_node 8 --master_port 9527 train.py --batch-size 64 --img 1280 1280 --data data/coco.yaml --cfg cfg/yolor_p6.cfg --weights 'runs/train/yolor_p6-tune/weights/epoch_424.pt' --device 0,1,2,3,4,5,6,7 --sync-bn --name yolor_p6-fine --hyp hyp.finetune.1280.yaml --epochs 450
```
while YOLOX models are trained for 300 epochs.
We haven't covered larger-scale models (1280x1280 input) yet, and YOLOR did a good job there.
Hello,
Model | Test Size | APval | AP50val | AP75val | APSval | APMval | APLval | batch1 throughput |
---|---|---|---|---|---|---|---|---|
**YOLOR-CSP*** | 640 | 50.0% | 68.7% | 54.3% | 34.2% | 55.1% | 64.3% | 76 fps |
**YOLOR-CSP-X*** | 640 | 51.5% | 69.9% | 56.1% | 35.8% | 56.8% | 66.1% | 53 fps |
We converted these two models to Darknet weights, and the batch1 throughput includes pre/post-processing. You can find them at https://github.com/AlexeyAB/darknet#pre-trained-models as yolov4-csp-swish and yolov4-csp-x-swish.
By the way, we follow the training schedule of Scaled-YOLOv4, so YOLOR-P5 models are trained for 300 epochs. The training scripts of the YOLOR-P6 models use resume training, so they are trained for 450 epochs in total. We will release a new training script for the YOLOR-P6 models that gets about 0.6% better AP with 300 epochs of training.
@WongKinYiu Great! And we hope some techniques in our YOLOX may be helpful for YOLOR too!
Yes, I have been following your OTA work for a long time. These days I have finished implementing the decoupled head, anchor-free design, and multi positives in our PyTorch version, and I keep trying hard to implement dynamic top-k assignment.
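For readers unfamiliar with the dynamic top-k assignment mentioned above: the SimOTA idea described in the YOLOX paper estimates a per-ground-truth k from the summed IoUs of its top candidate predictions, then assigns the k lowest-cost predictions to that ground truth. Below is a minimal, hedged NumPy sketch of that scheme; the function name, candidate count, and tie-breaking details are illustrative assumptions, not the exact YOLOX implementation.

```python
import numpy as np

def dynamic_k_assign(cost, ious, max_candidates=10):
    """Sketch of SimOTA-style dynamic top-k label assignment.

    cost : (num_gt, num_anchors) matching cost, lower is better
    ious : (num_gt, num_anchors) IoU between each gt box and each prediction
    Returns a boolean (num_gt, num_anchors) assignment matrix.
    """
    num_gt, num_anchors = cost.shape
    matching = np.zeros_like(cost, dtype=bool)

    # Estimate k per ground truth: sum of its top candidate IoUs, at least 1.
    n_cand = min(max_candidates, num_anchors)
    topk_ious = -np.sort(-ious, axis=1)[:, :n_cand]
    dynamic_ks = np.clip(topk_ious.sum(axis=1).astype(int), 1, None)

    # Assign the k lowest-cost predictions to each ground truth.
    for g in range(num_gt):
        idx = np.argsort(cost[g])[: dynamic_ks[g]]
        matching[g, idx] = True

    # Resolve predictions claimed by multiple gts: keep the lowest-cost gt.
    multi = matching.sum(axis=0) > 1
    if multi.any():
        best_gt = cost[:, multi].argmin(axis=0)
        matching[:, multi] = False
        matching[best_gt, np.where(multi)[0]] = True
    return matching
```

With two ground truths and three candidate predictions, each gt simply claims its cheapest prediction when the summed IoUs round down to k = 1; larger IoU sums let a gt claim several positives, which is the "multi positives" effect discussed above.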
Can you explain the advantages and disadvantages of YOLOX and YOLOR when comparing?
Maybe combine the 2 to get YOLO XR or YOLO RX 🤣🤣
From Papers with Code it seems that YOLOR is better in mAP - https://paperswithcode.com/sota/object-detection-on-coco?tag_filter=15
+ Transformer = RTX
🤣🤣
This may be a bit unrelated, but I've posted a demo of YOLOR + DeepSORT tracking: https://youtu.be/keXpp8FhORI
Currently our YOLOR supports classification/detection/segmentation/embedding/tracking/reconstruction/landmark_detection in one unified model.
I see the example to run detection. How do I run it for segmentation, classification, etc?
@ruinmessi Hello, after applying SimOTA to YOLOR-CSP, it finally increases AP by about 1% on the MS COCO object detection task. Thanks for your great work.
Train from scratch for 300 epochs:

Model | SimOTA | Test Size | APval
---|---|---|---
YOLOR-CSP | no | 640 | 50.0%
YOLOR-CSP | yes | 640 | 51.0%
How does YOLOX compare to YOLOR by Wong Kin Yiu in terms of speed and accuracy?