Open ponjoru opened 7 months ago
The training configuration for DEYO on CrowdHuman is kept consistent with the one used on COCO. Since RT-DETR lacks a pre-trained model on CrowdHuman, we did not include it in the comparison. Because the hyperparameters have not been tuned, the resulting models may not be optimal in either the first or the second stage of DEYO training. We therefore have no plans to share the pre-trained weights at this time. However, you can easily reproduce these results by simply adjusting the number of classes (nc=1).
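For anyone unsure where the nc=1 adjustment goes: it lives in the dataset YAML passed to `train()`. Below is a minimal sketch following the Ultralytics dataset-YAML convention; the directory layout and paths are assumptions, so point them at your own CrowdHuman conversion:

```python
# Sketch of a single-class CrowdHuman dataset config in the
# Ultralytics dataset-YAML style. Paths are assumed, not canonical.
from pathlib import Path

yaml_text = """\
path: datasets/crowdhuman  # dataset root (assumed layout)
train: images/train
val: images/val
nc: 1                      # single class, as noted above
names:
  0: person
"""

Path("crowdhuman.yaml").write_text(yaml_text)
```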
Hello, may I ask whether the following result can be reproduced on a single GPU with the settings below?
| Model | Strategy | Epochs | AP50 |
| --- | --- | --- | --- |
| DEYO-N | Step-by-step | 72 | 83.0 |
```python
from ultralytics import RTDETR

model = RTDETR("yolov8-rtdetr.yaml")
model.load("yolov8n.pt")
model.train(
    data="crowdhuman.yaml",
    epochs=72,
    lr0=0.0001,
    lrf=0.0001,
    weight_decay=0.0001,
    optimizer="AdamW",
    warmup_epochs=0,
    mosaic=1.0,
    close_mosaic=24,
)
```
The experiments were conducted on a single GPU. Note that the evaluation metrics for CrowdHuman differ from those for COCO; we used the evaluation code provided by Iter-Deformable-DETR for our experiments.
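As a rough illustration of the IoU ≥ 0.5 match criterion behind the AP50 number above, here is a minimal sketch. This is not the Iter-Deformable-DETR evaluation code itself, which additionally handles ignore regions, greedy matching, and score-ranked precision/recall:

```python
# Minimal IoU computation for axis-aligned boxes (x1, y1, x2, y2).
# Only illustrates the IoU >= 0.5 criterion that AP50 is built on.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def matches_at_50(pred_box, gt_box):
    # A prediction can count as a true positive at AP50 only if it
    # overlaps a ground-truth box with IoU of at least 0.5.
    return iou(pred_box, gt_box) >= 0.5
```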
Thank you. I have managed to reproduce the 0.83 result when the score threshold is 0.3.
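For reference, the threshold above is just a confidence cut applied to the detections before evaluation. A minimal sketch of that filtering step (in Ultralytics this is normally driven by the `conf` argument of `predict()`/`val()`; the tuple layout below is an assumption for illustration):

```python
# Hedged sketch: drop detections below a confidence threshold.
# Each detection is an assumed (x1, y1, x2, y2, score) tuple;
# 0.3 mirrors the threshold mentioned above.
def filter_by_score(dets, thresh=0.3):
    return [d for d in dets if d[4] >= thresh]

dets = [(0, 0, 10, 10, 0.9), (5, 5, 20, 20, 0.25), (2, 2, 8, 8, 0.31)]
kept = filter_by_score(dets)  # keeps the 0.9 and 0.31 detections
```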
Thank you for the great work!