open-mmlab / mmrotate

OpenMMLab Rotated Object Detection Toolbox and Benchmark
https://mmrotate.readthedocs.io/en/latest/
Apache License 2.0

[Bug] DOTA Evaluation Results Feedback of Task1 is obviously lower than the official result #1013

Open jimuIee opened 4 months ago

jimuIee commented 4 months ago

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

master branch https://github.com/open-mmlab/mmrotate

Environment

sys.platform: linux
Python: 3.7.0 (default, Oct 9 2018, 10:31:47) [GCC 7.3.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda-11.3
NVCC: Cuda compilation tools, release 11.3, V11.3.109
GCC: gcc (Ubuntu 5.5.0-12ubuntu1) 5.5.0 20171010
PyTorch: 1.11.0+cu113
TorchVision: 0.12.0+cu113
OpenCV: 4.9.0
MMCV: 1.5.3
MMCV Compiler: GCC 7.3
MMCV CUDA Compiler: 11.3
MMRotate: 0.3.4+9ea1aee

Reproduces the problem - code sample

python tools/train.py configs/rotated_fcos/rotated_fcos_r50_fpn_1x_dota_le90.py

Reproduces the problem - command or script

python tools/test.py configs/rotated_fcos/rotated_fcos_r50_fpn_1x_dota_le90.py work_dirs/rotated_fcos_r50_fpn_1x_dota_le90/latest.pth --format-only --eval-options submission_dir=task1
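For reference, the `--format-only` run with `submission_dir=task1` should leave per-class `Task1_<class>.txt` files in `task1/`, and the DOTA evaluation server expects them zipped into a single archive. A minimal packaging sketch (the `Task1_<class>.txt` names are assumed from mmrotate's default DOTA formatter, which on recent versions may already write the zip for you):

```shell
# Assumes the formatter wrote Task1_<class>.txt files into task1/ (see command above).
cd task1
zip ../task1.zip Task1_*.txt   # upload task1.zip to the DOTA evaluation server
```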

Reproduces the problem - error message

The detection accuracy of FCOS on the DOTA dataset is much lower than that of the model provided in the model zoo.

Additional information

This is your evaluation result for task 1 (VOC metrics):

mAP: 0.42395437677254383

AP of each class:
plane: 0.5423819423982649
baseball-diamond: 0.5285280870317047
bridge: 0.20667867900155412
ground-track-field: 0.4569022199523975
small-vehicle: 0.22461535065045377
large-vehicle: 0.08442215012610903
ship: 0.17547907678219063
tennis-court: 0.6117272119996067
basketball-court: 0.6934436730389884
storage-tank: 0.8345323594333807
soccer-ball-field: 0.36092004275259465
roundabout: 0.6534027091402538
harbor: 0.247273973001902
swimming-pool: 0.49025477053589234
helicopter: 0.24875340574286495

I have checked my config and confirmed that it is the same as the official config, and the data-splitting method is also the same. I have attached my log file: 20240321_214044.log

Aocide119 commented 4 months ago

Maybe you didn't use the augmented data. By the way, we encountered an issue with the DOTA V1.0 evaluation server; we can't connect to the links below (screenshot attached).

jimuIee commented 4 months ago

> Maybe you didn't use the augmented data. By the way, we encountered an issue with the DOTA V1.0 evaluation server; we can't connect to the links below (screenshot attached).

I have not met this issue. And how do I use the augmented data?

Aocide119 commented 4 months ago

Like multi-scale training and some test-time tricks (see the sketch below). Or you can paste your config here for more details.
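For context, in mmrotate the multi-scale setting is usually applied when splitting the DOTA images rather than inside the training config. A hedged sketch, assuming the split tool and the `ms_*` config names shipped under `tools/data/dota/split/` (check your checkout for the exact paths):

```shell
# ss_* split configs are single-scale; ms_* configs crop the images at
# several rates (e.g. 0.5/1.0/1.5), which is what "multi-scale" refers to here.
python tools/data/dota/split/img_split.py \
    --base-json tools/data/dota/split/split_configs/ms_trainval.json
python tools/data/dota/split/img_split.py \
    --base-json tools/data/dota/split/split_configs/ms_test.json
```

After splitting, point `data_root` in the config at the `ms_*` output directory before training and testing.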

Our team has been having trouble connecting to the evaluation server, as I mentioned above. Could you send me the evaluation URL or give me some help with connecting?


xxxyyynnn commented 4 months ago

@jimuIee Have you solved this problem? I have the same problem as you; could you please help me?