Cuogeihong / CEASC

The official implementation of CEASC
Apache License 2.0

GFL v1 reimplementation performance question. #2

Closed johnran103 closed 1 year ago

johnran103 commented 1 year ago

I used your config baseline_gfl_res18_visdrone.py without change. The result is below (tested with MATLAB 2021a): [screenshot of my evaluation results]

This is far below the paper's reported AP 28.4 and AP50 50.0: [screenshot of the paper's results]

What could be the reason? Could you share your checkpoint or a possible explanation, please?

Cuogeihong commented 1 year ago

Hello, please provide me with the mmdetection training log. By the way, the checkpoints have been updated.

johnran103 commented 1 year ago

Hello, this is my training log: 20230329_125746.log. Thank you!

johnran103 commented 1 year ago

By the way, these two results come from your checkpoint. First, tested with the default COCO scripts: [screenshot]. Second, tested with ignored regions: [screenshot].

Cuogeihong commented 1 year ago

> Hello, this is my training log: 20230329_125746.log. Thank you!

It seems the training phase works fine, and the post-processing also works on my computer:

[screenshot of post-processing results]

So you could check that the dataset format follows UFPMP-Det, check the official val script, or check the path in https://github.com/Cuogeihong/CEASC/blob/6546f7bb945b6e4579b5b54574b2fb4f417be2e5/tools/json_to_txt.py#L14.
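As a quick sanity check of the converted annotation file, something along these lines can be run first (the filename matches the one produced by the pipeline quoted later in this thread; expecting 10 categories follows from the per-class results below, and everything else is an assumption rather than part of the repo's tooling):

```bash
# Hypothetical sanity check of the COCO-style json produced by VisDrone2COCO.py
# (adjust the path to whatever annotation file you actually generated).
python - <<'EOF'
import json
ann = json.load(open('instances_UAVval_v1.json'))
print('images:', len(ann['images']))
print('annotations:', len(ann['annotations']))
# VisDrone-DET has 10 object categories (class1..class10 in the results below)
print('categories:', [c['name'] for c in ann['categories']])
EOF
```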

By the way, it seems that all of your per-class APs are lower than mine (using my checkpoint):

class1: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 25.02%.
class2: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 15.54%.
class3: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 11.45%.
class4: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 59.02%.
class5: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 37.53%.
class6: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 28.76%.
class7: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 19.67%.
class8: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 11.65%.
class9: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 45.94%.
class10: Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 25.61%.
Evaluation Completed. The performance of the detector is presented as follows.
Average Precision  (AP) @[ IoU=0.50:0.95 | maxDets=500 ] = 28.37%.

Cuogeihong commented 1 year ago

> By the way, these two results come from your checkpoint. First, tested with the default COCO scripts: [screenshot]. Second, tested with ignored regions: [screenshot].

Also, here is my test with the default COCO scripts: [screenshot]. I think the discrepancy is probably caused by a wrong dataset format.

johnran103 commented 1 year ago

Here is my pipeline to prepare and test the VisDrone dataset (the commands are written out as a script after the list):

  1. Download the VisDrone dataset from its official website.
  2. Convert it to COCO format using the script from UFPMP-Det: python UFPMP-Det-Tools/build_dataset/VisDrone2COCO.py xxx xxx xxx/instances_UAVval_v1.json
  3. Test: run CUDA_VISIBLE_DEVICES=1 python ./tools/test.py ./configs/UAV/baseline_gfl_res18_visdrone.py ./epoch_15.pth --eval bbox --out ./result.pkl, then python tools/vis_pkl.py --pkl_pathname ./result.pkl --json_pathname ./result.json, then python tools/json_to_txt.py --json_pathname ./result.json, and finally matlab -r evalDET. The default COCO results are as follows, but the MATLAB test results are unchanged. [screenshot]
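For readability, here are the same commands written out step by step (the xxx arguments are the dataset paths from step 2 and are left as placeholders; this only restates the pipeline above, nothing is added):

```bash
# 2. Convert the VisDrone annotations to COCO format with the UFPMP-Det script
#    (xxx xxx xxx are the input/output paths from the original command).
python UFPMP-Det-Tools/build_dataset/VisDrone2COCO.py xxx xxx xxx/instances_UAVval_v1.json

# 3. Run the mmdetection test and dump raw detections.
CUDA_VISIBLE_DEVICES=1 python ./tools/test.py \
    ./configs/UAV/baseline_gfl_res18_visdrone.py ./epoch_15.pth \
    --eval bbox --out ./result.pkl

#    Convert the pickle to json, then json to VisDrone-style txt files.
python tools/vis_pkl.py --pkl_pathname ./result.pkl --json_pathname ./result.json
python tools/json_to_txt.py --json_pathname ./result.json

#    Run the official VisDrone MATLAB evaluation.
matlab -r evalDET
```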
johnran103 commented 1 year ago

I tested your CEASC model, which works fine: [screenshot]

Cuogeihong commented 1 year ago

I have fixed a bug in the GFL baseline; you can test it again. By the way, it seems strange that the MATLAB results stay the same while the COCO results change. Maybe you used the wrong pred_txt/ folder for the test?
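One quick way to rule that out is to confirm that the txt files were actually regenerated just before the MATLAB run (the pred_txt/ folder name is taken from the comment above; adjust it if json_to_txt.py writes somewhere else):

```bash
# List the most recently modified prediction files; their timestamps should be
# newer than the last run of tools/json_to_txt.py.
ls -lt pred_txt/ | head
```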

johnran103 commented 1 year ago

After using your fixed code and checkpoint, I can reproduce your reported performance now.

Greatxcw commented 1 year ago

> By the way, these two results come from your checkpoint. First, tested with the default COCO scripts: [screenshot]. Second, tested with ignored regions: [screenshot].

Hello, I am also reproducing the code. Could you tell me how to test with ignored regions, please?

johnran103 commented 1 year ago

Use the official MATLAB code provided by the VisDrone team.
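A minimal non-interactive invocation could look like the following, assuming the VisDrone-DET toolkit's evalDET.m is on the MATLAB path and the detection txt files are in the folder it expects (the script name comes from the pipeline above; the rest is an assumption about your local setup):

```bash
# Run the VisDrone evaluation script without opening the MATLAB GUI.
matlab -nodisplay -nosplash -r "evalDET; exit"
```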


Greatxcw commented 1 year ago

> Use the official MATLAB code provided by the VisDrone team.

Got it, thank you very much!

QingfanHou commented 1 year ago

> Here is my pipeline to prepare and test the VisDrone dataset: [...]

I encountered the same problem as you. Were you able to achieve results similar to the author's in your subsequent training? After processing the data as you described above, the model I trained myself reached an mAP of around 0.25 when validated with mmdet.