facebookresearch / unbiased-teacher

PyTorch code for ICLR 2021 paper Unbiased Teacher for Semi-Supervised Object Detection
https://arxiv.org/abs/2102.09480
MIT License

About verification, the accuracy is 0 #78

mary-0830 opened this issue 1 year ago (status: Open)

mary-0830 commented 1 year ago

Hello, when I run the evaluation command, the accuracy is 0. But during training, the evaluation does report nonzero scores. What could be the matter?

command: python train_net.py --eval-only --num-gpus 8 --config configs/Faster-RCNN/coco-standard/faster_rcnn_R_50_FPN_ut2_sup10_run0.yaml --dist-url tcp://127.0.0.1:50158 MODEL.WEIGHTS output/model_0164999.pth SOLVER.IMG_PER_BATCH_LABEL 8 SOLVER.IMG_PER_BATCH_UNLABEL 8

outputs:

[07/25 10:04:27 d2.evaluation.evaluator]: Inference done 27/625. Dataloading: 0.0290 s/iter. Inference: 0.1645 s/iter. Eval: 0.0001 s/iter. Total: 0.1937 s/iter. ETA=0:01:55
[07/25 10:04:32 d2.evaluation.evaluator]: Inference done 50/625. Dataloading: 0.0222 s/iter. Inference: 0.1849 s/iter. Eval: 0.0001 s/iter. Total: 0.2073 s/iter. ETA=0:01:59
[07/25 10:04:37 d2.evaluation.evaluator]: Inference done 75/625. Dataloading: 0.0198 s/iter. Inference: 0.1888 s/iter. Eval: 0.0001 s/iter. Total: 0.2094 s/iter. ETA=0:01:55
[07/25 10:04:42 d2.evaluation.evaluator]: Inference done 97/625. Dataloading: 0.0200 s/iter. Inference: 0.1936 s/iter. Eval: 0.0001 s/iter. Total: 0.2142 s/iter. ETA=0:01:53
[07/25 10:04:47 d2.evaluation.evaluator]: Inference done 123/625. Dataloading: 0.0206 s/iter. Inference: 0.1893 s/iter. Eval: 0.0001 s/iter. Total: 0.2105 s/iter. ETA=0:01:45
[07/25 10:04:52 d2.evaluation.evaluator]: Inference done 150/625. Dataloading: 0.0207 s/iter. Inference: 0.1850 s/iter. Eval: 0.0001 s/iter. Total: 0.2063 s/iter. ETA=0:01:37
[07/25 10:04:57 d2.evaluation.evaluator]: Inference done 178/625. Dataloading: 0.0200 s/iter. Inference: 0.1815 s/iter. Eval: 0.0002 s/iter. Total: 0.2026 s/iter. ETA=0:01:30
[07/25 10:05:02 d2.evaluation.evaluator]: Inference done 202/625. Dataloading: 0.0203 s/iter. Inference: 0.1817 s/iter. Eval: 0.0004 s/iter. Total: 0.2033 s/iter. ETA=0:01:25
[07/25 10:05:08 d2.evaluation.evaluator]: Inference done 229/625. Dataloading: 0.0209 s/iter. Inference: 0.1794 s/iter. Eval: 0.0005 s/iter. Total: 0.2016 s/iter. ETA=0:01:19
[07/25 10:05:13 d2.evaluation.evaluator]: Inference done 251/625. Dataloading: 0.0208 s/iter. Inference: 0.1830 s/iter. Eval: 0.0004 s/iter. Total: 0.2050 s/iter. ETA=0:01:16
[07/25 10:05:18 d2.evaluation.evaluator]: Inference done 276/625. Dataloading: 0.0204 s/iter. Inference: 0.1834 s/iter. Eval: 0.0005 s/iter. Total: 0.2051 s/iter. ETA=0:01:11
[07/25 10:05:23 d2.evaluation.evaluator]: Inference done 299/625. Dataloading: 0.0207 s/iter. Inference: 0.1843 s/iter. Eval: 0.0005 s/iter. Total: 0.2063 s/iter. ETA=0:01:07
[07/25 10:05:28 d2.evaluation.evaluator]: Inference done 326/625. Dataloading: 0.0211 s/iter. Inference: 0.1827 s/iter. Eval: 0.0005 s/iter. Total: 0.2050 s/iter. ETA=0:01:01
[07/25 10:05:33 d2.evaluation.evaluator]: Inference done 350/625. Dataloading: 0.0213 s/iter. Inference: 0.1836 s/iter. Eval: 0.0005 s/iter. Total: 0.2060 s/iter. ETA=0:00:56
[07/25 10:05:39 d2.evaluation.evaluator]: Inference done 374/625. Dataloading: 0.0212 s/iter. Inference: 0.1843 s/iter. Eval: 0.0005 s/iter. Total: 0.2066 s/iter. ETA=0:00:51
[07/25 10:05:44 d2.evaluation.evaluator]: Inference done 398/625. Dataloading: 0.0212 s/iter. Inference: 0.1847 s/iter. Eval: 0.0004 s/iter. Total: 0.2070 s/iter. ETA=0:00:46
[07/25 10:05:49 d2.evaluation.evaluator]: Inference done 425/625. Dataloading: 0.0214 s/iter. Inference: 0.1835 s/iter. Eval: 0.0004 s/iter. Total: 0.2060 s/iter. ETA=0:00:41
[07/25 10:05:54 d2.evaluation.evaluator]: Inference done 455/625. Dataloading: 0.0213 s/iter. Inference: 0.1811 s/iter. Eval: 0.0004 s/iter. Total: 0.2035 s/iter. ETA=0:00:34
[07/25 10:05:59 d2.evaluation.evaluator]: Inference done 485/625. Dataloading: 0.0216 s/iter. Inference: 0.1789 s/iter. Eval: 0.0004 s/iter. Total: 0.2015 s/iter. ETA=0:00:28
[07/25 10:06:04 d2.evaluation.evaluator]: Inference done 509/625. Dataloading: 0.0216 s/iter. Inference: 0.1793 s/iter. Eval: 0.0004 s/iter. Total: 0.2018 s/iter. ETA=0:00:23
[07/25 10:06:09 d2.evaluation.evaluator]: Inference done 529/625. Dataloading: 0.0219 s/iter. Inference: 0.1809 s/iter. Eval: 0.0004 s/iter. Total: 0.2038 s/iter. ETA=0:00:19
[07/25 10:06:14 d2.evaluation.evaluator]: Inference done 554/625. Dataloading: 0.0220 s/iter. Inference: 0.1810 s/iter. Eval: 0.0004 s/iter. Total: 0.2040 s/iter. ETA=0:00:14
[07/25 10:06:20 d2.evaluation.evaluator]: Inference done 577/625. Dataloading: 0.0222 s/iter. Inference: 0.1816 s/iter. Eval: 0.0004 s/iter. Total: 0.2047 s/iter. ETA=0:00:09
[07/25 10:06:25 d2.evaluation.evaluator]: Inference done 606/625. Dataloading: 0.0223 s/iter. Inference: 0.1802 s/iter. Eval: 0.0004 s/iter. Total: 0.2034 s/iter. ETA=0:00:03
[07/25 10:06:27 d2.evaluation.evaluator]: Total inference time: 0:02:05.097037 (0.201769 s / iter per device, on 8 devices)
[07/25 10:06:27 d2.evaluation.evaluator]: Total inference pure compute time: 0:01:49 (0.177313 s / iter per device, on 8 devices)
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Preparing results for COCO format ...
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Saving results to ./output/inference/coco_instances_results.json
[07/25 10:06:33 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.05s)
creating index...
index created!
[07/25 10:06:34 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[07/25 10:06:49 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 15.41 seconds.
[07/25 10:06:49 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[07/25 10:06:51 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 1.58 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.000
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.000
[07/25 10:06:51 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
  AP      AP50    AP75    APs     APm     APl
 0.000   0.000   0.000   0.000   0.000   0.000
[07/25 10:06:51 d2.evaluation.coco_evaluation]: Per-category bbox AP:
category          AP       category          AP       category          AP
person 0.000 bicycle 0.000 car 0.000
motorcycle 0.000 airplane 0.000 bus 0.000
train 0.000 truck 0.000 boat 0.000
traffic light 0.000 fire hydrant 0.000 stop sign 0.000
parking meter 0.000 bench 0.000 bird 0.000
cat 0.000 dog 0.000 horse 0.000
sheep 0.000 cow 0.000 elephant 0.000
bear 0.000 zebra 0.000 giraffe 0.000
backpack 0.000 umbrella 0.000 handbag 0.000
tie 0.000 suitcase 0.000 frisbee 0.000
skis 0.000 snowboard 0.000 sports ball 0.000
kite 0.000 baseball bat 0.000 baseball glove 0.000
skateboard 0.000 surfboard 0.000 tennis racket 0.000
bottle 0.000 wine glass 0.000 cup 0.000
fork 0.000 knife 0.000 spoon 0.000
bowl 0.000 banana 0.000 apple 0.000
sandwich 0.000 orange 0.000 broccoli 0.000
carrot 0.000 hot dog 0.000 pizza 0.016
donut 0.000 cake 0.000 chair 0.000
couch 0.000 potted plant 0.000 bed 0.000
dining table 0.000 toilet 0.000 tv 0.000
laptop 0.000 mouse 0.000 remote 0.000
keyboard 0.000 cell phone 0.000 microwave 0.000
oven 0.000 toaster 0.000 sink 0.000
refrigerator 0.000 book 0.000 clock 0.000
vase 0.000 scissors 0.000 teddy bear 0.000
hair drier 0.000 toothbrush 0.000

[07/25 10:06:52 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[07/25 10:06:52 d2.evaluation.testing]: copypaste: Task: bbox
[07/25 10:06:52 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[07/25 10:06:52 d2.evaluation.testing]: copypaste: 0.0002,0.0003,0.0003,0.0000,0.0000,0.0002
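
One quick sanity check for a result like this is whether the checkpoint being loaded actually contains trained teacher weights, since --eval-only evaluates the teacher branch. Below is a minimal inspection sketch; the modelTeacher./modelStudent. key prefixes assume the EnsembleTSModel naming used in this repo, and the checkpoint path is the one from the command above:

```python
# Minimal checkpoint-inspection sketch (assumes the EnsembleTSModel key
# prefixes "modelTeacher." / "modelStudent."; adjust if your checkpoint differs).
import torch

ckpt = torch.load("output/model_0164999.pth", map_location="cpu")
state = ckpt.get("model", ckpt)  # detectron2 checkpoints store weights under "model"

teacher_keys = [k for k in state if k.startswith("modelTeacher.")]
student_keys = [k for k in state if k.startswith("modelStudent.")]
print(f"teacher tensors: {len(teacher_keys)}, student tensors: {len(student_keys)}")
print(f"iteration: {ckpt.get('iteration', 'n/a')}")

# If teacher_keys is empty, the checkpoint holds a plain (student-only) model;
# loading it into the teacher-student ensemble leaves the teacher randomly
# initialized, which would explain AP = 0 under --eval-only.
```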

wmjlincy commented 1 year ago

I'm having the same problem now. Did you manage to fix it?

Tajamul21 commented 10 months ago

I am also having the same problem.

[08/25 12:39:36] d2.engine.defaults INFO: Evaluation results for Clipart1k_test in csv format:
[08/25 12:39:36] d2.evaluation.testing INFO: copypaste: Task: bbox
[08/25 12:39:36] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75
[08/25 12:39:36] d2.evaluation.testing INFO: copypaste: 10.3707,21.6129,8.3406
[08/25 12:39:36] d2.data.common INFO: Serializing 500 elements to byte tensors and concatenating them all ...
[08/25 12:39:36] d2.data.common INFO: Serialized dataset takes 0.23 MiB
[08/25 12:39:36] d2.data.dataset_mapper INFO: [DatasetMapper] Augmentations used in inference: [ResizeShortestEdge(short_edge_length=(800, 800), max_size=1333, sample_style='choice')]
[08/25 12:39:36] d2.evaluation.evaluator INFO: Start inference on 250 images
[08/25 12:39:41] d2.evaluation.evaluator INFO: Inference done 11/250. 0.0293 s / img. ETA=0:00:07
[08/25 12:39:46] d2.evaluation.evaluator INFO: Inference done 156/250. 0.0313 s / img. ETA=0:00:03
[08/25 12:39:50] d2.evaluation.evaluator INFO: Total inference time: 0:00:08.974351 (0.036630 s / img per device, on 2 devices)
[08/25 12:39:50] d2.evaluation.evaluator INFO: Total inference pure compute time: 0:00:07 (0.031380 s / img per device, on 2 devices)
[08/25 12:39:51] d2.engine.defaults INFO: Evaluation results for Clipart1k_test in csv format:
[08/25 12:39:51] d2.evaluation.testing INFO: copypaste: Task: bbox
[08/25 12:39:51] d2.evaluation.testing INFO: copypaste: AP,AP50,AP75
[08/25 12:39:51] d2.evaluation.testing INFO: copypaste: 0.0000,0.0000,0.0000

When I run eval-only, I get 0 AP; the teacher's AP is zero. Any fixes?

CoderZhangYx commented 9 months ago

> I am also having the same problem. When I run eval-only, I get 0 AP; the teacher's AP is zero. Any fixes?

It is the burn-in stage: during burn-in only the student model is trained with labeled data, so the teacher weights have not been updated yet and evaluating the teacher gives 0 AP. You can find the setting in the trainer's `__init__` function.
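
For context, here is a paraphrased sketch of the eval-only path in train_net.py. Class and config names such as EnsembleTSModel, UBTeacherTrainer, and SEMISUPNET.BURN_UP_STEP follow this repo, but treat the snippet as an approximation of the real code, not a verbatim copy:

```python
# Import paths as in this repo's train_net.py (verify against your checkout).
from detectron2.checkpoint import DetectionCheckpointer
from ubteacher.engine.trainer import UBTeacherTrainer as Trainer
from ubteacher.modeling.meta_arch.ts_ensemble import EnsembleTSModel

# Inside main(args), after cfg = setup(args), the eval-only branch roughly does:
model = Trainer.build_model(cfg)          # student
model_teacher = Trainer.build_model(cfg)  # teacher (EMA copy of the student)
ensem_ts_model = EnsembleTSModel(model_teacher, model)

DetectionCheckpointer(
    ensem_ts_model, save_dir=cfg.OUTPUT_DIR
).resume_or_load(cfg.MODEL.WEIGHTS, resume=args.resume)

# The *teacher* branch is what gets tested. During burn-in
# (iteration < cfg.SEMISUPNET.BURN_UP_STEP) only the student is trained;
# the teacher is first initialized from the student when burn-in ends.
# A burn-in checkpoint therefore holds an untrained teacher, and the
# evaluation below reports AP = 0.
res = Trainer.test(cfg, ensem_ts_model.modelTeacher)
```

If your checkpoint is from burn-in, evaluating ensem_ts_model.modelStudent instead, or training past BURN_UP_STEP so the teacher gets initialized and EMA-updated, should give nonzero AP.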