iKrishneel / efficient_net_v2

PyTorch implementation of EfficientNetV2 backbone with detectron2 for object detection (just for fun)
https://arxiv.org/pdf/2104.00298.pdf

Results are not good after training EfficientNetV2 as backbone #3

Closed akashAD98 closed 3 years ago

akashAD98 commented 3 years ago

I trained this model on the COCO 2017 train data. It ran for 200k steps, and the loss stayed in the range of 0.7 to 1 for the whole 200k steps; the results on validation data are very poor.

I have attached all the files; please have a look and let me know what is wrong with this model, or what mistake I made while training it.

  1. The command used to train the model:

I used the faster_rcnn_R50_FPN weights from detectron2 as the initial weights.

python build.py --config-file D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\config\effnet_coco.yaml --num_gpus 1 --weights D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\weights\faster_rcnn_R50_FPN.pkl --output_dir 'D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\output_DIR_base2'

I'm using an AWS machine with a Tesla v4 GPU for training.

Config file: config.yaml.txt

Log file: logfile.txt

mAP on COCO 2017 validation data:

stances_results.json
[06/26 09:25:29 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.58s)
creating index...
index created!
[06/26 09:25:29 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[06/26 09:25:39 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 9.36 seconds.
[06/26 09:25:39 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[06/26 09:25:40 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 1.39 seconds.
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.042
 Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.086
 Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.037
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.010
 Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.040
 Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.068
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.089
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.133
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.135
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.022
 Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.121
 Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.219
[06/26 09:25:40 d2.evaluation.coco_evaluation]: Evaluation results for bbox:

AP AP50 AP75 APs APm APl
4.233 8.554 3.689 1.000 4.044 6.817
[06/26 09:25:40 d2.evaluation.coco_evaluation]: Per-category bbox AP:
category AP category AP category AP
person 15.760 bicycle 2.278 car 5.338
motorcycle 5.625 airplane 12.439 bus 12.790
train 10.054 truck 4.306 boat 1.222
traffic light 1.645 fire hydrant 11.190 stop sign 30.312
parking meter 1.661 bench 1.817 bird 0.485
cat 5.196 dog 2.964 horse 6.189
sheep 4.369 cow 7.304 elephant 9.945
bear 13.253 zebra 16.255 giraffe 12.170
backpack 0.097 umbrella 3.546 handbag 0.051
tie 2.221 suitcase 0.408 frisbee 1.692
skis 0.809 snowboard 0.235 sports ball 7.328
kite 3.586 baseball bat 0.000 baseball glove 0.480
skateboard 1.210 surfboard 0.653 tennis racket 1.349
bottle 0.927 wine glass 0.074 cup 1.602
fork 0.000 knife 0.031 spoon 0.000
bowl 3.918 banana 0.074 apple 0.512
sandwich 4.099 orange 4.264 broccoli 0.921
carrot 0.441 hot dog 1.173 pizza 13.876
donut 1.113 cake 1.586 chair 0.631
couch 3.786 potted plant 0.075 bed 6.309
dining table 8.186 toilet 13.220 tv 14.340
laptop 6.258 mouse 0.568 remote 0.000
keyboard 0.885 cell phone 1.573 microwave 4.888
oven 3.542 toaster 0.000 sink 4.386
refrigerator 5.160 book 0.333 clock 8.559
vase 0.852 scissors 0.000 teddy bear 2.242
hair drier 0.000 toothbrush 0.000

[06/26 09:25:41 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[06/26 09:25:41 d2.evaluation.testing]: copypaste: Task: bbox
[06/26 09:25:41 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/26 09:25:41 d2.evaluation.testing]: copypaste: 4.2330,8.5540,3.6886,1.0001,4.0443,6.8172

These are a few of the results: output_of_000000003501, output_of_000000000632, output_of_000000001000, output_of_000000001761, output_of_000000002587, output_of_000000002592

Thank you so much, I'm looking forward to your response.

iKrishneel commented 3 years ago

The faster_rcnn_R50_FPN weights are for a ResNet50 backbone, whereas this repo uses efficient_net_v2 as the backbone, so they will not initialize it. You can try with this weight instead.
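To make the mismatch concrete, here is a minimal sketch (my own, not from this repo) that builds the model from the config and then loads the R50 weight file. Detectron2's checkpointer logs which keys do not match, and with an EfficientNetV2 backbone essentially all backbone tensors from faster_rcnn_R50_FPN.pkl get skipped, leaving the backbone randomly initialized. The paths are assumed from the command above, and the repo's backbone must already be registered (its module imported) for build_model to work.

```python
# Minimal sketch (assumption, not part of this repo): inspect how much of a
# weight file actually matches the model. The repo's EfficientNetV2 backbone
# must be registered with detectron2 before this runs, and the config is
# assumed to load with the stock get_cfg().
from detectron2.config import get_cfg
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.modeling import build_model

cfg = get_cfg()
cfg.merge_from_file("config/effnet_coco.yaml")   # path taken from the command above
model = build_model(cfg)

# load() logs incompatible / unmatched keys; any tensor that has no counterpart
# in the EfficientNetV2 backbone is simply skipped, so those layers stay at
# their random initialization when starting from faster_rcnn_R50_FPN.pkl.
DetectionCheckpointer(model).load("weights/faster_rcnn_R50_FPN.pkl")
```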

akashAD98 commented 3 years ago

Thanks. By the way, how did you get the efficient_net_v2 weights? In the official repo they haven't published the weights. Is there any way to generate these weights from a ckpt?

iKrishneel commented 3 years ago

I trained it from scratch for a few epochs.

akashAD98 commented 3 years ago

1. Can you share which weights you used for training it from scratch?

2. I trained on top of your weights (40k steps). Your model gets AP50: 47%, but when I trained my model I got AP50: 41% mAP. My batch size is 4, and I even set a very low learning rate. What is the reason behind this?

3. Is it possible to get the best weights? Is there any script for that?

iKrishneel commented 3 years ago

The weights I shared previously should be a good initialization for fine-tuning on downstream tasks. You will have to play around with the hyperparameters. If you are using a custom dataset with fewer samples, it may be better to freeze some of the backbone layers; see the sketch below.
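As an illustration only, here is one way such freezing could be done. freeze_bottom_up is a hypothetical helper, not part of this repo, and it assumes the model follows detectron2's standard FPN layout where the EfficientNetV2 stages live under model.backbone.bottom_up.

```python
# Hypothetical helper (not part of this repo): freeze the bottom-up
# EfficientNetV2 stages of a detectron2 FPN model while keeping the FPN
# layers and the detection heads trainable.
import torch

def freeze_bottom_up(model: torch.nn.Module) -> None:
    """Disable gradients for the bottom-up part of an FPN backbone."""
    for param in model.backbone.bottom_up.parameters():
        param.requires_grad_(False)

# Usage, e.g. right after building the trainer:
#   trainer = DefaultTrainer(cfg)
#   freeze_bottom_up(trainer.model)
#   trainer.resume_or_load(resume=False)
#   trainer.train()
```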

ammarmuflih commented 1 year ago

@iKrishneel do you have Mask R-CNN weights?