Closed: akashAD98 closed this issue 3 years ago
For faster_rcnn_R50_FPN weights, you will need a ResNet50 backbone. This repo uses efficient_net_v2 as the backbone.
You can try with this weight.
Thanks. By the way, how did you get the efficient_net_v2 weights? They haven't published the weights in the official repo. Is there any way to generate these weights from a ckpt?
I trained it from scratch for a few epochs.
1. Can you share which weights you used when training it from scratch?
2. I trained on top of your weights (the 40k-step checkpoint). Your model reaches AP50 of 47%, but when I trained my model I only got AP50 of 41%, with a batch size of 4, even after setting a very low learning rate. What is the reason behind this?
3. Is it possible to get the best weights? Is there any script for that?
The weights I shared previously should be a good initialization for fine-tuning on downstream tasks. You will have to play around with the hyperparameters. If using a custom dataset with fewer samples, it may be better to freeze some backbone layers.
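As a rough sketch of those suggestions, in a detectron2-style YAML config (like the effnet_coco.yaml used in this thread) freezing early backbone stages and lowering the learning rate might look like the fragment below. The specific values are illustrative assumptions, not this repo's defaults:

```yaml
MODEL:
  BACKBONE:
    # Freeze the stem and first stage; larger values freeze more of the backbone.
    FREEZE_AT: 2
SOLVER:
  # Reduced base LR for a small batch size; tune per dataset (illustrative value).
  BASE_LR: 0.0025
  IMS_PER_BATCH: 4
```

With fewer training samples, freezing early layers keeps the pretrained low-level features intact while the detection head adapts to the new data.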
@iKrishneel do you have Mask R-CNN weights?
I tried training this model on the coco2017 train data. It was trained for 200k steps, and the loss stayed in the range of 0.7 to 1 for all 200k steps. The results on validation data are very poor.
I have attached all the files; please have a look and let me know what is wrong with this model, or what mistake I made during training.
I tried the faster_rcnn_R50_FPN weights from detectron2:

python build.py --config-file D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\config\effnet_coco.yaml --num_gpus 1 --weights D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\weights\faster_rcnn_R50_FPN.pkl --output_dir D:\TFOD_efficientdet\efficient_net_v2\efficient_net_v2\output_DIR_base2
I'm using an AWS machine (Tesla v4) for training.
Config file: config.yaml.txt
Log file: logfile.txt
mAP on coco2017 validation data:
stances_results.json
[06/26 09:25:29 d2.evaluation.coco_evaluation]: Evaluating predictions with unofficial COCO API...
Loading and preparing results...
DONE (t=0.58s)
creating index...
index created!
[06/26 09:25:29 d2.evaluation.fast_eval_api]: Evaluate annotation type bbox
[06/26 09:25:39 d2.evaluation.fast_eval_api]: COCOeval_opt.evaluate() finished in 9.36 seconds.
[06/26 09:25:39 d2.evaluation.fast_eval_api]: Accumulating evaluation results...
[06/26 09:25:40 d2.evaluation.fast_eval_api]: COCOeval_opt.accumulate() finished in 1.39 seconds.
Average Precision  (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.042
Average Precision  (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = 0.086
Average Precision  (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = 0.037
Average Precision  (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.010
Average Precision  (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.040
Average Precision  (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.068
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = 0.089
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = 0.133
Average Recall     (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = 0.135
Average Recall     (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.022
Average Recall     (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.121
Average Recall     (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.219
[06/26 09:25:40 d2.evaluation.coco_evaluation]: Evaluation results for bbox:
[06/26 09:25:41 d2.engine.defaults]: Evaluation results for coco_2017_val in csv format:
[06/26 09:25:41 d2.evaluation.testing]: copypaste: Task: bbox
[06/26 09:25:41 d2.evaluation.testing]: copypaste: AP,AP50,AP75,APs,APm,APl
[06/26 09:25:41 d2.evaluation.testing]: copypaste: 4.2330,8.5540,3.6886,1.0001,4.0443,6.8172
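For reference, those copypaste lines are meant to be machine-readable. A small sketch (using the header and value strings from the log above) shows how to turn them into a metrics dict for comparing runs:

```python
# Parse detectron2's "copypaste" CSV lines into a metrics dict.
# The header and value strings below are copied from the log in this thread.
header = "AP,AP50,AP75,APs,APm,APl"
values = "4.2330,8.5540,3.6886,1.0001,4.0443,6.8172"

metrics = dict(zip(header.split(","), (float(v) for v in values.split(","))))
print(metrics["AP50"])  # 8.554
```

An AP50 around 8.6 on COCO val (versus ~47 reported earlier in the thread) confirms the run has barely learned, which points to an initialization or configuration problem rather than just under-training.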
These are a few of the results:
Thank you so much, I'm looking forward to your response.