Hello,
I've followed the modifications you made to the ultralytics repo, adding RESCBAM to nn/task.py, nn/modules/conv.py, the init.py, and the yaml file in the cfg, to modify my instance segmentation model. But the compute cost this adds to the model is higher than what you reported in your paper. In particular: 322.3 GFLOPs for the m model, 140.3 GFLOPs for the s model, and 42.9 GFLOPs for the n model.
Do you have any idea why the GFLOP count is so high, and how I can decrease it to be able to train faster?
Thank you in advance.
Here is my git repo where I made these changes. I also added a P2 layer, but even without it I get high GFLOP values, and I am sure it is the RESCBAM implementation that causes this: https://github.com/haitamrarhai/ultralytics_2.0/tree/haitamrarhai-patch-1/.
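For what it's worth, here is a rough sanity check I used to convince myself that modules added at high-resolution feature maps dominate the total cost. It is only a sketch with hypothetical shapes (the channel counts and input size are made up for illustration, not taken from RESCBAM or my yaml), but it shows how the same conv costs ~16x more at a P2 (stride 4) level than at P4 (stride 16):

```python
# Rough FLOP estimate for a KxK convolution, to see why extra modules placed
# at high-resolution pyramid levels (e.g. a P2 layer, stride 4) blow up GFLOPs.
# All shapes below are hypothetical, for illustration only.

def conv_flops(c_in, c_out, k, h, w):
    """~2 * MACs for a standard KxK conv producing an h x w output feature map."""
    return 2 * c_in * c_out * k * k * h * w

# Same 3x3 conv (64 -> 64 channels) at two pyramid levels of a 640x640 input:
p2 = conv_flops(64, 64, 3, 160, 160)  # stride 4  -> 160x160 feature map
p4 = conv_flops(64, 64, 3, 40, 40)    # stride 16 -> 40x40 feature map

print(f"P2: {p2 / 1e9:.2f} GFLOPs, P4: {p4 / 1e9:.2f} GFLOPs, ratio: {p2 // p4}x")
```

So if the attention blocks sit on the early, high-resolution layers, their GFLOP contribution can be much larger than the parameter count alone suggests.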