YOLOonMe / EMA-attention-module

Implementation code for the ICASSP 2023 paper "Efficient Multi-Scale Attention Module with Cross-Spatial Learning", available at: https://arxiv.org/abs/2305.13563v2

add attention model #2

Open ChenJian7578 opened 1 year ago

ChenJian7578 commented 1 year ago

At what position in the network did you add the attention module when you conducted the experiments?

YOLOonMe commented 1 year ago

@ChenJian7578 The insertion position is the same as in all the comparison schemes, such as NAM and CoordAttention, and in YOLOv5x.

ChenJian7578 commented 1 year ago

Hmm... Using YOLOv5 as an example, can you explain in detail where you add the attention module? This could serve as a reference for my current work on improving the network's performance on my own dataset by adding attention modules.

YOLOonMe commented 1 year ago

Attention modules are usually placed in the deeper network layers. In our paper, all attention modules are inserted after (behind) the last two C3 layers of the yolov5s backbone (I think the feature representation of the shallow layers is already strong enough, so they can do without an attention mechanism). For more information, you can refer to [yolov5_research](https://github.com/positive666/yolov5_research).
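For readers wondering how such an insertion might look in code: below is a minimal placement sketch, not the authors' released integration (that lives in the linked yolov5_research repo). It assumes the stock YOLOv5 v6.0 backbone layout, where the last two C3 stages sit at indices 6 and 8 of the backbone, and an `EMA` attention block like the one sketched further down this thread; the helper name `C3WithAttention` is made up for illustration.

```python
# A hypothetical wrapper that runs an attention block (e.g. EMA) on the
# output of an existing C3 stage, i.e. "behind" the C3 layer.
from torch import nn


class C3WithAttention(nn.Module):
    def __init__(self, c3: nn.Module, attention: nn.Module):
        super().__init__()
        self.c3 = c3
        self.attention = attention

    def forward(self, x):
        return self.attention(self.c3(x))


# Hypothetical usage with a yolov5s backbone (indices follow the stock v6.0
# yaml, where the P4 and P5 C3 stages are entries 6 and 8; `c_p4` / `c_p5`
# stand for those stages' output channel counts after width scaling):
#   backbone[6] = C3WithAttention(backbone[6], EMA(c_p4))
#   backbone[8] = C3WithAttention(backbone[8], EMA(c_p5))
#
# If you edit the model yaml instead, remember that inserting new layers
# shifts the indices that the head's Concat layers refer to.
```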

ransf-forever commented 1 year ago

May I ask if the source code for the EMA attention module is publicly available?

YOLOonMe commented 1 year ago

> May I ask if the source code for the EMA attention module is publicly available?

@ranty23 Yes, you can find it here: gradcam, yolov5s, yolov5x.

ransf-forever commented 1 year ago

@YOLOonMe Hi, thank you very much for your reply, but I could not find a standalone EMA module there. Do you currently have a plug-and-play module?

GH-Fonic commented 1 year ago

@YOLOonMe Hello, I also need the EMA module code to add to my network. Could you add a file containing only the EMA module? Thank you.
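Since several people are asking for a standalone file: the sketch below is a minimal, self-contained EMA block written from the paper's description (channel grouping controlled by `factor`, 1D horizontal/vertical pooling feeding a 1x1 branch, a parallel 3x3 branch, and cross-spatial aggregation of the two). It is not copied from the authors' release, so compare it against the gradcam/yolov5s/yolov5x code linked above before relying on the exact layer choices.

```python
import torch
from torch import nn


class EMA(nn.Module):
    """Sketch of an Efficient Multi-scale Attention block (plug-and-play:
    output shape equals input shape). `factor` is the number of channel
    groups, so it must divide `channels` evenly."""

    def __init__(self, channels: int, factor: int = 32):
        super().__init__()
        assert channels % factor == 0, "factor must divide the channel count"
        self.groups = factor
        c = channels // factor
        self.softmax = nn.Softmax(dim=-1)
        self.agp = nn.AdaptiveAvgPool2d((1, 1))        # global pooling for cross-spatial weights
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # 1D pooling along width  -> (h, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # 1D pooling along height -> (1, w)
        self.gn = nn.GroupNorm(c, c)
        self.conv1x1 = nn.Conv2d(c, c, kernel_size=1)
        self.conv3x3 = nn.Conv2d(c, c, kernel_size=3, padding=1)

    def forward(self, x):
        b, c, h, w = x.size()
        g = self.groups
        group_x = x.reshape(b * g, -1, h, w)                    # (b*g, c/g, h, w)
        # 1x1 branch: encode directional context from the two 1D poolings
        x_h = self.pool_h(group_x)                              # (b*g, c/g, h, 1)
        x_w = self.pool_w(group_x).permute(0, 1, 3, 2)          # (b*g, c/g, w, 1)
        hw = self.conv1x1(torch.cat([x_h, x_w], dim=2))
        x_h, x_w = torch.split(hw, [h, w], dim=2)
        x1 = self.gn(group_x * x_h.sigmoid() * x_w.permute(0, 1, 3, 2).sigmoid())
        # 3x3 branch: local multi-scale context
        x2 = self.conv3x3(group_x)
        # cross-spatial learning: each branch's pooled descriptor reweights the other
        x11 = self.softmax(self.agp(x1).reshape(b * g, -1, 1).permute(0, 2, 1))
        x12 = x2.reshape(b * g, c // g, -1)
        x21 = self.softmax(self.agp(x2).reshape(b * g, -1, 1).permute(0, 2, 1))
        x22 = x1.reshape(b * g, c // g, -1)
        weights = (x11 @ x12 + x21 @ x22).reshape(b * g, 1, h, w)
        return (group_x * weights.sigmoid()).reshape(b, c, h, w)
```

As a usage check, `EMA(256)(torch.randn(1, 256, 40, 40))` should return a tensor of the same shape, so the block can be dropped after any convolutional stage whose channel count is divisible by `factor`.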

Bewear777 commented 7 months ago

> Attention modules are usually placed in the deeper network layers. In our paper, all attention modules are inserted after (behind) the last two C3 layers of the yolov5s backbone (I think the feature representation of the shallow layers is already strong enough, so they can do without an attention mechanism). For more information, you can refer to [yolov5_research](https://github.com/positive666/yolov5_research).

In YOLO, if EMA_attention is placed behind the detection head, what should the factor be set to for grouping? If the output has 64 or 32 channels, does the factor still need to be set to 32?
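Not an authoritative answer, but going by the grouping logic in the sketch above (an assumption based on that sketch, not the authors' guidance): `factor` is the number of channel groups, so the hard constraint is that it divides the channel count; with a 64- or 32-channel output, `factor=32` still divides evenly but leaves only 2 or 1 channels per group. A quick sanity check:

```python
import torch

# Assumes the EMA class from the sketch above is in scope. Checks which
# (channels, factor) combinations are valid and how many channels each
# group would contain; the output shape is always unchanged.
for channels, factor in [(1024, 32), (64, 32), (32, 32), (64, 8), (48, 32)]:
    if channels % factor != 0:
        print(f"channels={channels}, factor={factor}: invalid (not divisible)")
        continue
    y = EMA(channels, factor=factor)(torch.randn(1, channels, 40, 40))
    assert y.shape == (1, channels, 40, 40)
    print(f"channels={channels}, factor={factor}: ok, {channels // factor} channels per group")
```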

Sheeran2000 commented 1 day ago

> Attention modules are usually placed in the deeper network layers. In our paper, all attention modules are inserted after (behind) the last two C3 layers of the yolov5s backbone (I think the feature representation of the shallow layers is already strong enough, so they can do without an attention mechanism). For more information, you can refer to [yolov5_research](https://github.com/positive666/yolov5_research).

> In YOLO, if EMA_attention is placed behind the detection head, what should the factor be set to for grouping? If the output has 64 or 32 channels, does the factor still need to be set to 32?

Have you found the best setting for the factor parameter?