
Ultralytics YOLO11 🚀
https://docs.ultralytics.com
GNU Affero General Public License v3.0

YOLOv8 CBAM adding issue #13284

Closed sergenerbay closed 4 months ago

sergenerbay commented 6 months ago

Search before asking

Question

Hello, I'm trying to configure yolov8.yaml with the CBAM module, but I'm having an issue. How can I fix this problem? I researched it in the issues section, but I couldn't resolve it.

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]

# YOLOv8.0n head
head:

Additional

No response

github-actions[bot] commented 6 months ago

👋 Hello @sergenerbay, thank you for your interest in Ultralytics YOLOv8 🚀! We recommend a visit to the Docs for new users where you can find many Python and CLI usage examples and where many of the most common questions may already be answered.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Join the vibrant Ultralytics Discord 🎧 community for real-time conversations and collaborations. This platform offers a perfect space to inquire, showcase your work, and connect with fellow Ultralytics users.

Install

Pip install the ultralytics package including all requirements in a Python>=3.8 environment with PyTorch>=1.8.

pip install ultralytics

Environments

YOLOv8 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

Ultralytics CI

If this badge is green, all Ultralytics CI tests are currently passing. CI tests verify correct operation of all YOLOv8 Modes and Tasks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

glenn-jocher commented 6 months ago

@sergenerbay hello,

Thank you for reaching out and providing detailed information about your issue with integrating the CBAM module into the YOLOv8 configuration. It seems like there might be a compatibility or configuration issue with how the CBAM module is integrated into your model architecture.

To better assist you, could you please provide the specific error message or behavior you're encountering when you add the CBAM module? Knowing the exact error will help in diagnosing the issue more effectively.

In the meantime, ensure that all module dependencies for CBAM are correctly installed and that the module's input and output dimensions align with your model's architecture at the point of integration.
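As a quick sanity check (a minimal sketch, assuming the CBAM class bundled with the ultralytics package in ultralytics.nn.modules.conv), you can confirm that the block preserves the feature-map shape at your chosen insertion point:

import torch
from ultralytics.nn.modules.conv import CBAM  # CBAM ships with the ultralytics package

x = torch.randn(1, 1024, 20, 20)  # dummy P5-sized feature map
cbam = CBAM(1024)                 # channel count must match the incoming feature map
print(cbam(x).shape)              # CBAM is shape-preserving: torch.Size([1, 1024, 20, 20])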

Looking forward to your response to assist you further!

sergenerbay commented 6 months ago

Hello @glenn-jocher, I solved the kernel size problem, but now I'm getting another error.
Also, I'm using the CBAM module from the ultralytics library.

conv.py

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel-attention module https://github.com/open-mmlab/mmdetection/tree/v3.0.0rc1/configs/rtmdet."""

    def __init__(self, channels: int) -> None:
        """Initializes the class and sets the basic configurations and instance variables required."""
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Conv2d(channels, channels, 1, 1, 0, bias=True)
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """Applies forward pass using activation on convolutions of the input, optionally using batch normalization."""
        return x * self.act(self.fc(self.pool(x)))

class SpatialAttention(nn.Module):
    """Spatial-attention module."""

    def __init__(self, kernel_size=7):
        """Initialize Spatial-attention module with kernel size argument."""
        super().__init__()
        print(kernel_size)
        assert kernel_size in {3, 7}, "kernel size must be 3 or 7"
        padding = 3 if kernel_size == 7 else 1
        self.cv1 = nn.Conv2d(2, 1, kernel_size, padding=padding, bias=False)
        self.act = nn.Sigmoid()

    def forward(self, x):
        """Apply channel and spatial attention on input for feature recalibration."""
        return x * self.act(self.cv1(torch.cat([torch.mean(x, 1, keepdim=True), torch.max(x, 1, keepdim=True)[0]], 1)))

class CBAM(nn.Module):
    """Convolutional Block Attention Module."""

    def __init__(self, c1, kernel_size=7):
        """Initialize CBAM with given input channel (c1) and kernel size."""
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        kernel_size = 7  # hard-coded kernel size; the author's workaround for the earlier kernel-size error
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        """Applies the forward pass through C1 module."""
        return self.spatial_attention(self.channel_attention(x))

I am getting the error below.

Traceback (most recent call last):
  File "/home/sergen/v10/denemeee.py", line 2, in <module>
    model = YOLO("yolov8m-cbam.yaml")
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/models/yolo/model.py", line 23, in __init__
    super().__init__(model=model, task=task, verbose=verbose)
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/engine/model.py", line 150, in __init__
    self._new(model, task=task, verbose=verbose)
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/engine/model.py", line 219, in _new
    self.model = (model or self._smart_load("model"))(cfg_dict, verbose=verbose and RANK == -1)  # build model
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 288, in __init__
    self.model, self.save = parse_model(deepcopy(self.yaml), ch=ch, verbose=verbose)  # model, savelist
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 921, in parse_model
    args.append([ch[x] for x in f])
  File "/home/sergen/.local/lib/python3.10/site-packages/ultralytics/nn/tasks.py", line 921, in <listcomp>
    args.append([ch[x] for x in f])
IndexError: list index out of range

sergenerbay commented 6 months ago

I solved my problem. :) I had to make the code change carefully. The problem was in the YAML file: the layer indices referenced in the detection head were no longer compatible after inserting the CBAM layer, so I had to update them.

glenn-jocher commented 6 months ago

Hello @sergenerbay,

Great to hear that you've resolved the issue! 🎉 Careful adjustments in the YAML configuration can indeed make a significant difference. If you have any more questions or run into further issues, feel free to reach out. Happy coding with YOLOv8!

sergenerbay commented 6 months ago

I actually want to input two images (thermal and RGB) in the next step. I want to pass each of these inputs through separate backbones and then fuse the information. Is this possible?

glenn-jocher commented 6 months ago

Hello,

Yes, it's definitely possible to input two different types of images (thermal and RGB) and process them through separate backbones before fusing the information. You'll need to modify your model architecture to handle dual inputs and ensure that the features extracted from both backbones are compatible for fusion.

You might consider using a two-stream network where each stream processes one type of input. Afterward, you can merge these streams using various fusion techniques such as concatenation, element-wise addition, or more complex operations depending on your application's requirements.
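A minimal sketch of the two-stream idea (an illustrative standalone PyTorch example, not the actual YOLOv8 implementation; the stem layers, single-channel thermal input, and concatenation fusion are assumptions):

import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    """Toy two-stream network: separate stems for RGB and thermal, fused by channel concatenation."""

    def __init__(self, out_channels=64):
        super().__init__()
        # One lightweight stem per modality (RGB has 3 channels; thermal is assumed single-channel).
        self.rgb_stem = nn.Sequential(nn.Conv2d(3, out_channels, 3, 2, 1), nn.SiLU())
        self.thermal_stem = nn.Sequential(nn.Conv2d(1, out_channels, 3, 2, 1), nn.SiLU())
        # Fuse by concatenation, then reduce back to a single feature map.
        self.fuse = nn.Conv2d(out_channels * 2, out_channels, 1)

    def forward(self, rgb, thermal):
        f = torch.cat([self.rgb_stem(rgb), self.thermal_stem(thermal)], dim=1)
        return self.fuse(f)

# Example: fused features for a 640x640 RGB/thermal pair.
model = TwoStreamFusion()
out = model(torch.randn(1, 3, 640, 640), torch.randn(1, 1, 640, 640))
print(out.shape)  # torch.Size([1, 64, 320, 320])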

If you need specific guidance on how to implement this in YOLOv8, feel free to ask!

kashinou commented 5 months ago

@sergenerbay

Hello,

I have encountered the same error and have not been able to resolve it, so I would like to know what changes you made to solve the problem.

sergenerbay commented 5 months ago

Hello, if you are asking how I solved the first problem: I added the CBAM module to my structure as [-1, 1, CBAM, [1024]].

Because I used the CBAM module this way, I defined the kernel_size inside it, which solved the first problem.

class CBAM(nn.Module):
    """Convolutional Block Attention Module."""

    def __init__(self, c1, kernel_size=7):
        """Initialize CBAM with given input channel (c1) and kernel size."""
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        kernel_size = 7  # hard-coded kernel size (the changed line)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        """Applies the forward pass through C1 module."""
        return self.spatial_attention(self.channel_attention(x))

glenn-jocher commented 5 months ago

Hello @sergenerbay,

Thank you for sharing your solution! It's great to see that you were able to resolve the issue by defining the kernel_size inside the CBAM module. This is indeed a crucial step to ensure the module works correctly within the YOLOv8 architecture.

For others encountering similar issues, here's a quick summary of the solution:

  1. Define the Kernel Size: Ensure that the kernel_size is defined within the CBAM class.
  2. Modify the YAML Configuration: Make sure your YAML configuration correctly references the CBAM module.

Here's the updated CBAM class for reference:

class CBAM(nn.Module):
    """Convolutional Block Attention Module."""

    def __init__(self, c1, kernel_size=7):
        """Initialize CBAM with given input channel (c1) and kernel size."""
        super().__init__()
        self.channel_attention = ChannelAttention(c1)
        self.spatial_attention = SpatialAttention(kernel_size)

    def forward(self, x):
        """Applies the forward pass through C1 module."""
        return self.spatial_attention(self.channel_attention(x))

Additionally, ensure your YAML configuration includes the CBAM module correctly:

# YOLOv8.0n backbone
backbone:
  # [from, repeats, module, args]
  - [-1, 1, Conv, [64, 3, 2]] # 0-P1/2
  - [-1, 1, Conv, [128, 3, 2]] # 1-P2/4
  - [-1, 3, C2f, [128, True]]
  - [-1, 1, Conv, [256, 3, 2]] # 3-P3/8
  - [-1, 6, C2f, [256, True]]
  - [-1, 1, Conv, [512, 3, 2]] # 5-P4/16
  - [-1, 6, C2f, [512, True]]
  - [-1, 1, Conv, [1024, 3, 2]] # 7-P5/32
  - [-1, 3, C2f, [1024, True]]
  - [-1, 1, CBAM, [1024]] # 9
  - [-1, 1, SPPF, [1024, 5]] # 10
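Note that inserting CBAM shifts the indices of every later layer, so the from values in the head must be updated to match (this mismatch is what produced the IndexError above). A quick way to confirm the modified config builds (a minimal sketch; yolov8m-cbam.yaml is the custom file from this thread):

from ultralytics import YOLO

model = YOLO("yolov8m-cbam.yaml")  # parse_model raises an IndexError here if the head indices are stale
model.info()                       # layer-by-layer summary; verify CBAM appears at the expected index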

If anyone else is facing similar issues, please ensure you are using the latest versions of torch and ultralytics. If the problem persists, providing a minimum reproducible example would be very helpful for further investigation. You can find more details on creating a reproducible example here.

Feel free to reach out if you have any more questions or need further assistance. Happy coding! 🚀

kashinou commented 5 months ago

Thank you for comments.

I solved the problem by defining the kernel size in the structure as you have shown me. Thank you very much.

glenn-jocher commented 5 months ago

Hello @kashinou,

I'm glad to hear that you were able to resolve the issue by defining the kernel size in the CBAM structure! 🎉 It's always rewarding to see solutions come together.

If you have any further questions or run into any other issues, feel free to reach out. We're here to help! Also, if you have any interesting results or insights from your work with YOLOv8 and CBAM, we'd love to hear about them. Sharing your experiences can be incredibly valuable to the community.

Happy coding and best of luck with your project! 🚀

github-actions[bot] commented 4 months ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐

phoenix-JM commented 1 month ago

Do we just need to update .yaml file to add CBAM layer or we need to update tasks.py file as well ??

glenn-jocher commented 1 month ago

You need to update both the .yaml file to include the CBAM layer and ensure the tasks.py file recognizes the new module.
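In practice, "recognizing the module in tasks.py" means the class name used in the YAML must be importable in the namespace that parse_model uses to resolve modules. For CBAM, which already ships with ultralytics, this is typically just a matter of the existing import; a fully custom block would need to be added under ultralytics/nn/modules and imported in tasks.py. A minimal check that the built model actually contains the layer (a sketch; yolov8m-cbam.yaml is the custom config from this thread):

from ultralytics import YOLO
from ultralytics.nn.modules import CBAM  # bundled with the ultralytics package

model = YOLO("yolov8m-cbam.yaml")
assert any(isinstance(m, CBAM) for m in model.model.modules()), "no CBAM layer found in the built model"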