Open SilverStarCoder opened 1 year ago
I have also encountered this problem. I am using a segmentation model based on v8 as well, and I have observed poor detection quality for long, thin objects, which leads to missed or incomplete detections. After confirming that the annotations are correct, I disabled some data augmentation strategies that could potentially discard certain labels, to ensure these instances are not lost. However, the results are still not satisfactory. I also noticed that this phenomenon is more likely to occur with "thin" and "slender" objects: for example, some objects that are less than half the width of the image but very thin exhibit the same issue.
Currently, I suspect that the anchor settings may play a role here. Although YOLOv8 is an anchor-free model, there may be operations analogous to anchors. I hope someone who has encountered the same problem, or who has adjusted the anchor-like components in v8, can provide some assistance. Thank you!
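For anyone trying the same thing, the augmentations most likely to crop or drop thin instances can be switched off through the standard ultralytics training arguments. A minimal sketch, assuming a v8 segmentation checkpoint and a placeholder dataset config (the specific values are illustrative, not recommendations):

```python
from ultralytics import YOLO

# Any YOLOv8 segmentation variant works the same way.
model = YOLO("yolov8s-seg.pt")

# Turn off the augmentations that can crop instances at image/tile borders or
# shrink thin masks further. Setting a value to 0.0 disables that augmentation.
model.train(
    data="my_dataset.yaml",  # placeholder dataset config
    imgsz=1280,
    epochs=100,
    mosaic=0.0,      # mosaic can cut long objects at tile borders
    mixup=0.0,
    copy_paste=0.0,
    scale=0.0,       # random scaling can thin objects out further
    translate=0.0,   # translation can push part of an object out of frame
    degrees=0.0,     # rotation distorts very elongated masks
)
```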
@Alingo2 , Hi, thank you for the reply. I have tried adjusting the `reg_max` parameter in `nn/modules/head.py`, as was suggested a few times before:
- The masks generated(Segmentation) are not accurate on my custom dataset. How to improve that? #4973
- What is the significance of the Reg_Max parameter? How to adjust its value? #3072
- prediction box is too small for the object, if bounding boxes is to big Even take up the whole picture, how did i train on this dataset #2949
- The bounding box prediction is too small for the object #2843
- Smaller object width in yolov8 #3684
Hopefully adjusting it will work for you; in my case, it unfortunately did not 😭 and now I am trying different strategies to address the issue.
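For readers arriving from the linked issues: the `reg_max` change is an edit to the detection head source rather than a training argument. A minimal sketch of where it typically lives, assuming an editable ultralytics checkout (the class is abbreviated here and the exact contents vary between versions):

```python
# ultralytics/nn/modules/head.py (abbreviated sketch, not the full class)
import torch.nn as nn

class Detect(nn.Module):
    """YOLOv8 detection head (abridged)."""

    def __init__(self, nc=80, ch=()):
        super().__init__()
        self.nc = nc        # number of classes
        self.reg_max = 16   # DFL bins per box side; the issues above discuss trying 32 or 64
        # ... remaining layers unchanged ...
```

Because `reg_max` sets the number of DFL channels in the head, changing it changes the head's output shape, so the model generally has to be retrained with the new value rather than resuming from pretrained head weights.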
Thank you for your advice! I'll try this method and update the results! Hoping you can solve it soon!
@glenn-jocher , my objects with high aspect ratios (long and thin) are still not being detected correctly after adjusting the `reg_max` parameter when my image size is 1280 or larger. I have tried values of 32 and 64 and observed little difference in the results. I have also experimented with increasing the box loss gain, the DFL gain, and even the initial learning rate, but saw no improved detections for these objects. Is there an alternative approach I could try, like forcing it to ONLY use a set of bounding boxes of my desired size and aspect ratio that I specify?
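For reference, the knobs mentioned above are all exposed as standard training arguments (defaults are box=7.5, dfl=1.5, lr0=0.01), so the sweep can be scripted directly. A minimal sketch with illustrative values and placeholder paths:

```python
from ultralytics import YOLO

model = YOLO("yolov8l-seg.pt")  # placeholder weights

model.train(
    data="my_dataset.yaml",  # placeholder dataset config
    imgsz=1280,
    box=10.0,   # box loss gain (default 7.5)
    dfl=3.0,    # distribution focal loss gain (default 1.5)
    lr0=0.02,   # initial learning rate (default 0.01)
)
```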
@Alingo2 hi there,
Although YOLOv8 does not depend on predefined anchor boxes, it does take into consideration the size and aspect ratio of the detected objects during training. However, you can't force YOLOv8 to only use specific bounding box sizes or aspect ratios as it's not how the architecture is designed.
The difficulty in detecting objects with high aspect ratios may be partly due to the model's design of distributing prediction and regression tasks equally across all scales. Very small, large, or elongated objects can sometimes present difficulties for such models.
For high aspect ratio objects, one challenge is that they may span multiple grid cells in the detection layer, and each cell might only partially detect the object, leading to the complete object not being detected. This effect can be magnified with larger image sizes.
Considering your attempts at adjusting `reg_max` didn't change the outcome, you might want to adapt your preprocessing steps if possible. For instance, images could be adjusted so that elongated objects occupy a sectional part of the image rather than spanning its entire length, allowing them to be encompassed within fewer grid cells.
Switching from square to rectangular images while maintaining the core aspect ratio of your objects could be beneficial. For instance, if your objects are vertically elongated, using images with a higher vertical resolution may help.
Alternatively, you could try training separate models for different classes or types of objects in your images, if appropriate. This approach, however, might be more complex and may not always be feasible.
Nonetheless, I understand your concerns and we will definitely take them into consideration for future developments of the model. We are continuously working on improving the model's performance on a wide range of objects and conditions.
Thanks for testing, experimenting, and providing your observations. They are indeed valuable to the community - it's through inputs like these that we can make continuous improvements.
Do let us know if you see improvement from making any of the above changes or share any other successful alternative approaches you come across.
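As one concrete way to act on the preprocessing suggestion, the full-resolution frames can be sliced into overlapping tiles so that thin objects keep their native pixel thickness and each object spans fewer cells relative to the crop. A minimal sketch (tile size and overlap are arbitrary choices, and the labels/masks would have to be shifted and clipped per tile, which is not shown):

```python
import cv2

def slice_into_tiles(image, tile=1280, overlap=256):
    """Cut a high-resolution frame into overlapping square tiles.

    Returns a list of (x0, y0, crop) triples; annotations must be translated
    by (x0, y0) and clipped to each tile separately.
    """
    h, w = image.shape[:2]
    step = tile - overlap
    tiles = []
    for y0 in range(0, max(h - overlap, 1), step):
        for x0 in range(0, max(w - overlap, 1), step):
            crop = image[y0:min(y0 + tile, h), x0:min(x0 + tile, w)]
            tiles.append((x0, y0, crop))
    return tiles

# Example: a 3840x2160 frame yields a grid of overlapping 1280px tiles
# (tiles at the right and bottom edges are clipped to the frame).
frame = cv2.imread("frame.jpg")  # placeholder path
tiles = slice_into_tiles(frame)
```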
Thank you very much for your patient and detailed response! I have solved the problem by setting `reg_max` to 32, and it hasn't resulted in a significant increase in computation (around 5%). Thank you once again for your tremendous assistance!
@Alingo2 I'm glad to hear that adjusting the `reg_max` parameter to 32 has addressed the issue for you and hasn't significantly impacted the computation time. It's a pleasure to be able to assist. Thanks for your patience in experimenting with the solutions and for sharing your results. It's feedback like this that enables us to continuously improve. If you encounter any other issues or concerns, please feel free to reach out. Happy coding!
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Hello there.
@Alingo2 Hello! If you have any questions or anything to discuss related to YOLOv8, feel free to share it with us. We'll be happy to help!
Hi, you mentioned that changing `reg_max` to 32 didn't change the result, and later that it did. What is the difference between those two cases? Or can you confirm that changing `reg_max` to 32 actually works?
Search before asking
Question
I am attempting to train a YOLOv8-Seg model on my unique dataset and have encountered a specific issue. My dataset contains images with objects that have a very elongated shape, spanning nearly the entire length of the image. Additionally, these objects are thin, and there are other types of objects with different sizes and shapes.
When I resize my images to a 640x640 resolution (the original image size is 3840x2160), there is a significant loss of detail due to the thin nature of my objects. At a 1280 image size, only certain portions of these elongated objects get detected. At even larger resolutions, no detections occur at all for these elongated objects (only the smaller, 'simpler' shaped objects get detected).
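To put a rough number on that loss of detail, here is a quick back-of-the-envelope check (the object thickness below is an assumed example value, not measured from the dataset):

```python
# Approximate thickness of a thin object after resizing the long side of a
# 3840x2160 frame down to the training image size.
orig_w = 3840
thin_px = 12  # assumed thickness in the original frame, in pixels

for imgsz in (640, 1280, 1920):
    scale = imgsz / orig_w
    print(f"imgsz={imgsz}: ~{thin_px * scale:.1f} px thick after resizing")
# imgsz=640 -> ~2.0 px, imgsz=1280 -> ~4.0 px, imgsz=1920 -> ~6.0 px
```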
Considering my observations and given that YOLOv8 is "anchor-free", I'm leaning towards the hypothesis that this issue might stem from the default configurations related to grid cells or other similar structures. I understand that YOLOv8 does not rely on predefined anchors, allowing more flexibility in detecting objects of varying sizes and aspect ratios. However, I believe the default settings for this are not well suited for my dataset.
Could someone guide me on adjusting relevant settings to cater to my specific data characteristics?
Thank you in advance for your help!
Additional
No response