Open EmmaLevine94 opened 3 months ago
👋 Hello @EmmaLevine94, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.
Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:
git clone https://github.com/ultralytics/yolov5 # clone
cd yolov5
pip install -r requirements.txt # install
YOLOv5 may be run in any of our up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled).
If the YOLOv5 CI badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.
We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!
Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.
Check out our YOLOv8 Docs for details and get started with:
pip install ultralytics
@EmmaLevine94 hello! Thank you for your question.
To categorize objects based on the size of their bounding boxes during training, you can access the bounding box information from the training data. YOLOv5 carries this information in the labels (the targets tensor) passed through the dataloader during training.
Here's a step-by-step approach to achieve this:
Access Bounding Box Information: You can modify the training script to access the bounding box information. The bounding boxes are in the format [class, x_center, y_center, width, height], with coordinates normalized to the image size.
Define Size Categories: You can define your thresholds for small and large objects. For example:
small_threshold = 0.1  # example cutoff on normalized box area (width * height) for small objects
large_threshold = 0.5  # example cutoff on normalized box area for large objects
Categorize Bounding Boxes: During the training loop, you can categorize the bounding boxes based on their size. Here's a simplified example:
for batch_i, (imgs, targets, paths, shapes) in enumerate(dataloader):
    for target in targets:
        # Each target row is [image_index, class, x_center, y_center, width, height] (normalized)
        _, _, _, _, width, height = target
        area = width * height
        if area < small_threshold:
            category = 'small'
        elif area > large_threshold:
            category = 'large'
        else:
            category = 'medium'
        # You can now use this category information as needed
Modify Training Script: You can integrate the above logic into the training script (train.py). Make sure to handle the bounding box information appropriately within the training loop; a minimal sketch is shown below.
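To make the integration concrete, here is a minimal sketch of how the categorization could sit at the top of the batch loop in train.py. It assumes the same dataloader and targets layout as the snippet above; the thresholds and the size_counts bookkeeping are illustrative assumptions, not existing YOLOv5 code:

from collections import Counter

small_threshold = 0.1   # assumed cutoff: normalized box area below this counts as "small"
large_threshold = 0.5   # assumed cutoff: normalized box area above this counts as "large"
size_counts = Counter() # illustrative bookkeeping, not part of train.py

for batch_i, (imgs, targets, paths, _) in enumerate(dataloader):
    # targets has shape (n, 6): [image_index, class, x_center, y_center, width, height],
    # with box coordinates normalized to 0-1
    areas = targets[:, 4] * targets[:, 5]
    size_counts['small'] += int((areas < small_threshold).sum())
    size_counts['large'] += int((areas > large_threshold).sum())
    size_counts['medium'] += int(((areas >= small_threshold) & (areas <= large_threshold)).sum())
    # ... the usual forward pass, loss computation and optimizer step continue here ...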
If you encounter any issues or have further questions, please ensure you are using the latest version of YOLOv5, as updates often include bug fixes and improvements.
Feel free to reach out if you need more assistance. Happy coding! 😊
@glenn-jocher Hello, thanks for your attention. I am trying to implement this solution. Please do not close the issue so that I can ask further questions if any come up. Thank you.
Hello @EmmaLevine94,
Thank you for your message and for working on implementing the solution! I'm glad to hear that you're making progress.
Feel free to keep this issue open and ask any further questions you might have as you proceed. We're here to help and ensure you have all the support you need.
If you encounter any specific challenges or need additional code examples, please don't hesitate to reach out. We're committed to assisting you and the entire YOLO community.
Best of luck with your implementation! 😊
Hello @glenn-jocher, thank you for your help. Another question I have is regarding the confidence threshold. Where is this threshold set in the code? And can it be defined dynamically?
Hello @EmmaLevine94,
Thank you for your follow-up question!
The confidence threshold in YOLOv5 is used to filter out detections with low confidence scores. It can be found and adjusted in the inference scripts, such as detect.py.
In detect.py, the confidence threshold is set using the --conf-thres argument. Here is how it is defined:
parser.add_argument('--conf-thres', type=float, default=0.25, help='object confidence threshold')
You can dynamically set this threshold when running the script by passing the desired value as an argument:
python detect.py --conf-thres 0.5
If you want to adjust the confidence threshold dynamically within the code, you can modify the relevant part of the script. For example, you can set the threshold based on certain conditions or parameters:
# Example of dynamically setting the confidence threshold
conf_thres = 0.25  # Default value
if some_condition:
    conf_thres = 0.5  # Adjust based on your criteria

# Use the dynamically set threshold in your detection logic
pred = model(imgs, augment=opt.augment, visualize=increment_path(save_dir / Path(path).stem, mkdir=True) if opt.visualize else False)
pred = non_max_suppression(pred, conf_thres, opt.iou_thres, classes=opt.classes, agnostic=opt.agnostic_nms, max_det=opt.max_det)
For more advanced customization, you might want to delve deeper into the non_max_suppression function in utils/general.py, where the confidence threshold is applied during the filtering process.
If you encounter any issues or need further assistance, please ensure you are using the latest version of YOLOv5, as updates often include bug fixes and improvements.
Feel free to reach out with any more questions. We're here to help! 😊
Thank you very much for your explanation @glenn-jocher. Is the confidence threshold not used during the training process? As far as I understand, detect.py only makes predictions and does not update network parameters; I want to dynamically change the confidence threshold during the training process. Can you give me your email so I can talk to you about my idea?
The confidence threshold is primarily used during inference in detect.py and not during training. For training, you might want to look into modifying the loss function or the way predictions are filtered within the training loop; a rough sketch of the latter is shown below. Unfortunately, we do not provide private support via email. Please feel free to continue the discussion here.
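As a rough sketch of that filtering idea, the example below applies different confidence thresholds to small and large boxes after NMS. The size_aware_filter helper, the pixel-area cutoff and the two thresholds are illustrative assumptions, not part of YOLOv5; only the [x1, y1, x2, y2, conf, cls] output layout of non_max_suppression is taken from the codebase:

import torch

CONF_SMALL = 0.40     # assumed stricter threshold for small boxes
CONF_LARGE = 0.25     # assumed looser threshold for large boxes
SMALL_AREA = 32 * 32  # assumed pixel-area cutoff separating small from large boxes

def size_aware_filter(detections):
    # detections: (n, 6) tensor returned per image by non_max_suppression -> [x1, y1, x2, y2, conf, cls]
    w = detections[:, 2] - detections[:, 0]
    h = detections[:, 3] - detections[:, 1]
    area = w * h
    conf = detections[:, 4]
    # Keep a detection only if its confidence clears the threshold for its size bucket
    keep = torch.where(area < SMALL_AREA, conf > CONF_SMALL, conf > CONF_LARGE)
    return detections[keep]

# Usage sketch inside a validation/inference loop:
# pred = model(imgs)
# pred = non_max_suppression(pred, conf_thres=0.001, iou_thres=0.45)  # low base threshold
# pred = [size_aware_filter(det) for det in pred]

Running NMS with a very low base conf_thres and re-filtering afterwards keeps the standard pipeline intact while letting the effective threshold vary with box size.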
Search before asking
Question
Hello. I searched but could not find a similar question. I want to divide objects into small and large categories based on the size of the bounding boxes produced during training, and I want the threshold I define to be different for each of these categories. How can I access the bounding boxes that are generated during training? In which module are the network's predictions on the training data generated? Thank you
Additional
No response