Closed: @yustaub closed this issue 3 years ago
Hi @glenn-jocher, I added false positive images to the dataset and retrained the model. When testing the model, the false positive issue is resolved, but it also increases the false negatives. Why is this happening, and how can I handle the false negatives?
@Mps24-7uk hi,
Including false positive images in the dataset can help reduce the issue of false positives during testing. However, it is possible that this approach could lead to an increase in false negatives. False negatives occur when the model fails to detect an object that is present in the image.
To handle false negatives, you can consider the following steps:
Data augmentation: Increase the variety and diversity of your training data by applying various data augmentation techniques such as random cropping, rotation, and scaling. This can help the model learn to detect objects from different perspectives and in different conditions.
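For reference, the augmentation knobs above map to keys in YOLOv5's hyp YAML files (the keys mirror `hyp.scratch-low.yaml`; the values below are illustrative assumptions, not tuned recommendations):

```python
import os
import tempfile

# Sketch: raise a few YOLOv5 augmentation hyperparameters and write them
# to a custom hyp file.  Values here are illustrative, not recommendations.
hyp = {
    "hsv_h": 0.015,    # HSV-Hue augmentation (fraction)
    "hsv_s": 0.7,      # HSV-Saturation augmentation (fraction)
    "hsv_v": 0.4,      # HSV-Value augmentation (fraction)
    "degrees": 10.0,   # random rotation (+/- deg), raised from the 0.0 default
    "translate": 0.2,  # random translation (+/- fraction)
    "scale": 0.9,      # random scale gain (+/- fraction)
    "shear": 2.0,      # random shear (+/- deg)
    "fliplr": 0.5,     # horizontal flip probability
    "mosaic": 1.0,     # mosaic augmentation probability
}

hyp_path = os.path.join(tempfile.gettempdir(), "hyp.custom.yaml")
with open(hyp_path, "w") as f:
    for key, value in hyp.items():
        f.write(f"{key}: {value}\n")

# Then point training at it (hypothetical dataset paths):
#   python train.py --data data.yaml --hyp hyp.custom.yaml
```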
Adjusting anchor boxes: YOLOv5 uses anchor boxes to predict object positions and sizes. Reviewing the sizes and aspect ratios of your anchor boxes and adjusting them based on the characteristics of your dataset can help improve object detection accuracy.
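The anchor review can be sketched as a toy k-means over labeled box widths and heights. YOLOv5's built-in autoanchor (`utils/autoanchor.py`) does this properly at train time; this is only the idea, with made-up box sizes:

```python
import random

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """Naive k-means over (width, height) pairs to suggest anchor sizes.
    A sketch of the idea only -- YOLOv5's autoanchor is more sophisticated."""
    random.seed(seed)
    centers = random.sample(boxes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            # assign each box to the nearest center (squared distance)
            i = min(range(k),
                    key=lambda j: (w - centers[j][0]) ** 2 + (h - centers[j][1]) ** 2)
            clusters[i].append((w, h))
        for i, cl in enumerate(clusters):
            if cl:  # recompute each center as the mean of its cluster
                centers[i] = (sum(w for w, _ in cl) / len(cl),
                              sum(h for _, h in cl) / len(cl))
    return sorted(centers)

# Toy example: two obvious size groups should yield two distinct anchors.
boxes = [(10, 12), (11, 13), (9, 11), (100, 120), (110, 115), (95, 125)]
print(kmeans_anchors(boxes, k=2))
```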
Fine-tuning model architecture: Experiment with different YOLOv5 architectures such as `yolov5s`, `yolov5m`, or `yolov5x`. Each has a different size and capacity, which can affect the model's accuracy in detecting objects.
Balancing dataset: Ensure that your training dataset has a balanced representation of positive and negative samples. If your dataset is heavily imbalanced, the model might struggle to learn to detect objects accurately. Techniques such as oversampling or undersampling can help address this.
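The oversampling option might be sketched like this (file names and class IDs are made up; in practice you would duplicate label files too, or use a weighted sampler instead of copying images):

```python
from collections import Counter
import random

def oversample(samples, seed=0):
    """Duplicate minority-class samples until every class matches the
    largest class count.  `samples` is a list of (image_path, class_id)."""
    random.seed(seed)
    counts = Counter(cls for _, cls in samples)
    target = max(counts.values())
    balanced = list(samples)
    for cls, n in counts.items():
        pool = [s for s in samples if s[1] == cls]
        # draw with replacement until this class reaches the target count
        balanced.extend(random.choices(pool, k=target - n))
    return balanced

data = [("a.jpg", 0), ("b.jpg", 0), ("c.jpg", 0), ("d.jpg", 1)]
balanced = oversample(data)
print(Counter(cls for _, cls in balanced))  # both classes now equal
```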
Implementing these strategies can help improve your model's ability to handle false negatives and enhance overall object detection performance.
I hope this helps! If you have any further questions or need more assistance, feel free to ask.
Maybe I'm not very bright; in a classification task, should they be put in a folder named `background`, as if it were a different class?
@stormsson hi there! No worries, happy to clarify 😊. In a classification task using YOLOv5, background images should indeed be treated similarly to your object classes but with no objects of interest. So, you'll add these background images into your dataset without any corresponding label files (i.e., `.txt` files with annotations). This way, the model learns to distinguish between your objects of interest and background noise. You don't need to put them in a specific folder named `background`; just ensure they are part of your dataset without labels. Hope this helps!
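A quick way to see which images will act as background samples under this convention is to list the images with no matching label file (the `images/` + `labels/` layout and the file names below are illustrative):

```python
from pathlib import Path
import tempfile

def find_background_images(images_dir, labels_dir):
    """List images with no matching .txt label file -- under the usual
    YOLOv5 images/ + labels/ layout these act as background (negative)
    samples during training."""
    images_dir, labels_dir = Path(images_dir), Path(labels_dir)
    return [img.name
            for img in sorted(images_dir.glob("*.jpg"))
            if not (labels_dir / (img.stem + ".txt")).exists()]

# Demo on a throwaway dataset:
with tempfile.TemporaryDirectory() as tmp:
    imgs, lbls = Path(tmp) / "images", Path(tmp) / "labels"
    imgs.mkdir()
    lbls.mkdir()
    (imgs / "cat.jpg").touch()
    (lbls / "cat.txt").write_text("0 0.5 0.5 0.2 0.2\n")  # one labeled object
    (imgs / "bg.jpg").touch()  # background image: no label file at all
    backgrounds = find_background_images(imgs, lbls)
    print(backgrounds)  # ['bg.jpg']
```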
Hi @glenn-jocher, thank you for your, as always, quick response. It's still unclear to me: in the single-class classifier folder structure there are no `.txt` files, just the images in the folders:
- images / train / classname1 / *.jpg
- images / train / classname2 / *.jpg
- images / val / classname1 / *.jpg
- ...
and so on
Should they be put in `/images`? Maybe in `/images/train` AND `/images/val`? Something else?
UPDATE:
having the bg images in `/images/train/*.jpg` doesn't work; it returns
`ERROR ❌️ requires N classes, not N+1`
Hi @stormsson,
Thanks for your follow-up question, and apologies for any confusion. In the context of YOLOv5 for a single-class classification task, it seems there might have been a misunderstanding.
For YOLOv5, which primarily focuses on object detection, background images are used differently. However, if you're working on a classification-only project without the need for detecting object positions, the framework for handling "background" images might not apply as directly.
In a strict classification setup, every image is assumed to belong to a class. So, if you're trying to include "background" or "none" as a class to distinguish it from your object of interest, you would indeed treat it as a separate class. You should create a folder for it within both your training and validation datasets like so:
- images/train/background/*.jpg
- images/val/background/*.jpg
This approach essentially treats "background" as a class of its own, allowing the classifier to learn it as a distinct category. However, this goes beyond YOLOv5's primary use-case of object detection and into the territory of custom classification tasks.
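A small sketch of creating that layout, matching the `images/<split>/<classname>/` structure from earlier in the thread (root path and class names are illustrative, and this only creates the directories; you still copy your background images in):

```python
from pathlib import Path
import tempfile

def add_background_class(root, classes, splits=("train", "val")):
    """Create a 'background' folder alongside the existing class folders,
    following the images/<split>/<classname>/ layout."""
    for split in splits:
        for name in list(classes) + ["background"]:
            (Path(root) / "images" / split / name).mkdir(parents=True, exist_ok=True)

# Demo on a throwaway directory with made-up class names:
with tempfile.TemporaryDirectory() as tmp:
    add_background_class(tmp, ["classname1", "classname2"])
    created = sorted(p.name for p in (Path(tmp) / "images" / "train").iterdir())
    print(created)  # ['background', 'classname1', 'classname2']
```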
It's crucial to note that, in the context of the YOLOv5 architecture and its typical use case (object detection), background images (without objects of interest) are included directly in the training and validation sets without labels, to improve detection performance by teaching the model what is not an object. This strategy might not be directly applicable or necessary for a simple classification task without detection components.
For specific classification tasks or architectures beyond the typical usage of YOLOv5, custom handling and code modifications might be required.
I hope this clears up any confusion! Let me know if you have other questions.
@Razamalik4497 hello,
The `--img-dir` and `--bg-dir` arguments are not currently implemented in YOLOv5 training, which is why you are receiving an error when attempting to use them. To incorporate background images into your training pipeline, include them directly alongside your object images in the `train/images` and `valid/images` directories, and then label them accordingly as "0" using empty `.txt` files.

Additionally, it's important to ensure that your background images are representative of the scenes the model will encounter during inference, in order to improve generalization and avoid overfitting. You may also want to consider using data augmentation techniques, such as random cropping, scaling, and rotation, to generate additional background images.
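The empty-label-file step might be sketched as below (paths and file names are illustrative). Note that in YOLOv5 an empty, zero-byte label file marks an image containing zero objects; the file does not hold the digit 0:

```python
from pathlib import Path
import tempfile

def create_empty_labels(images_dir, labels_dir):
    """Write a zero-byte .txt label file for every image that lacks one.
    An empty (or absent) label file marks a background image with no objects."""
    labels_dir = Path(labels_dir)
    labels_dir.mkdir(parents=True, exist_ok=True)
    made = []
    for img in sorted(Path(images_dir).glob("*.jpg")):
        label = labels_dir / (img.stem + ".txt")
        if not label.exists():
            label.touch()  # empty file => no objects in this image
            made.append(label.name)
    return made

# Demo with a throwaway directory and a made-up file name:
with tempfile.TemporaryDirectory() as tmp:
    imgs = Path(tmp) / "train" / "images"
    lbls = Path(tmp) / "train" / "labels"
    imgs.mkdir(parents=True)
    (imgs / "street_empty.jpg").touch()  # background image, no objects
    made = create_empty_labels(imgs, lbls)
    label_size = (lbls / "street_empty.txt").stat().st_size
    print(made, label_size)  # ['street_empty.txt'] 0
```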
I hope this clarifies the situation! If you have any further questions or concerns, please don't hesitate to ask.
Best regards.
Can you please explain what you mean by keeping the label as "0" while making an empty annotation file?
❔Question
Hello, sir. In Tips for Best Training Results, you recommend about 0-10% background images to help reduce FPs. How should background images be used in training? Just add background images to the training images, or also add corresponding empty `.txt` labels to the training labels? I would very much appreciate your reply!