ultralytics / yolov5

YOLOv5 πŸš€ in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

Objects the model was trained on are detected by YOLOv5's detect.py, but when they overlap only one of them is recognized at a time #12836

Closed: jjang-yu closed this issue 4 months ago

jjang-yu commented 5 months ago

Search before asking

Question

I want overlapping objects to be recognized together, but YOLOv5's detect.py recognizes only one of the trained objects and not both at the same time. How can I make the Y and R attached to the tube be recognized as well? The precision-recall curve for both Y and R is at 0.995.

[attached screenshot]

import argparse
import csv
import os
import platform
import sys
from pathlib import Path

import torch

FILE = Path(__file__).resolve()
ROOT = FILE.parents[0]  # YOLOv5 root directory
if str(ROOT) not in sys.path:
    sys.path.append(str(ROOT))  # add ROOT to PATH
ROOT = Path(os.path.relpath(ROOT, Path.cwd()))  # relative

from ultralytics.utils.plotting import Annotator, colors, save_one_box

from models.common import DetectMultiBackend
from utils.dataloaders import IMG_FORMATS, VID_FORMATS, LoadImages, LoadScreenshots, LoadStreams
from utils.general import (
    LOGGER,
    Profile,
    check_file,
    check_img_size,
    check_imshow,
    check_requirements,
    colorstr,
    cv2,
    increment_path,
    non_max_suppression,
    print_args,
    scale_boxes,
    strip_optimizer,
    xyxy2xywh,
)
from utils.torch_utils import select_device, smart_inference_mode

@smart_inference_mode()
def run(
    weights=ROOT / "dataset/all.pt",  # model path or triton URL
    source=ROOT / "dataset/valid/images",  # file/dir/URL/glob/screen/0(webcam)
    data=ROOT / "dataset/all_data.yaml",  # dataset.yaml path
    imgsz=(320, 240),  # inference size (height, width)
    conf_thres=0.65,  # confidence threshold
    iou_thres=0.81,  # NMS IOU threshold
    max_det=1000,  # maximum detections per image
    device="",  # cuda device, i.e. 0 or 0,1,2,3 or cpu
    view_img=False,  # show results
    save_txt=False,  # save results to *.txt
    save_csv=False,  # save results in CSV format
    save_conf=False,  # save confidences in --save-txt labels
    save_crop=False,  # save cropped prediction boxes
    nosave=False,  # do not save images/videos
    classes=None,  # filter by class: --class 0, or --class 0 2 3
    agnostic_nms=True,  # class-agnostic NMS
    augment=False,  # augmented inference
    visualize=False,  # visualize features
    update=False,  # update all models
    project=ROOT / "runs/detect",  # save results to project/name
    name="exp",  # save results to project/name
    exist_ok=False,  # existing project/name ok, do not increment
    line_thickness=3,  # bounding box thickness (pixels)
    hide_labels=False,  # hide labels
    hide_conf=False,  # hide confidences
    half=False,  # use FP16 half-precision inference
    dnn=False,  # use OpenCV DNN for ONNX inference
    vid_stride=1,  # video frame-rate stride
):
    source = str(source)
    save_img = not nosave and not source.endswith(".txt")  # save inference images
    is_file = Path(source).suffix[1:] in (IMG_FORMATS + VID_FORMATS)
    is_url = source.lower().startswith(("rtsp://", "rtmp://", "http://", "https://"))
    webcam = source.isnumeric() or source.endswith(".streams") or (is_url and not is_file)
    screenshot = source.lower().startswith("screen")
    if is_url and is_file:
        source = check_file(source)  # download

    # Directories
    save_dir = increment_path(Path(project) / name, exist_ok=exist_ok)  # increment run
    (save_dir / "labels" if save_txt else save_dir).mkdir(parents=True, exist_ok=True)  # make dir

    # Load model
    device = select_device(device)
    model = DetectMultiBackend(weights, device=device, dnn=dnn, data=data, fp16=half)
    stride, names, pt = model.stride, model.names, model.pt
    imgsz = check_img_size(imgsz, s=stride)  # check image size

    # Dataloader
    bs = 16  # batch_size
    if webcam:
        view_img = check_imshow(warn=True)
        dataset = LoadStreams(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
        bs = len(dataset)
    elif screenshot:
        dataset = LoadScreenshots(source, img_size=imgsz, stride=stride, auto=pt)
    else:
        dataset = LoadImages(source, img_size=imgsz, stride=stride, auto=pt, vid_stride=vid_stride)
    vid_path, vid_writer = [None] * bs, [None] * bs

    # Run inference
    model.warmup(imgsz=(1 if pt or model.triton else bs, 3, *imgsz))  # warmup
    seen, windows, dt = 0, [], (Profile(device=device), Profile(device=device), Profile(device=device))
    for path, im, im0s, vid_cap, s in dataset:
        with dt[0]:
            im = torch.from_numpy(im).to(model.device)
            im = im.half() if model.fp16 else im.float()  # uint8 to fp16/32
            im /= 255  # 0 - 255 to 0.0 - 1.0
            if len(im.shape) == 3:
                im = im[None]  # expand for batch dim
            if model.xml and im.shape[0] > 1:
                ims = torch.chunk(im, im.shape[0], 0)

        # Inference
        with dt[1]:
            visualize = increment_path(save_dir / Path(path).stem, mkdir=True) if visualize else False
            if model.xml and im.shape[0] > 1:
                pred = None
                for image in ims:
                    if pred is None:
                        pred = model(image, augment=augment, visualize=visualize).unsqueeze(0)
                    else:
                        pred = torch.cat((pred, model(image, augment=augment, visualize=visualize).unsqueeze(0)), dim=0)
                pred = [pred, None]
            else:
                pred = model(im, augment=augment, visualize=visualize)
        # NMS
        with dt[2]:
            pred = non_max_suppression(pred, conf_thres, iou_thres, classes, agnostic_nms, max_det=max_det)

        # Second-stage classifier (optional)
        # pred = utils.general.apply_classifier(pred, classifier_model, im, im0s)

        # Define the path for the CSV file
        csv_path = save_dir / "predictions.csv"

        # Create or append to the CSV file
        def write_to_csv(image_name, prediction, confidence):
            """Writes prediction data for an image to a CSV file, appending if the file exists."""
            data = {"Image Name": image_name, "Prediction": prediction, "Confidence": confidence}
            with open(csv_path, mode="a", newline="") as f:
                writer = csv.DictWriter(f, fieldnames=data.keys())
                if not csv_path.is_file():
                    writer.writeheader()
                writer.writerow(data)

        # Process predictions
        for i, det in enumerate(pred):  # per image
            seen += 1
            if webcam:  # batch_size >= 1
                p, im0, frame = path[i], im0s[i].copy(), dataset.count
                s += f"{i}: "
            else:
                p, im0, frame = path, im0s.copy(), getattr(dataset, "frame", 0)

            p = Path(p)  # to Path
            save_path = str(save_dir / p.name)  # im.jpg
            txt_path = str(save_dir / "labels" / p.stem) + ("" if dataset.mode == "image" else f"_{frame}")  # im.txt
            s += "%gx%g " % im.shape[2:]  # print string
            gn = torch.tensor(im0.shape)[[1, 0, 1, 0]]  # normalization gain whwh
            imc = im0.copy() if save_crop else im0  # for save_crop
            annotator = Annotator(im0, line_width=line_thickness, example=str(names))
            if len(det):
                # Rescale boxes from img_size to im0 size
                det[:, :4] = scale_boxes(im.shape[2:], det[:, :4], im0.shape).round()

                # Print results
                for c in det[:, 5].unique():
                    n = (det[:, 5] == c).sum()  # detections per class
                    s += f"{n} {names[int(c)]}{'s' * (n > 1)}, "  # add to string

                # Write results
                for *xyxy, conf, cls in reversed(det):
                    c = int(cls)  # integer class
                    label = names[c] if hide_conf else f"{names[c]}"
                    confidence = float(conf)
                    confidence_str = f"{confidence:.2f}"

                    if save_csv:
                        write_to_csv(p.name, label, confidence_str)

                    if save_txt:  # Write to file
                        xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist()  # normalized xywh
                        line = (cls, *xywh, conf) if save_conf else (cls, *xywh)  # label format
                        with open(f"{txt_path}.txt", "a") as f:
                            f.write(("%g " * len(line)).rstrip() % line + "\n")

                    if save_img or save_crop or view_img:  # Add bbox to image
                        c = int(cls)  # integer class
                        label = None if hide_labels else (names[c] if hide_conf else f"{names[c]} {conf:.2f}")
                        annotator.box_label(xyxy, label, color=colors(c, True))
                    if save_crop:
                        save_one_box(xyxy, imc, file=save_dir / "crops" / names[c] / f"{p.stem}.jpg", BGR=True)

            # Stream results
            im0 = annotator.result()
            if view_img:
                if platform.system() == "Linux" and p not in windows:
                    windows.append(p)
                    cv2.namedWindow(str(p), cv2.WINDOW_NORMAL | cv2.WINDOW_KEEPRATIO)  # allow window resize (Linux)
                    cv2.resizeWindow(str(p), im0.shape[1], im0.shape[0])
                cv2.imshow(str(p), im0)
                cv2.waitKey(1)  # 1 millisecond

            # Save results (image with detections)
            if save_img:
                if dataset.mode == "image":
                    cv2.imwrite(save_path, im0)
                else:  # 'video' or 'stream'
                    if vid_path[i] != save_path:  # new video
                        vid_path[i] = save_path
                        if isinstance(vid_writer[i], cv2.VideoWriter):
                            vid_writer[i].release()  # release previous video writer
                        if vid_cap:  # video
                            # fps = vid_cap.get(cv2.CAP_PROP_FPS)
                            fps = 5
                            w = int(vid_cap.get(cv2.CAP_PROP_FRAME_WIDTH))
                            h = int(vid_cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
                        else:  # stream
                            fps, w, h = 5, im0.shape[1], im0.shape[0]
                        save_path = str(Path(save_path).with_suffix(".mp4"))  # force *.mp4 suffix on results videos
                        vid_writer[i] = cv2.VideoWriter(save_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
                    vid_writer[i].write(im0)

        # Print time (inference-only)
        LOGGER.info(f"{s}{'' if len(det) else '(no detections), '}{dt[1].dt * 1E3:.1f}ms")

    # Print results
    t = tuple(x.t / seen * 1e3 for x in dt)  # speeds per image
    LOGGER.info(f"Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {(1, 3, *imgsz)}" % t)
    if save_txt or save_img:
        s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ""
        LOGGER.info(f"Results saved to {colorstr('bold', save_dir)}{s}")
    if update:
        strip_optimizer(weights[0])  # update model (to fix SourceChangeWarning)

def parse_opt():
    """Parses command-line arguments for YOLOv5 detection, setting inference options and model configurations."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--weights", nargs="+", type=str, default=ROOT / "dataset/all.pt", help="model path or triton URL")
    parser.add_argument("--source", type=str, default=ROOT / "dataset/valid/images", help="file/dir/URL/glob/screen/0(webcam)")
    parser.add_argument("--data", type=str, default=ROOT / "dataset/all_data.yaml", help="(optional) dataset.yaml path")
    parser.add_argument("--imgsz", "--img", "--img-size", nargs="+", type=int, default=[320], help="inference size h,w")
    parser.add_argument("--conf-thres", type=float, default=0.65, help="confidence threshold")
    parser.add_argument("--iou-thres", type=float, default=0.81, help="NMS IoU threshold")
    parser.add_argument("--max-det", type=int, default=1000, help="maximum detections per image")
    parser.add_argument("--device", default="", help="cuda device, i.e. 0 or 0,1,2,3 or cpu")
    parser.add_argument("--view-img", action="store_true", help="show results")
    parser.add_argument("--save-txt", action="store_true", help="save results to *.txt")
    parser.add_argument("--save-csv", action="store_true", help="save results in CSV format")
    parser.add_argument("--save-conf", action="store_true", help="save confidences in --save-txt labels")
    parser.add_argument("--save-crop", action="store_true", help="save cropped prediction boxes")
    parser.add_argument("--nosave", action="store_true", help="do not save images/videos")
    parser.add_argument("--classes", nargs="+", type=int, help="filter by class: --classes 0, or --classes 0 2 3")
    parser.add_argument("--agnostic-nms", action="store_true", help="class-agnostic NMS")
    parser.add_argument("--augment", action="store_true", help="augmented inference")
    parser.add_argument("--visualize", action="store_true", help="visualize features")
    parser.add_argument("--update", action="store_true", help="update all models")
    parser.add_argument("--project", default=ROOT / "runs/detect", help="save results to project/name")
    parser.add_argument("--name", default="exp", help="save results to project/name")
    parser.add_argument("--exist-ok", action="store_true", help="existing project/name ok, do not increment")
    parser.add_argument("--line-thickness", default=3, type=int, help="bounding box thickness (pixels)")
    parser.add_argument("--hide-labels", default=False, action="store_true", help="hide labels")
    parser.add_argument("--hide-conf", default=False, action="store_true", help="hide confidences")
    parser.add_argument("--half", action="store_true", help="use FP16 half-precision inference")
    parser.add_argument("--dnn", action="store_true", help="use OpenCV DNN for ONNX inference")
    parser.add_argument("--vid-stride", type=int, default=10, help="video frame-rate stride")
    opt = parser.parse_args()
    opt.imgsz *= 2 if len(opt.imgsz) == 1 else 1  # expand
    print_args(vars(opt))
    return opt

def main(opt):
    """Executes YOLOv5 model inference with given options, checking requirements before running the model."""
    check_requirements(ROOT / "requirements.txt", exclude=("tensorboard", "thop"))
    run(**vars(opt))

if __name__ == "__main__":
    opt = parse_opt()
    main(opt)

Additional

No response

github-actions[bot] commented 5 months ago

πŸ‘‹ Hello @jjang-yu, thank you for your interest in YOLOv5 πŸš€! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a πŸ› Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any up-to-date verified environment, with all dependencies (including CUDA/CUDNN, Python and PyTorch) preinstalled.

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 πŸš€

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 πŸš€!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
glenn-jocher commented 5 months ago

@jjang-yu hello there! πŸ‘‹ It sounds like you're encountering issues with detecting overlapping objects with YOLOv5. Detecting overlapping objects can be challenging, as the model might struggle to identify objects that are not clearly separated or have significant overlap.

However, here are a few strategies you might consider to improve detection of overlapping objects:

  1. Adjusting the NMS IoU threshold: You mentioned setting iou_thres to 0.81. This threshold controls how much two boxes may overlap before the lower-scoring one is suppressed, so it directly affects which overlapping detections survive NMS. Experiment with other values, e.g., 0.5 to 0.7, to see what helps in your case (see the example command after this list).

  2. Data Augmentation: Ensure that your dataset contains examples of overlapping objects so the model learns to detect such cases during training. Augmenting your training data with images that simulate overlap may also be beneficial.

  3. Alternative Architectures or Models: While YOLOv5 is quite robust, sometimes testing with different models or newer versions can yield better results for specific scenarios such as overlapping objects. If possible, experiment with other architectures or even consider custom modifications to your current model.

  4. Fine-tuning: If the overlapping objects are of classes that are already well represented in the pre-trained models but still not detected well when overlapped, consider fine-tuning your model with more annotated data of such overlapping cases.
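
Regarding the threshold settings in point 1: the run() defaults pasted above use conf_thres=0.65 and iou_thres=0.81, while the stock detect.py defaults are 0.25 and 0.45. A minimal sketch of a more permissive run, reusing the weight/data paths from your script (the exact values are only a starting point for experimentation):

    # lower the confidence threshold so weaker overlapping detections are not filtered out
    python detect.py --weights dataset/all.pt --source dataset/valid/images --data dataset/all_data.yaml --img 320 --conf-thres 0.25 --iou-thres 0.45
    # note: passing --agnostic-nms applies NMS across all classes, so overlapping boxes of different classes can suppress each other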

Remember, modifying detection thresholds and retraining may require balancing overlap detection against an increase in false positives. Testing and iteration will be key to finding the optimal configuration for your specific use case.

Good luck, and I hope this helps!

jjang-yu commented 5 months ago

Thank you!πŸ™Œ I tried the first method, but it didn't work. Could you explain in more detail how to apply methods 2 to 4?πŸ™

glenn-jocher commented 5 months ago

Hello @jjang-yu! I'm here to help. Let's dive into methods 2 to 4 with some brief guidance:

  1. Data Augmentation: Augmenting your data is about creating additional training data from your existing dataset. Use techniques like flipping, scaling, cropping, and adding noise to images in your dataset. This can be done using data augmentation libraries or custom scripts. The key is to simulate real-world scenarios where objects overlap (a sketch of doing this through YOLOv5's training hyperparameters follows after this list).

  2. Alternative Architectures or Models: Exploring other architectures can be as simple as checking the latest releases on the YOLOv5 GitHub repository for any new models or updates. Also, consider exploring other detection models like YOLOv4 or Faster R-CNN for comparison.

  3. Fine-tuning: Start by preparing a dataset with many examples of the overlapping objects you're interested in detecting. Then, continue training (fine-tuning) your model on this dataset. This can be initiated with a command similar to:

    python train.py --img 640 --batch 16 --epochs 100 --data your_dataset.yaml --weights yolov5s.pt

    Adjust the --img, --batch, --epochs, --data, and --weights parameters as necessary for your specific scenario.
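
Combining points 1 and 3, one possible sketch is to fine-tune with stronger built-in training augmentation. The file name hyp.overlap.yaml and the probability value below are only placeholders, not recommended settings; mixup blends two training images, which naturally produces overlapping objects:

    cp data/hyps/hyp.scratch-low.yaml data/hyps/hyp.overlap.yaml  # copy the stock hyperparameters
    # edit data/hyps/hyp.overlap.yaml, e.g. raise mixup (image blending) from 0.0 to ~0.15
    python train.py --img 640 --batch 16 --epochs 100 --data your_dataset.yaml --weights yolov5s.pt --hyp data/hyps/hyp.overlap.yaml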

Remember, the process of improving detection, especially in challenging situations like overlapping objects, often involves trial and error. Keep experimenting and fine-tuning your approach. Good luck! πŸš€

jjang-yu commented 5 months ago

okayπŸ™†β€β™€οΈ, Thank you for your kind reply. Have a nice day :)

glenn-jocher commented 5 months ago

@jjang-yu you're welcome! 😊 If you have further questions down the road, feel free to ask. Have a wonderful day too! 🌟

github-actions[bot] commented 4 months ago

πŸ‘‹ Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see https://docs.ultralytics.com.

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO πŸš€ and Vision AI ⭐