ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

I wonder if this feature is available or not #12085

Closed anewworl closed 1 year ago

anewworl commented 1 year ago

Search before asking

Question

                    if save_img or save_crop or view_img:  # Add bbox to image
                        c = int(cls)  # integer class
                        label = None if hide_labels else (names[c] if hide_conf else f'{names[c]} {conf:.2f}')
                        annotator.box_label(xyxy, label, color=colors(c, True))
                        # annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)
                    if save_crop:
                        save_one_box(xyxy, imc, file=save_dir / 'crops' / names[c] / f'{p.stem}.jpg', BGR=True)

I wonder if the line annotator.draw.polygon(segments[j], outline=colors(c, True), width=3) from yolov5/segment/predict.py is available or not, because when I test running it with a camera it shows the error below:

  File "D:\HOC_TAP\YASKAWA\YOLO_PyCharm\3.Training\test_seg.py", line 161, in run
    annotator.draw.polygon(segments[j], outline=colors(c, True), width=3)
AttributeError: 'Annotator' object has no attribute 'draw'

If this feature is not available, is there any replacement for this line? Another thing: will this function draw a polygon outline around an object like the picture below, which is an example from YOLOv8?

[example image: YOLOv8 segmentation output with polygon outlines]

Additional

No response

github-actions[bot] commented 1 year ago

👋 Hello @anewworl, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.8.0 with all requirements.txt installed including PyTorch>=1.8. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics

glenn-jocher commented 1 year ago

@anewworl the line annotator.draw.polygon(segments[j], outline=colors(c, True), width=3) in yolov5/segment/predict.py is not currently available. The error you encountered, AttributeError: 'Annotator' object has no attribute 'draw', indicates that the draw attribute does not exist on the Annotator object.

If you are looking for a replacement for this line, you can try using the cv2.polylines() function in OpenCV to draw the polygon outline. Here is an example using the cv2.polylines() function:

import cv2
import numpy as np

# Assuming `segments[j]` is a list of points representing the polygon
# `image` is the image you want to draw on
# `color` is the color of the outline
# `width` is the width of the outline
cv2.polylines(image, [np.array(segments[j], dtype=np.int32)], isClosed=True, color=color, thickness=width)

Regarding the example image you provided from YOLOv8, you can achieve similar results with YOLOv5 by using the cv2.polylines() function as described above. The example image you provided illustrates an object with a polygon outline drawn around it, which can be achieved using the suggested code snippet.

Let me know if you have any further questions or need additional assistance!

anewworl commented 1 year ago

Hm, interesting. Firstly, thanks for your response @glenn-jocher. So as we know, segments[j] here is just a list of points representing one polygon of one object. So how does it work with multiple segments: will [np.array(segments[j], dtype=np.int32)] change, or will it stay the same? Furthermore, I see that above that line is

segments = [
                    scale_segments(im0.shape if retina_masks else im.shape[2:], x, im0.shape, normalize=True)
                    for x in reversed(masks2segments(masks))]

which has normalize=True and therefore returns float values. Do we need to reverse the normalization of this step before we put the segments into cv2.polylines()?

glenn-jocher commented 1 year ago

@anewworl segments[j] represents a list of points for one polygon of an object. If you have multiple polygons or objects, you would have multiple segments lists, where each segments[j] corresponds to a different object.

When using the cv2.polylines() function, the segments[j] list should be converted to a NumPy array of dtype=np.int32. The np.array(segments[j], dtype=np.int32) expression accomplishes this conversion.
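For example, with several detected objects you would simply convert each polygon in turn. This is a sketch with made-up segment data, not the actual predict.py variables:

```python
import numpy as np

# Hypothetical `segments`: one list of (x, y) pixel points per detected object
segments = [
    [(10, 10), (60, 12), (55, 70)],                    # object 0
    [(100, 100), (160, 110), (150, 170), (95, 160)],   # object 1
]

# Convert every polygon (not just one segments[j]) to an int32 array for cv2.polylines
polygons = [np.array(seg, dtype=np.int32) for seg in segments]
# cv2.polylines(im0, polygons, isClosed=True, color=(0, 255, 0), thickness=2)
```

cv2.polylines() accepts a list of point arrays, so all outlines can be drawn in a single call.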

Regarding the normalization, the normalize=True argument in scale_segments() is used to normalize the segment coordinates within the image shape. If you want to reverse this normalization before using the segments in cv2.polylines(), you can modify the code to apply the reverse normalization before drawing the polygon. You can do this by multiplying the segments by the corresponding shape dimension.

For example, if segments is normalized with normalize=True, you can reverse the normalization as follows:

segments = [
    np.multiply(x, [im0.shape[1], im0.shape[0]])  # Reverse normalization
    for x in segments
]

This will multiply each coordinate in segments by the width and height of im0 to obtain the original pixel values.
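A tiny numeric check of this denormalization step (the image shape here is invented for illustration):

```python
import numpy as np

h, w = 480, 640                                    # hypothetical im0.shape[:2]
segments = [np.array([[0.5, 0.5], [0.25, 0.75]])]  # normalized (x, y) coordinates

# Reverse normalization: multiply x by the image width and y by the image height
segments = [np.multiply(x, [w, h]) for x in segments]
# x scaled by 640, y by 480: (0.5, 0.5) -> (320, 240), (0.25, 0.75) -> (160, 360)
```

The result is back in pixel coordinates and, after conversion to int32, ready for cv2.polylines().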

I hope this explanation clarifies your question. Let me know if there's anything else I can assist you with!

anewworl commented 1 year ago

@glenn-jocher thanks for your advice, it helped me a lot.

glenn-jocher commented 1 year ago

@anewworl, you're welcome! I'm glad I could help. If you have any more questions or need further assistance, feel free to ask. Have a great day!