ultralytics / yolov5

YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
https://docs.ultralytics.com
GNU Affero General Public License v3.0

yolov5 can crop like instance segmentation? #11923

Closed guscldns closed 1 year ago

guscldns commented 1 year ago

Search before asking

Description

YOLOv5 can crop bounding boxes, but I want to crop only the segmented part. Is that possible?

Use case

I want to crop only the segmented part. Is that possible?

Additional

No response

Are you willing to submit a PR?

github-actions[bot] commented 1 year ago

👋 Hello @guscldns, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide a minimum reproducible example to help us debug it.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset image examples and training logs, and verify you are following our Tips for Best Training Results.

Requirements

Python>=3.7.0 with all requirements.txt dependencies installed, including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

YOLOv5 CI

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training, validation, inference, export and benchmarks on macOS, Windows, and Ubuntu every 24 hours and on every commit.

Introducing YOLOv8 🚀

We're excited to announce the launch of our latest state-of-the-art (SOTA) object detection model for 2023 - YOLOv8 🚀!

Designed to be fast, accurate, and easy to use, YOLOv8 is an ideal choice for a wide range of object detection, image segmentation and image classification tasks. With YOLOv8, you'll be able to quickly and accurately detect objects in real-time, streamline your workflows, and achieve new levels of accuracy in your projects.

Check out our YOLOv8 Docs for details and get started with:

pip install ultralytics
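As a minimal sketch of the Python API (the weights name and image path below are placeholders), a YOLOv8 segmentation model can be loaded and run with:

from ultralytics import YOLO

model = YOLO('yolov8n-seg.pt')  # pretrained segmentation weights (placeholder name)
results = model('image.jpg')    # run inference; each result exposes detection boxes and segmentation masks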
glenn-jocher commented 1 year ago

@guscldns yes, YOLOv5 can crop bounding boxes to extract regions of interest in an image. However, it does not directly support instance segmentation for cropping only the segment part.
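As a rough sketch of bounding-box cropping (using the standard torch.hub interface; 'image.jpg' is a placeholder path):

import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s')  # load a pretrained detection model
results = model('image.jpg')                             # run inference on an image
results.crop(save=True)                                  # save one crop per detected box under runs/detect/exp/crops/

The same crops can also be produced from the command line by passing --save-crop to detect.py.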

If you're interested in instance segmentation, I recommend checking out other models specifically designed for that task, such as Mask R-CNN or YOLACT. These models can effectively segment objects in images and provide accurate crops of the segmented parts.

Please let me know if you have any further questions or need any assistance with YOLOv5.

Norwalker commented 1 year ago

I have solved it.

glenn-jocher commented 1 year ago

@Norwalker hi there!

I'm glad to hear that you were able to solve the issue you were facing with YOLOv5. If you have any further questions or need any assistance, feel free to ask. Our community is always here to help.

Best regards.

Norwalker commented 1 year ago

For the segmentation task, the official code's results overlay the masks on the original image, but sometimes we just need the mask without the original image. In the masks function of the ultralytics.utils.plotting.Annotator class, add:

gray = np.zeros((im_gpu.shape[0], im_gpu.shape[1], 3), dtype=np.uint8)  # all-zero (black) background image
gray = torch.tensor(gray, dtype=torch.float32, requires_grad=False)     # convert it to a float tensor
im_gpu = gray.cuda() * inv_alph_masks[-1] + mcs                         # composite the colored masks over black instead of the original image

This will return the masks without the original image. Then run:

import cv2

originalpath = cv2.imread(originalpath)                             # load the original image
maskpath = cv2.imread("maskpath", cv2.IMREAD_GRAYSCALE)             # load the saved mask in grayscale ("maskpath" is a placeholder)
image = cv2.bitwise_and(originalpath, originalpath, mask=maskpath)  # keep only the pixels where the mask is non-zero

glenn-jocher commented 1 year ago

@Norwalker the suggested code modification in the Annotator class will indeed return a mask without the original image. However, please note that the bitwise_and operation you provided for applying the mask to the original image may not work as expected.

Here is the corrected code:

originalpath = cv2.imread(originalpath)
maskpath = cv2.imread("maskpath", cv2.IMREAD_GRAYSCALE)

# Create a 3-channel mask with the same dimensions as the original image
mask = cv2.merge([maskpath, maskpath, maskpath])

# Apply the mask to the original image using the bitwise_and operation
result = cv2.bitwise_and(originalpath, mask)

With this code, the bitwise_and operation will correctly apply the 3-channel mask to the original image. Please note that the maskpath variable should be the file path or image array representing the grayscale mask.
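To get back to the original question of cropping only the segmented part, one possible extra step (a sketch building on the snippet above, with a placeholder output path) is to crop the masked result to the mask's bounding rectangle:

x, y, w, h = cv2.boundingRect(maskpath)        # tight box around the non-zero mask pixels
segment_crop = result[y:y + h, x:x + w]        # masked object only, cropped to its extent
cv2.imwrite('segment_crop.jpg', segment_crop)  # placeholder output path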

If you have any further questions or need additional assistance, please feel free to ask.

github-actions[bot] commented 1 year ago

👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.

For additional resources and information, please see the links below:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLO 🚀 and Vision AI ⭐