dbolya / yolact

A simple, fully convolutional model for real-time instance segmentation.
MIT License

How to get mask? #642

Open Wickked-swag opened 3 years ago

Wickked-swag commented 3 years ago

I just want to get a picture of the mask, like an image where the foreground is black and the background is white. Can anybody tell me how to do it? Thanks very much!

kidpaul94 commented 3 years ago

https://github.com/dbolya/yolact/blob/57b8f2d95e62e2e649b382f516ab41f949b57239/eval.py#L159 You can export the masks from here (after converting the tensor to a NumPy array). For example, you can do the following: np.save('mask_data', masks.cpu().numpy())
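For reference, a minimal sketch of that (it assumes masks at the linked line is the per-instance torch tensor of shape (num_instances, h, w); variable names may differ between commits):

    # Sketch only: placed next to the linked line inside prep_display() in eval.py.
    # `masks` is assumed to be a torch tensor of shape (num_instances, h, w)
    # holding 0/1 values, one channel per detected instance.
    import numpy as np

    mask_np = masks.cpu().numpy()   # move to CPU and convert to a NumPy array
    np.save('mask_data', mask_np)   # written to disk as mask_data.npy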

7eta commented 3 years ago

@kidpaul94 Thanks! Do you know how to convert the mask_data NumPy array to segmentation data ([x, y, x, y, x, y, ...])?

kidpaul94 commented 3 years ago

The mask array has 3 dimensions (instance, y, x). So, if you want to pull out the 1st instance, for example:

    mask = np.load('./mask_data.npy')
    solutions = np.argwhere(mask[0] != 0)

This will give you the segmentation data.
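One caveat worth adding: np.argwhere returns (row, col) pairs, i.e. (y, x), so swap the columns if you need [x, y, x, y, ...]; and if you want a COCO-style polygon outline rather than every foreground pixel, you can trace the contour instead. A sketch of both (the cv2 part is an assumption on my side, not something from the repo):

    import cv2
    import numpy as np

    mask = np.load('./mask_data.npy')             # (num_instances, h, w)
    inst = (mask[0] != 0).astype(np.uint8)        # binary mask of the 1st instance

    # All foreground pixels as a flat [x, y, x, y, ...] list
    yx = np.argwhere(inst)                        # rows are (y, x)
    xy_flat = yx[:, ::-1].reshape(-1).tolist()    # swap to (x, y), then flatten

    # COCO-style polygon outline instead of every pixel
    contours, _ = cv2.findContours(inst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    polygons = [c.reshape(-1).tolist() for c in contours]   # already [x, y, x, y, ...]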

7eta commented 3 years ago

@kidpaul94 Thanks!! I will try it soon.

MiguelAngeloMartinsRibeiro commented 3 years ago

@kidpaul94 I tried it and it gave me an npy file like it was supposed to. So I'm running a whole folder of images, and my ultimate goal is to get the segmentation masks in PNG format so that I can compare them with the background truth ones. Do you know how to do it?

kidpaul94 commented 3 years ago

I'm not exactly sure what you mean by "background truth". Are you talking about image input without any mask?

MiguelAngeloMartinsRibeiro commented 3 years ago

I'm sorry @kidpaul94, I'll try to explain it better. My goal is to compare a PNG segmentation mask obtained from the output of YOLACT with the ground truth mask (we can call it the correct segmentation mask) that I used to make the COCO annotations. I want to compare them to compute metrics like IoU, etc.

So I'm testing YOLACT on a folder of 200 images, and I'm getting segmentation results like this:

(attached screenshot: Captura de ecrã 2021-08-09, às 01 58 25)

But I would like to get them like this:

(attached image: Image01921_boat_0)

kidpaul94 commented 3 years ago

The npy file contains the pixel locations of the object you detected in the 640x480 image. Can't you just generate a 640x480 black background and set the pixels listed in the npy file to white to get that specific format?
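Something like this should do it (a sketch under those assumptions: one mask_data.npy per image with shape (num_instances, h, w); invert the values if you want a black foreground on a white background instead):

    import cv2
    import numpy as np

    masks = np.load('./mask_data.npy')         # (num_instances, h, w)
    h, w = masks.shape[1:]

    canvas = np.zeros((h, w), dtype=np.uint8)  # black background
    for inst in masks:
        canvas[inst != 0] = 255                # detected pixels become white

    cv2.imwrite('mask.png', canvas)            # e.g. a 640x480 binary PNG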

thierrydecae commented 3 years ago

Hi Miguel, first of all, many thanks to you and kidpaul94; I found your comments useful. I'm trying to do something which I believe is similar to what you are describing: detect objects and extract them onto a white background using the mask. Did you manage to do what you were trying to do? Best regards

MiguelAngeloMartinsRibeiro commented 2 years ago

Yes, I was able to. Do you still need it?

bharathraja commented 2 years ago

Can you post your solution, @MiguelAngeloMartinsRibeiro?

For me, commenting out the loading of the image onto the GPU and initializing an empty image instead gives the colored mask output without the original background image:

    if undo_transform:
        img_numpy = undo_image_transformation(img, w, h)
        # img_gpu = torch.Tensor(img_numpy).cuda()
        img_gpu = torch.zeros(img.shape).cuda()  # initialize the background as black
    else:
        # img_gpu = img / 255.0
        img_gpu = torch.zeros(img.shape).cuda()  # initialize the background as black
        h, w, _ = img.shape

MiguelAngeloMartinsRibeiro commented 2 years ago

@bharathraja I use a modified eval.py script that I found somewhere, don't really know where anymore

You can find it here: https://github.com/MiguelAngeloMartinsRibeiro/YolactMiguel

zzl-gray commented 2 years ago

Hello Miguel! I'm trying to do something which is similar to what you are describing. May I refer to your solution? I'd appreciate your help very much!

MiguelAngeloMartinsRibeiro commented 2 years ago

My solution is in the GitHub link above your answer

MiguelAngeloMartinsRibeiro commented 2 years ago

With that new eval.py, you just need to run it the way dbolya describes (https://github.com/MiguelAngeloMartinsRibeiro/YolactMiguel):

    python eval.py --trained_model=weights/yolact_base_54_800000.pth --score_threshold=0.15 --top_k=15 --images=path/to/input/folder:path/to/output/folder

and you'll get a folder with the masks

From there you just need to compute the IoU by comparing the masks with the GT
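For reference, a minimal IoU sketch (assuming the predicted and ground-truth masks are same-sized binary PNGs with a white foreground; the file names are placeholders):

    import cv2
    import numpy as np

    pred = cv2.imread('pred_mask.png', cv2.IMREAD_GRAYSCALE) > 0
    gt   = cv2.imread('gt_mask.png',   cv2.IMREAD_GRAYSCALE) > 0

    intersection = np.logical_and(pred, gt).sum()
    union        = np.logical_or(pred, gt).sum()
    iou = intersection / union if union > 0 else 0.0
    print(f'IoU: {iou:.4f}')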

Good luck @zzl-gray

wafaer commented 2 years ago

I want to output a segmentation image in which the person class is white and all other classes are black. How can I do that?
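One way to do that (a sketch only, assuming you grab classes and masks from postprocess() inside prep_display() the way eval.py already does, and that the class names come from cfg.dataset.class_names):

    # Sketch, placed inside prep_display() after postprocess() has run.
    # `classes` is assumed to be the per-instance class-index array and
    # `masks` the (num_instances, h, w) mask tensor, as in eval.py.
    import cv2
    import numpy as np
    from data import cfg

    masks_np = masks.cpu().numpy()
    canvas = np.zeros((h, w), dtype=np.uint8)        # everything starts black
    for cls_idx, inst in zip(classes, masks_np):
        if cfg.dataset.class_names[cls_idx] == 'person':
            canvas[inst != 0] = 255                  # person pixels become white
    cv2.imwrite('person_mask.png', canvas)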

hafizur-r commented 2 years ago

@MiguelAngeloMartinsRibeiro Is it possible to get each mask of the detected objects separately, at the original image size? I am trying to extract each object's length, width, and mask area. If I can get each mask as a color or binary image, I can pull that data.
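In case it helps, a small sketch of pulling those numbers out of the saved masks (assuming mask_data.npy already holds the masks at the original image resolution, shape (num_instances, h, w)):

    import numpy as np

    masks = np.load('./mask_data.npy')     # (num_instances, h, w)
    for i, inst in enumerate(masks):
        ys, xs = np.nonzero(inst)          # pixel coordinates of instance i
        if len(xs) == 0:
            continue                       # skip empty masks
        width  = xs.max() - xs.min() + 1   # horizontal extent in pixels
        height = ys.max() - ys.min() + 1   # vertical extent in pixels
        area   = len(xs)                   # number of mask pixels
        print(f'instance {i}: width={width} px, height={height} px, area={area} px')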

hafizur-r commented 2 years ago

@MiguelAngeloMartinsRibeiro, your solution link is not working anymore. Can you please share your solution again? TIA

MiguelAngeloMartinsRibeiro commented 2 years ago

@hafizur-r https://github.com/MiguelAngeloMartinsRibeiro/Real-Time-Ship-Instance-Segmentation-with-3D-fully-connected-CRF-for-Maritime-Surveillance/blob/main/Yolact%2B%2B/eval_images_yolact.py see if this one is working

hafizur-r commented 2 years ago

@MiguelAngeloMartinsRibeiro Thank you! I got it from another post.

BinZhou-23 commented 1 year ago

Hi, I tried to get the mask with this new eval.py, but I don't get anything after running it. Has anyone had the same problem? Or is there any other way to get the mask?