open-mmlab / mmdetection

OpenMMLab Detection Toolbox and Benchmark
https://mmdetection.readthedocs.io
Apache License 2.0

When I use Mask R-CNN for instance segmentation, the masks of targets belonging to the same class are not distinguished by color in the test results. How can I give different instances of the same class different colors? #9489

Open neverstoplearn opened 1 year ago

RangiLyu commented 1 year ago

This feature has been supported in #8988. You can try the latest dev-3.x branch.
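For reference, here is a minimal sketch (not from the original reply) of running inference and visualization on dev-3.x; the config and checkpoint paths are placeholders, so adjust them to your own setup:

from mmdet.apis import init_detector, inference_detector
from mmdet.registry import VISUALIZERS
import mmcv

config_file = 'configs/mask_rcnn/mask-rcnn_r50_fpn_1x_coco.py'  # placeholder config path
checkpoint_file = 'work_dirs/mask_rcnn/epoch_12.pth'            # placeholder checkpoint path

model = init_detector(config_file, checkpoint_file, device='cuda:0')
image = mmcv.imread('demo/demo.jpg', channel_order='rgb')
result = inference_detector(model, image)

# build the visualizer defined in the config and attach the dataset metadata
# so that class names and palettes resolve correctly
visualizer = VISUALIZERS.build(model.cfg.visualizer)
visualizer.dataset_meta = model.dataset_meta

# draw the predicted instances on the image and show the result
visualizer.add_datasample(
    'result',
    image,
    data_sample=result,
    draw_gt=False,
    show=True)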

neverstoplearn commented 1 year ago

This feature has been supported in #8988. You can try the latest dev-3.x branch.

So the old model I trained on the master branch cannot be used in the dev-3.x branch?

got "File "/home/xxx/mmdetection/demo/image_demo.py", line 98, in main(args) File "/home/xxx/mmdetection/demo/image_demo.py", line 46, in main visualizer = VISUALIZERS.build(model.cfg.visualizer) File "/home/xxx/anaconda3/envs/torch110/lib/python3.9/site-packages/mmengine/config/config.py", line 808, in getattr return getattr(self._cfg_dict, name) File "/home/xxx/anaconda3/envs/torch110/lib/python3.9/site-packages/mmengine/config/config.py", line 53, in getattr raise AttributeError(f"'{self.class.name}' object has no " AttributeError: 'ConfigDict' object has no attribute 'visualizer' " error. how can I fix it? trained the model again. or anyway? like change some code in main branch? thanks.

ZwwWayne commented 1 year ago

Yes, you cannot use it in the master branch. You can try to copy the code.
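For context (not part of the original reply): the AttributeError above occurs because old master-branch configs have no visualizer entry, which the dev-3.x image_demo.py tries to build. In dev-3.x that entry comes from configs/_base_/default_runtime.py and looks like this:

vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='DetLocalVisualizer',
    vis_backends=vis_backends,
    name='visualizer')

Note that adding this entry alone does not make a master-branch config and checkpoint fully compatible with dev-3.x, since many other config fields changed between the two versions.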

MjdMahasneh commented 1 year ago

I don't know if I understand the issue correctly, but you can control the colors however you want, e.g.:

from mmengine.visualization import Visualizer
import mmcv
from mmdet.apis import init_detector, inference_detector
import glob
import numpy as np

image = mmcv.imread('./ballondatasets/balloon/train/120853323_d4788431b9_b.jpg', channel_order='rgb')

cfg = './work_dir/your_config.py'  # path to the training config (placeholder, adjust to your setup)
checkpoint_file = glob.glob('./work_dir/epoch_50.pth')[0]

## get results from the model
model = init_detector(cfg, checkpoint_file, device='cuda:0')
new_result = inference_detector(model, image)
# print('new_result', new_result)

## extract boxes, masks, scores, and labels:
pred_bboxes = new_result.pred_instances.bboxes
pred_labels = new_result.pred_instances.labels
pred_scores = new_result.pred_instances.scores
pred_masks = new_result.pred_instances.masks

## let's filter them using a confidence threshold
confidence_threshold = 0.9
# Move tensors to CPU and convert to numpy
pred_scores_np = pred_scores.cpu().numpy()
# Identify the indices that satisfy the threshold
filtered_indices = np.where(pred_scores_np > confidence_threshold)[0]
# Use these indices to filter the predictions
filtered_bboxes = pred_bboxes[filtered_indices].cpu().numpy()
filtered_labels = pred_labels[filtered_indices].cpu().numpy()
filtered_scores = pred_scores_np[filtered_indices]
filtered_masks = pred_masks[filtered_indices].cpu().numpy()

visualizer = Visualizer(image=image)

# draw the filtered bboxes (each row is a single bbox in [x1, y1, x2, y2] format)
visualizer.draw_bboxes(filtered_bboxes, edge_colors='r',
                       line_widths=3, line_styles='--')
## you can also draw boxes directly from a tensor (requires `import torch`), e.g.:
# visualizer.draw_bboxes(torch.tensor([[33, 120, 209, 220], [72, 13, 179, 147]]))

# visualizer.draw_binary_masks(new_result.pred_instances.masks) ## this also works
visualizer.draw_binary_masks(filtered_masks, colors=(255, 150, 50), alphas=0.35)

visualizer.show()
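As a follow-up sketch (not part of the original comment) addressing the original question: both draw_bboxes and draw_binary_masks accept a list with one color per element (per their docstrings), so you can give each instance of the same class its own color, e.g. a random color per detected mask. This assumes the filtered_* variables from the snippet above:

import random

# one random RGB color per detected instance
instance_colors = [tuple(random.randint(0, 255) for _ in range(3))
                   for _ in range(len(filtered_masks))]

visualizer.set_image(image)  # reset the drawing canvas before redrawing
visualizer.draw_bboxes(filtered_bboxes, edge_colors=instance_colors,
                       line_widths=3, line_styles='--')
visualizer.draw_binary_masks(filtered_masks, colors=instance_colors, alphas=0.35)
visualizer.show()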