This pull request adds a new node for universal segmentation models (MaskFormer, Mask2Former and OneFormer) to MLOPs. The node accepts a colored point network (or an image file path) and has three outputs: a colored mask point network of the objects detected in the image, a corresponding primitive network, and the original image. All three outputs carry id, label and score attributes for the detected objects.
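For reference, here is a minimal sketch of the kind of inference such a node wraps, using the Hugging Face `transformers` universal-segmentation API. The checkpoint name and the choice of panoptic post-processing are illustrative assumptions, not necessarily what the node ships with:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# Illustrative checkpoint; the node may expose a different default model.
CHECKPOINT = "facebook/mask2former-swin-tiny-coco-panoptic"

processor = AutoImageProcessor.from_pretrained(CHECKPOINT)
model = Mask2FormerForUniversalSegmentation.from_pretrained(CHECKPOINT)

image = Image.open("input.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Panoptic post-processing yields a per-pixel segment id map plus
# per-segment metadata (id, label_id, score), which maps naturally
# onto the id/label/score point attributes described above.
result = processor.post_process_panoptic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]

segmentation = result["segmentation"]  # (H, W) tensor of segment ids
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(segment["id"], label, segment["score"])
```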
A complementary node, "MLOPs image segment", can extract detected objects by ID from either point output of the Universal Segmentation node. It has two outputs: one translated, the other positioned at the origin (0,0,0). A rough sketch of this extraction step follows below.
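As a rough illustration of what the extraction amounts to in a Houdini Python SOP (the `id` attribute name follows the description above; the parameter name and the actual HDA internals are assumptions), filtering the segmentation points by ID and recentering one copy at the origin could look like:

```python
# Python SOP sketch: keep only points whose "id" attribute matches target_id,
# then (optionally) recenter the surviving points at the origin.
import hou

node = hou.pwd()
geo = node.geometry()
target_id = node.evalParm("target_id")  # hypothetical parameter name

# Drop points that belong to other segments.
to_delete = [pt for pt in geo.points() if pt.attribValue("id") != target_id]
geo.deletePoints(to_delete)

# Recenter the remaining points at 0,0,0.
# The "translated" output would skip this step and keep the image-space position.
if len(geo.points()) > 0:
    center = geo.boundingBox().center()
    for pt in geo.points():
        pt.setPosition(pt.position() - center)
```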
The included example HIP file shows how to use both nodes, including how to chain the results from one model into another to refine the extraction.