openvinotoolkit / open_model_zoo

Pre-trained Deep Learning models and demos (high quality and extremely fast)
https://docs.openvino.ai/latest/model_zoo.html
Apache License 2.0

Facing Unsupported Layer issues in Facial Landmark Detection with 98-landmark model #3499

Open joviyal-arun opened 2 years ago

joviyal-arun commented 2 years ago

Hi. I am doing facial landmark detection using the facial-landmarks-98-detection-0001 model with the OpenVINO 2022 toolkit, and I am facing an unsupported-layer issue at this line: unsupported_layers = [l for l in ngraph_func.get_ordered_ops() if l.get_friendly_name() not in supported_layers]. I downloaded the model from https://github.com/openvinotoolkit/open_model_zoo/blob/master/models/intel/facial-landmarks-98-detection-0001/model.yml

eaidova commented 2 years ago

Hi @joviyal-arun, could you provide more details about the problem? Which device do you use for model loading? If possible, could you please provide the full error log and/or the code you use for running?

joviyal-arun commented 2 years ago

Hi, my system configuration is an i3 processor with 8 GB RAM, running Python 3.6.

CODE

import cv2
import numpy as np
from openvino.inference_engine import IENetwork, IECore
import os
import ngraph as ng

class FacialLandmarkDetection:
    ''' Class for the Facial Landmark Detection Model. '''

def __init__(self, model_name, device='CPU', extensions=None):

    model_xml, model_bin = str(model_name),str(os.path.splitext(model_name)[0] + ".bin")
    self.core = IECore()
    self.device=device
    self.facial_landmark_model = self.core.read_network(model=model_xml, weights = model_bin)
    self.input_blob = next(iter(self.facial_landmark_model.input_info))
    print('self.input_blob',self.input_blob)
    self.out_blob = next(iter(self.facial_landmark_model.outputs))
    print('out_blob',self.out_blob)


def load_model(self):

    self.exec_net = self.core.load_network(network=self.facial_landmark_model, device_name="CPU",num_requests=2)

    return self.exec_net

def sync_inference(self, image):
    input_blob = next(iter(self.exec_net.input_info))
    return self.exec_net.infer({input_blob: image})

def async_inference(self, image, request_id=0):
    # create async network
    input_blob = next(iter(self.exec_net.inputs))
    async_net = self.exec_net.start_async(request_id, inputs={input_blob: image})

    # perform async inference
    output_blob = next(iter(async_net.outputs))
    status = async_net.requests[request_id].wait(-1)
    if status == 0:
        result = async_net.requests[request_id].outputs[output_blob]
    return result

def check_model_demo(self):

    supported_layer_map = self.core.query_network(network=self.facial_landmark_model, device_name="CPU")
    supported_layers = supported_layer_map.keys()

    unsupported_layer_exists = False
    network_layers = self.facial_landmark_model.layers.keys()
    for layer in network_layers:
        if layer in supported_layers:
            pass
        else:
            print("[INFO] {} Still Unsupported".format(layer))
            unsupported_layer_exists = True

    if unsupported_layer_exists:
        print("Exiting the program.")
        exit(1)
    else: 
        print("[INFO][Facial Landmark Detection Model] All layers are suported")

def check_model(self):

    if "CPU" in self.device:

        ngraph_func = ng.function_from_cnn(self.facial_landmark_model)

        supported_layers = self.core.query_network(network=self.facial_landmark_model, device_name=self.device)
        print('supported_layers', len(supported_layers))

        unsupported_layers = [l for l in ngraph_func.get_ordered_ops() if l.get_friendly_name() not in supported_layers]
        print('unsupported_layers', len(unsupported_layers))

        print(len(unsupported_layers),'length ----------------')

        if len(unsupported_layers) != 0:
            print("Unsupported layers found: {}".format(unsupported_layers))
            print("Check whether extensions are available to add to IECore.")
            exit(1)

def preprocess_input(self, image):

    # n, c, h, w = self.facial_landmark_model.inputs[self.input_blob].shape
    n, c, h, w = self.facial_landmark_model.input_info[self.input_blob].tensor_desc.dims

    image = cv2.resize(image, (w, h))
    image = image.transpose(2,0,1)
    image = image.reshape(1, *image.shape)
    return image

def preprocess_output(self, image, outputs):

    width = image.shape[1]
    height = image.shape[0]

    # shape (1x70)

    # facial_landmark = outputs['align_fc3'][0]   # for 35-landmark detection
    # facial_landmark = outputs['3851'][0, :, 0, 0]
    facial_landmark = outputs['3851'][0]

    print(type(facial_landmark))
    print(facial_landmark.shape)
    print(facial_landmark.ndim)

    # convert from [x0,y0,x1,y1,...,x34,y34] to [(x0,y0),(x1,y1),...,(x34,y34)] and scale to input size

    j = 0
    landmark_points = []
    for i in range(int(len(facial_landmark)/2)):
        point = (int(facial_landmark[j]*width), int(facial_landmark[j+1]*height))
        landmark_points.append(point)
        j += 2

    left_eye_coord = [(landmark_points[12][0], landmark_points[13][1]), (landmark_points[14][0], landmark_points[0][1]+30)]
    right_eye_coord = [(landmark_points[15][0], landmark_points[16][1]), (landmark_points[17][0], landmark_points[2][1]+30)]

    return landmark_points, left_eye_coord, right_eye_coord

I am getting an unsupported layer length of 512, but I need it to be 0; only then will I get a proper prediction. I don't know whether the error is in my code or in the model's .xml/.bin files.
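For reference, a minimal sketch of the same check written against the OpenVINO 2022 runtime API (openvino.runtime) instead of the deprecated IECore/ngraph pair; the model path and device name below are illustrative assumptions, not taken from the poster's setup:

    from openvino.runtime import Core

    core = Core()
    model = core.read_model(model="facial-landmarks-98-detection-0001.xml")

    # query_model returns a dict mapping op friendly names to the device that can run them
    supported_ops = core.query_model(model=model, device_name="CPU")

    unsupported_ops = [
        op.get_friendly_name()
        for op in model.get_ordered_ops()
        if op.get_friendly_name() not in supported_ops
    ]
    print("unsupported ops:", len(unsupported_ops))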

saurabhmj11 commented 1 year ago

The issue with unsupported layers can occur when the version of the OpenVINO Toolkit you are using is not compatible with the model. The facial-landmarks-98-detection-0001 model is part of the Open Model Zoo, and it is possible that it was created with an earlier version of the OpenVINO Toolkit than the one you are currently using.

To resolve this issue, you can try updating to a more recent version of the OpenVINO Toolkit, as newer versions typically support a wider range of layers. You can also try to convert the model to a format that is compatible with your current version of OpenVINO using the Model Optimizer tool.

Here are the steps to follow:

Download the latest version of the OpenVINO Toolkit from the official website.

Install the OpenVINO Toolkit and ensure that all dependencies are properly installed.

Download the facial-landmarks-98-detection-0001 model from the Open Model Zoo.

Use the Model Optimizer tool to convert the model to a format that is compatible with your version of OpenVINO. Here is an example command for converting the model to the Intermediate Representation (IR) format:

    python <path_to_mo>/mo.py --input_model <path_to_model>/facial-landmarks-98-detection-0001.xml --output_dir <path_to_output>/IR_models/ --data_type FP16

In this command, replace <path_to_model> with the path to the facial-landmarks-98-detection-0001 model and <path_to_output> with the location where you want to save the converted model.

Once the conversion is complete, use the converted model in your facial landmark detection application.
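As a rough sketch of that last step, assuming OpenVINO 2022 (openvino.runtime), the IR file facial-landmarks-98-detection-0001.xml, and an already-cropped face image face.jpg (all illustrative names, not taken from the thread):

    import cv2
    import numpy as np
    from openvino.runtime import Core

    core = Core()
    model = core.read_model(model="facial-landmarks-98-detection-0001.xml")
    compiled_model = core.compile_model(model=model, device_name="CPU")

    # Resize the face crop to the model's NCHW input shape and add a batch dimension
    input_shape = compiled_model.input(0).shape  # e.g. [1, 3, H, W]
    h, w = int(input_shape[2]), int(input_shape[3])
    face = cv2.imread("face.jpg")
    blob = cv2.resize(face, (w, h)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

    request = compiled_model.create_infer_request()
    request.infer({0: blob})
    landmarks = request.get_output_tensor(0).data
    print(landmarks.shape)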

If you still face issues with unsupported layers, you can check the OpenVINO Toolkit documentation to see the list of supported layers for each version and compare it with the list of layers used in the facial-landmarks-98-detection-0001 model.
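One way (a sketch, assuming OpenVINO >= 2022.1) to get the list of op types the IR actually uses, so it can be compared against the supported-layers tables in the documentation:

    from openvino.runtime import Core

    # Collect the distinct operation types present in the IR
    model = Core().read_model(model="facial-landmarks-98-detection-0001.xml")
    op_types = sorted({op.get_type_name() for op in model.get_ordered_ops()})
    print(op_types)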