hollance / MobileNet-CoreML

The MobileNet neural network using Apple's new CoreML framework

Getting the output from intermediate layers in the mobilenetv2 network #11

Open letdivedeep opened 3 years ago

letdivedeep commented 3 years ago

@hollance

I am trying to get the output of an intermediate layer (add_node) and merge it into the existing model outputs (confidence and coordinates).

Experiments:

With this, the Core ML model is created as attached below.

[Screenshot: the generated Core ML model]

But while loading it through Python it gives an error:

RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "Error reading protobuf spec. validator error: Pipeline: Input 'confidence' of model 'CoreML.Specification.ModelDescription' does not match the type previously specified by the pipeline input or the output of a previous model.".


I am assuming this error is prompted because the newly created model does not take its input from the intermediate layer. Can we add a dummy node in the NMS model to bypass this? Any thoughts on this would be helpful, or even on whether this is the correct way of doing it.

I have attached the Core ML model and the Python conversion code used.

Archive.zip

hollance commented 3 years ago

You don't need to make another model. It should be sufficient to just create a new output on the SSD model, then make an output with the same name in the pipeline model. You don't need to pass this output through the other models.
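For anyone else landing here, a minimal sketch of that kind of spec surgery with coremltools might look like the following. The file names, the assumption that the SSD network is the first model in the pipeline, and the FLOAT32 output type are assumptions on my part; the layer name add_node comes from the question above.

```python
import coremltools as ct
from coremltools.proto import FeatureTypes_pb2 as ft

# Load the pipeline spec (file name is hypothetical).
spec = ct.utils.load_spec("MobileNetV2_SSD.mlmodel")

# Assumption: the SSD neural network is the first model in the pipeline,
# followed by the NMS model.
ssd_spec = spec.pipeline.models[0]

# Declare the intermediate layer as an extra output of the SSD model.
# "add_node" must match the output name of a layer inside the network;
# the shape is left unspecified here.
new_output = ssd_spec.description.output.add()
new_output.name = "add_node"
new_output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Declare an output with the same name on the pipeline itself, so it is
# surfaced directly without being routed through the NMS model.
pipeline_output = spec.description.output.add()
pipeline_output.CopyFrom(new_output)

model = ct.models.MLModel(spec)
model.save("MobileNetV2_SSD_with_add_node.mlmodel")
```

After this, predict() should return the extra array alongside confidence and coordinates, which matches the note above that the output does not need to be passed through the other models in the pipeline.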

letdivedeep commented 3 years ago

Kudos @hollance, thanks for your reply.

I tried the above approach and it worked.