hollance / YOLO-CoreML-MPSNNGraph

Tiny YOLO for iOS implemented using CoreML but also using the new MPS graph API.
MIT License

Shape of the converted model is wrong #63

Open devlakshmi opened 4 years ago

devlakshmi commented 4 years ago

@hollance I tried converting the YOLO weights to Core ML following the detailed steps, but the shape of the converted model's output is wrong. Is there a way to resolve this issue? Please explain why it is happening.

hollance commented 4 years ago

I don't think you can use this repo to convert YOLOv3 models.

devlakshmi commented 4 years ago

@hollance I didn't use your repo; I used the Keras YOLOv3 repo for converting. I was following your Core ML iOS app sample when I realized that my converted model is wrong, so I thought I'd raise a query to ask whether you have an idea why this issue occurred, or whether you have faced it yourself.

hollance commented 4 years ago

I don't know, but sometimes the conversion to Core ML does not fill in those output shapes correctly. What is the shape of these outputs when you actually run the model, i.e. what does print(output1.shape) say?

devlakshmi commented 4 years ago

@hollance

[Screenshot 2020-02-23 at 8 06 19 PM]

hollance commented 4 years ago

That is not what I asked for (it's the same information as shown in Xcode). Instead, I'd need to see the shape of the output when you actually run the model on an image (either in Python or in iOS / macOS).
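A minimal way to do that check in Python, assuming coremltools is installed; the model path, input name, and output name below are placeholders, not from this thread:

```python
import numpy as np

def runtime_output_shapes(model, inputs):
    """Run a Core ML model once and report the shape of every output.

    `model` is expected to be a coremltools MLModel, e.g.
        model = coremltools.models.MLModel("yolo.mlmodel")   # path is an assumption
    `inputs` is the feature dict the model expects, e.g. {"image": pil_image}.
    """
    outputs = model.predict(inputs)
    return {name: np.asarray(value).shape for name, value in outputs.items()}

# Usage sketch: print(runtime_output_shapes(model, {"image": img}))
```

The shapes returned here are the real runtime shapes, which may differ from the (possibly wrong) shapes declared in the mlmodel file and shown by Xcode.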

devlakshmi commented 4 years ago

[Screenshot 2020-02-23 at 8 13 23 PM]
hollance commented 4 years ago

That's what I thought: the shapes are correct. It's just that the converter didn't fill them in right in the mlmodel file. You should be able to use the model as it is.

devlakshmi commented 4 years ago

Ok thank you

devlakshmi commented 4 years ago

@hollance There is still an issue. I was comparing against the sample Core ML model converted from the YOLO weights; its output shape is as follows:

[Screenshot 2020-02-25 at 3 23 24 PM]

but my converted model's output shape is:

[Screenshot 2020-02-23 at 8 13 23 PM]

This is causing an issue while computing the bounding boxes.

hollance commented 4 years ago

Ah yes, that makes sense.

You can fix this by adding the exact output shape to the mlmodel file (you can read how to do this in my ebook Core ML Survival Guide).
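A hedged sketch of that first option, using the coremltools spec/protobuf API; the file names, output name, and shape below are placeholders, not confirmed by this thread:

```python
def set_output_shape(spec, output_name, shape):
    """Overwrite the declared shape of one output in a Core ML model spec.

    Operates on the protobuf spec object returned by coremltools.utils.load_spec.
    """
    for out in spec.description.output:
        if out.name == output_name:
            del out.type.multiArrayType.shape[:]        # clear the old (wrong) shape
            out.type.multiArrayType.shape.extend(shape)  # write the correct one
            return
    raise KeyError("no output named %r" % output_name)

# Usage sketch (names are assumptions):
#   import coremltools
#   spec = coremltools.utils.load_spec("yolo.mlmodel")
#   set_output_shape(spec, "output1", [21, 13, 13])
#   coremltools.utils.save_spec(spec, "yolo_fixed.mlmodel")
```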

Or you can change the stride indices in these lines (note that I've already made the change below):

    let channelStride = features.strides[2].intValue
    let yStride = features.strides[3].intValue
    let xStride = features.strides[4].intValue

Also make sure you change these:

    let boxesPerCell = 3
    let numClasses = 2

I think that's the configuration you're using because 3*(2 + 5) = 21.
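The stride-index change can be sanity-checked with numpy. The sketch below uses 21 channels to match the 3*(2 + 5) configuration above (the same reasoning applies to 255 channels); numpy reports strides in bytes while MLMultiArray reports them in elements, so we divide by the item size:

```python
import numpy as np

# 5-D layout as reported by the converted model (1 x 1 x C x 13 x 13)
five_d = np.zeros((1, 1, 21, 13, 13), dtype=np.float32)
# 3-D layout as reported by the reference model (C x 13 x 13)
three_d = np.zeros((21, 13, 13), dtype=np.float32)

def elem_strides(a):
    """Strides in elements, analogous to MLMultiArray.strides."""
    return [s // a.itemsize for s in a.strides]

print(elem_strides(five_d))   # [3549, 3549, 169, 13, 1]
print(elem_strides(three_d))  # [169, 13, 1]
# The channel/y/x strides sit at indices 2/3/4 in the 5-D layout but at
# 0/1/2 in the 3-D layout, which is why the Swift code above indexes
# strides[2], strides[3], strides[4].
```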

NigelJu commented 4 years ago

Hi, I've encountered the same question about the different output shape. I tried to fix it using your code, but in vain. I use the same image as input. Printing the first set of raw values gives:

    tx, ty, tw, th, tc
    0.5126953   1.0888672  0.51464844  -0.5336914   -15.3203125   // in your original project
    -17.953125  4.1484375  4.28125     -11.8203125  -403.0        // my mlmodel (output1 is 1x1x255x13x13, with the updated stride code)

The code doesn't crash, but the predictions are wrong (wrong boxRect, wrong label).

Problem solved.

I forgot image_scale=1/255. when I converted the h5 file to the mlmodel.

So:

1: convert with the scale included:

    mlmodel = coremltools.converters.keras.convert(
        YOUR_h5_FILE_PATH,
        input_names='image',
        image_input_names='image',
        input_name_shape_dict={'image': [None, 416, 416, 3]},
        image_scale=1/255.)

2: update the computeBoundingBoxes function in YOLO.swift:

    let channelStride = features.strides[2].intValue
    let yStride = features.strides[3].intValue
    let xStride = features.strides[4].intValue

With both changes, the project runs correctly.
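For reference, image_scale=1/255. bakes a preprocessing step into the converted model that rescales input pixels from [0, 255] to [0, 1], the range the network was trained on. A quick numpy sketch of the same rescale:

```python
import numpy as np

pixels = np.array([0, 128, 255], dtype=np.uint8)    # raw 8-bit pixel values
scaled = pixels.astype(np.float32) * (1.0 / 255.0)  # what image_scale=1/255. does
print(scaled.min(), scaled.max())                   # 0.0 1.0
# Without this scale the network sees inputs ~255x larger than during training,
# which explains the wildly off tx/ty/tw/th values reported earlier in the thread.
```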