ryouchinsa / Rectlabel-support

RectLabel is an offline image annotation tool for object detection and segmentation.
https://rectlabel.com

How can I use Load Core ML model #169

Closed caichunjian520 closed 4 years ago

caichunjian520 commented 4 years ago

My mlmodel was converted from YOLOv3. When I select Load Core ML model, an alert pops up saying "The Yolov3 network from the paper 'YOLOv3: An Incremental Improvement'", and I don't know what that means. From the instructions I know that using this function may remove all annotations, so I moved all unannotated images to a new folder and ran Process all images with Core ML. The app processes all images from the first to the last, but when I open the annotation folder, it is still empty.

I also suggest popping up an alert whenever a process will overwrite the current annotation files (for example, resizing all images or processing all images with Core ML). If someone tries these features without reading the instructions, they could destroy several months of work.

By the way, RectLabel is awesome; it saves me a lot of time. I have been using it for 3 weeks (210 hours) and have labeled 5500 images. I will keep using this app for the next five months.

ryouchinsa commented 4 years ago

Thanks for writing the issue.

when I select Load Core ML model, an alert pops up

We show the description stored in the metadata of the Core ML model. https://rectlabel.com/help#load_coreml

Screenshot 2020-07-07 16 12 14

I know using this function may remove all annotations.

If your images folder has annotations, we back up those annotations to another folder. https://rectlabel.com/help#process_all_images_coreml

Screenshot 2020-07-07 16 12 22

My mlmodel was converted from YOLOv3

How did you convert from YoloV3?

We assume that the output layer of the Core ML model has a coordinates array and a confidence array. https://rectlabel.com/help#process_image_coreml

If you trained on Turi Create or Create ML, you can use the Core ML model as is in RectLabel.

// The Vision results are expected to contain two feature values:
// results[0] holds the bounding box coordinates, results[1] the class confidences.
VNCoreMLFeatureValueObservation *coordinatesObservation = results[0];
VNCoreMLFeatureValueObservation *confidenceObservation = results[1];
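For context, here is a minimal sketch (not RectLabel's actual code) of how those two observations could be decoded, assuming the Turi Create / YOLO output convention of an N x 4 coordinates array (normalized x, y, w, h) and an N x numClasses confidence array; the function name `decodeObservations` is illustrative:

```objc
#import <Vision/Vision.h>

// Illustrative sketch: decode the two MLMultiArrays from a VNCoreMLRequest,
// assuming coordinates is N x 4 and confidence is N x numClasses.
static void decodeObservations(NSArray<VNCoreMLFeatureValueObservation *> *results) {
    VNCoreMLFeatureValueObservation *coordinatesObservation = results[0];
    VNCoreMLFeatureValueObservation *confidenceObservation = results[1];
    MLMultiArray *coordinates = coordinatesObservation.featureValue.multiArrayValue;
    MLMultiArray *confidence = confidenceObservation.featureValue.multiArrayValue;
    NSInteger numBoxes = confidence.shape[0].integerValue;
    NSInteger numClasses = confidence.shape[1].integerValue;
    for (NSInteger i = 0; i < numBoxes; i++) {
        // Pick the class with the highest confidence for box i.
        NSInteger bestClass = 0;
        double bestScore = 0;
        for (NSInteger c = 0; c < numClasses; c++) {
            double score = confidence[i * numClasses + c].doubleValue;
            if (score > bestScore) { bestScore = score; bestClass = c; }
        }
        double x = coordinates[i * 4 + 0].doubleValue;
        double y = coordinates[i * 4 + 1].doubleValue;
        double w = coordinates[i * 4 + 2].doubleValue;
        double h = coordinates[i * 4 + 3].doubleValue;
        // A real implementation would threshold bestScore and convert the
        // normalized box to image coordinates before writing an annotation.
        NSLog(@"box %ld: class %ld score %.2f (%.2f, %.2f, %.2f, %.2f)",
              (long)i, (long)bestClass, bestScore, x, y, w, h);
    }
}
```

If a converted model's output does not match this shape (for example, it emits raw YOLO grid tensors instead of decoded boxes), the app would find no detections, which could explain an empty annotation folder.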

The app processes all images from the first to the last, but when I open the annotation folder, it is still empty.

If you could send the .mlmodel to support@rectlabel.com, we can debug whether the output layer is correct or not.

By the way, RectLabel is awesome, it saves me a lot of time.

Thanks for using RectLabel. We appreciate your feedback.

ryouchinsa commented 4 years ago

We checked that the YOLOv3.mlmodel and YOLOv3Tiny.mlmodel downloaded from the Apple website are working on RectLabel. https://developer.apple.com/machine-learning/models/

caichunjian520 commented 4 years ago

@ryouchinsa Thanks for the reply. I just tested the YOLOv3Tiny.mlmodel from the Apple website and it works fine. I trained my model using darknet, converted it to a .h5 file, and then to an mlmodel. It works well in an iOS project with a live camera. The mlmodel file will be used in my app for sale, so I cannot send it to your email. But I will soon find another free mlmodel on GitHub.

ryouchinsa commented 4 years ago

Thanks for the details.

It works well in an iOS project with a live camera.

You mean this code can run your mlmodel? https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture?language=objc

But I will soon find another free mlmodel on GitHub.

Thank you.

ryouchinsa commented 4 years ago

The new update, version 3.02.5, was released. To show the new update on the Mac App Store, press Command + R to reload.

Screenshot 2020-07-30 6 14 36