vietanhdev / anylabeling

Effortless AI-assisted data labeling with support for YOLO, Segment Anything (SAM + SAM2), and MobileSAM!
https://anylabeling.nrl.ai
GNU General Public License v3.0
2.36k stars · 246 forks

Load Custom Model #39

Open qqqhhh-any opened 1 year ago

qqqhhh-any commented 1 year ago

I have a YOLOv5 model trained on a custom dataset. I want to load it as TorchScript and use it to label the rest of the dataset. It seems that only the standard YOLOv5/v8/SAM models can be loaded right now.

hdnh2006 commented 1 year ago

This is a feature that I'll try to add as soon as I can; however, it is already possible with a few small changes in the code. You must follow these steps:

1) Export the model using YOLOv5 or YOLOv8 repository:

For YOLOv5:

python export.py --weights yourmodel.pt --include onnx --opset 12

For YOLOv8:

yolo export model=yourmodel.pt format=onnx opset=12

It is important to set opset=12 here.
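
Optionally, as a quick sanity check, you can inspect the exported file with a sketch like the one below (assuming the onnxruntime package is installed); the input shape should match the input_width/input_height you will set in step 3:

import onnxruntime as ort  # pip install onnxruntime

# Load the exported model on CPU and print its input/output shapes.
session = ort.InferenceSession("yourmodel.onnx", providers=["CPUExecutionProvider"])
for inp in session.get_inputs():
    print(inp.name, inp.shape)   # e.g. images [1, 3, 640, 640]
for out in session.get_outputs():
    print(out.name, out.shape)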

2) Put yourmodel.onnx in the correct path

Here is an example path on Ubuntu: go to /home/youruser/anylabeling_data/, create a folder named yourmodel_name, and copy yourmodel.onnx into it.
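
In Python, that copy step might look like the following sketch (the folder and file names are the placeholders from above; adjust them to your setup):

from pathlib import Path
import shutil

# Create ~/anylabeling_data/yourmodel_name and place the ONNX file there.
model_dir = Path.home() / "anylabeling_data" / "yourmodel_name"
model_dir.mkdir(parents=True, exist_ok=True)
shutil.copy("yourmodel.onnx", model_dir / "yourmodel.onnx")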

3) Create a yourmodel.yaml file of your model

Go to anylabeling/anylabeling/configs/auto_labeling and adapt the default YOLOv5 or YOLOv8 .yaml file according to your case. Here is an example for a YOLOv5 model:

type: yolov5
name: yourmodel_name
display_name: yourmodel_name
model_path: https://github.com/vietanhdev/anylabeling-assets/releases/download/v0.0.1/yourmodel.onnx
input_width: 640
input_height: 640
score_threshold: 0.5
nms_threshold: 0.45
confidence_threshold: 0.45
classes:
  - your_class1
  - your_class2
  - ...

It is important here to indicate via type whether your model is a YOLOv5 or a YOLOv8 model. In model_path, only the filename at the end of the URL needs to be changed to yourmodel.onnx; since you already copied the file into anylabeling_data, the model is loaded from that local folder.
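
If you want, a quick PyYAML sketch can verify the config file has the expected keys (key names taken from the example above; this is just a sanity check, not something the tool requires):

import yaml  # pip install pyyaml

# Load the config and check the keys used in the example above.
with open("yourmodel.yaml") as f:
    cfg = yaml.safe_load(f)

required = ["type", "name", "display_name", "model_path",
            "input_width", "input_height", "classes"]
missing = [k for k in required if k not in cfg]
assert not missing, f"missing keys: {missing}"
print(f"{cfg['name']}: {len(cfg['classes'])} classes, type={cfg['type']}")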

4) Change the models.yaml file so yourmodel_name will be listed:

Go to anylabeling/anylabeling/configs/auto_labeling, open the models.yaml file, and add your model at the end:

...
- model_name: "yourmodel_name"
  config_file: "yourmodel.yaml"

It is important that model_name here matches the name in yourmodel.yaml, and that config_file points to that same yourmodel.yaml.
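
To double-check the registration, a small sketch like this can list every registered model (assuming models.yaml is a top-level list, as in the snippet above):

import yaml

# Print every registered model and the config file it points to.
with open("anylabeling/anylabeling/configs/auto_labeling/models.yaml") as f:
    models = yaml.safe_load(f)

for entry in models:
    print(entry["model_name"], "->", entry["config_file"])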

Let me know if you need some extra help.

qqqhhh-any commented 1 year ago

I followed the instructions and loaded my model successfully. It works very well, thanks! But we can go further: we could design an interface that guides users through loading their own models and then automatically generates the corresponding YAML files. In addition, I have an idea about integrating the model training process. Usually we train the model on a remote server. Take YOLOv5 for example: its training process is fairly fixed. We could package the labeled dataset, upload it to the server via FTP, then connect to the remote server over SSH and execute a highly templated training command (usually only specifying the dataset, epochs, and training imgsz) to train the model.
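
A minimal sketch of that templated remote-training idea, assuming passwordless SSH to a server with a YOLOv5 checkout (youruser@yourserver, yourdataset.yaml, and the hyperparameters are all placeholders):

import subprocess

# Run YOLOv5's standard training entry point on the remote machine.
cmd = ("cd ~/yolov5 && "
       "python train.py --data yourdataset.yaml --epochs 100 --imgsz 640")
subprocess.run(["ssh", "youruser@yourserver", cmd], check=True)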


ComputerVisionFans commented 1 year ago

As discussed with @vietanhdev, we are trying to load the original SAM ViT-H model (not the quantized version). I tried the easiest approach of just replacing encoder.onnx and decoder.onnx in anylabeling_data/. However, because the SAM ViT-H model also has an additional 2.5 GB encoder weights file (encoder-data.bin), loading fails: the tool still tries to download the encoder and decoder files of the quantized SAM ViT-H version. So do you plan to also integrate this original SAM ViT-H model, the largest one? I'm also really looking forward to a feature for loading customized models in .pth form as well, for example a Faster R-CNN model, so that users can integrate their own models for faster and better labeling!

vietanhdev commented 1 year ago

From AnyLabeling v0.2.22, to load custom models:

MACLKC commented 1 year ago

@vietanhdev Can I ask a question? I can successfully load my models, but I want to add a group ID. How should I do that?

KroitAax commented 1 year ago

(quoting @ComputerVisionFans's comment above about loading the original SAM ViT-H model)

Did you successfully load the model in the end? I have downloaded the latest version of the annotation tool, but I am not sure how to integrate the open-source SAM ViT-H. Can you help me convert the .pth model into an ONNX model the tool can load?

vietanhdev commented 1 year ago

@KroitAax Check this code for converting and loading the model separately: https://github.com/vietanhdev/samexporter. I will integrate it into the tool next week.

Kk875 commented 1 year ago

(quoting @qqqhhh-any's comment above, including its quoted email thread)

Hi, I encountered a problem. I did not follow the steps from this issue when importing my model, because this repo has since been updated. But after the model was imported, the semi-automatic labeling results were very poor. Can you give me some advice? Thank you.

ryouchinsa commented 1 year ago

Thanks for your great tool AnyLabeling.

RectLabel is an offline image annotation tool for object detection and segmentation. Although it is not an open-source program, you can label polygons using Segment Anything models and use your custom YOLOv5/v8 models for auto-labeling polygons.

[image: sam_polygon]

hdnh2006 commented 1 year ago

(quoting @ryouchinsa's comment above)

Sorry, but I think you should avoid advertising on this platform. Please, let's keep this repo free of spam.

ryouchinsa commented 1 year ago

We are sorry that we made you uncomfortable; we do not intend to spam. As developers of an image annotation tool, we can offer support around loading YOLOv5/v8/SAM models and processing images. Our main purpose is not advertising our product.

Abonia1 commented 6 months ago

Hello @hdnh2006, I have followed the steps you mentioned.

I converted the custom model to ONNX and added it at /home/youruser/anylabeling_data/yolov8m-custom-finetune/yolov8m-custom-finetune.onnx.

But in anylabeling/anylabeling/configs/auto_labeling we only see models.yaml, to which I have added:

- name: "yolov8m-custom-finetune"
  display_name: yolov8m-custom-finetune
  download_url: https://github.com/vietanhdev/anylabeling-assets/releases/download/v0.4.0/yolov8m-custom-finetune.zip 

I also added a config.yaml with the ONNX path in the same directory as the ONNX model: /home/youruser/anylabeling_data/yolov8m-custom-finetune/config.yaml

and got the error below in the terminal:

[ERROR:0@119.673] global net_impl.cpp:1169 getLayerShapesRecursively OPENCV/DNN: [Reshape]:(onnx_node!/model.22/dfl/Reshape): getMemoryShapes() throws exception. inputs=1 outputs=1/1 blobs=0
[ERROR:0@119.673] global net_impl.cpp:1172 getLayerShapesRecursively     input[0] = [ 1 64 14742 ]
[ERROR:0@119.673] global net_impl.cpp:1176 getLayerShapesRecursively     output[0] = [ ]
[ERROR:0@119.673] global net_impl.cpp:1182 getLayerShapesRecursively Exception message: OpenCV(4.9.0) /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/layers/reshape_layer.cpp:109: error: (-215:Assertion failed) total(srcShape, srcRange.start, srcRange.end) == maskTotal in function 'computeShapeByReshapeMask'

Error in predict_shapes: OpenCV(4.9.0) /Users/runner/work/opencv-python/opencv-python/opencv/modules/dnn/src/layers/reshape_layer.cpp:109: error: (-215:Assertion failed) total(srcShape, srcRange.start, srcRange.end) == maskTotal in function 'computeShapeByReshapeMask'

Thanks for your help.

raviakash commented 3 weeks ago

Hello guys

I am also facing the same predict_shapes error after loading a model trained on a custom dataset; the following error occurred in the terminal:

[ERROR:0@39.650] global net_impl.cpp:1161 getLayerShapesRecursively OPENCV/DNN: [NaryEltwise]:(onnx_node!/fpn_phase_1/Add_1): getMemoryShapes() throws exception. inputs=2 outputs=0/1 blobs=0
[ERROR:0@39.650] global net_impl.cpp:1167 getLayerShapesRecursively     input[0] = [ 1 256 35 35 ]
[ERROR:0@39.650] global net_impl.cpp:1167 getLayerShapesRecursively     input[1] = [ 1 256 40 40 ]
[ERROR:0@39.650] global net_impl.cpp:1177 getLayerShapesRecursively Exception message: OpenCV(4.7.0) /io/opencv/modules/dnn/src/layers/nary_eltwise_layers.cpp:139: error: (-215:Assertion failed) shape[i] == 1 || outShape[i] == 1 in function 'findCommonShape'

Error in predict_shapes: OpenCV(4.7.0) /io/opencv/modules/dnn/src/layers/nary_eltwise_layers.cpp:139: error: (-215:Assertion failed) shape[i] == 1 || outShape[i] == 1 in function 'findCommonShape'