cfzd / Ultra-Fast-Lane-Detection

Ultra Fast Structure-aware Deep Lane Detection (ECCV 2020)
MIT License

converted Tflite and coreml files are very large #155

Closed rock2021 closed 3 years ago

rock2021 commented 3 years ago

Hi @cfzd,

thanks for the wonderful work

My objective is to run lane detection on edge devices such as Android and iOS with very low latency, and your Ultra-Fast-Lane-Detection approach fits this far better than any segmentation approach. Following the earlier issue #124, I was able to change the backbone to MobileNetV2 and even change the input shape successfully; the model works well.

When converting the model for the edge, I followed this pipeline:

TFLite: pth -> onnx -> keras -> tflite
Core ML: pth -> onnx -> coreml

Libraries used: onnx2keras

A few questions on the edge conversion side:

Note: I have not used any TFLite optimizer yet.

I have attached the respective files.

cfzd commented 3 years ago

@rock2021 The model is saved with this function:

https://github.com/cfzd/Ultra-Fast-Lane-Detection/blob/60f477c7358bbe177e1117b9f229a4a4b0db0e73/utils/common.py#L63-L70

So the state dict also contains the optimizer state, and it can be large if you use an optimizer like Adam. Besides, the auxiliary segmentation branch is not needed during inference, so you can also remove this part:

https://github.com/cfzd/Ultra-Fast-Lane-Detection/blob/60f477c7358bbe177e1117b9f229a4a4b0db0e73/model/model.py#L34-L58
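As a rough sketch of the first suggestion (a toy `nn.Sequential` stands in for the repo's `parsingNet`, and the checkpoint layout mirrors the repo's `{'model': ..., 'optimizer': ...}` dict), stripping the optimizer state before saving looks like:

```python
import torch
import torch.nn as nn

# Toy stand-in for parsingNet; any nn.Module behaves the same way here.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
optimizer = torch.optim.Adam(model.parameters())

# What the repo's save function writes: model weights *plus* optimizer state.
# After training steps, Adam keeps two extra tensors per parameter,
# which can roughly triple the checkpoint size.
full_state = {"model": model.state_dict(), "optimizer": optimizer.state_dict()}

# Inference-only checkpoint: keep only the model weights.
slim_state = {"model": model.state_dict()}
torch.save(slim_state, "model_slim.pth")

# Loading for export/inference then only needs the 'model' entry.
ckpt = torch.load("model_slim.pth")
model.load_state_dict(ckpt["model"])
```

This is only a sketch of the pattern, not the repo's exact code; the point is that dropping the `'optimizer'` entry (and ignoring any aux-branch keys) is enough to shrink the `.pth` file before conversion.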

Finally, if the model is still large, that is expected, because we use an MLP to output the lane coordinates, and the MLP is relatively large compared with the backbone.
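To see why the MLP head can dominate the file size, compare parameter counts: a fully connected layer mapping a flattened feature map to a `(griding + 1) x row_anchors x lanes` output grid grows multiplicatively with those dimensions, while small conv layers stay cheap. A toy comparison (the shapes below are illustrative assumptions, not necessarily the repo's exact dimensions):

```python
import torch.nn as nn

def n_params(m: nn.Module) -> int:
    # Total number of learnable parameters in a module.
    return sum(p.numel() for p in m.parameters())

# Illustrative "backbone": two small conv layers.
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1),
    nn.Conv2d(32, 64, 3, padding=1),
)

# Illustrative MLP head: flattened features -> hidden -> full output grid.
flat_features = 1800                 # assumed flattened feature size
griding, rows, lanes = 100, 18, 4    # assumed grid / anchor / lane counts
head = nn.Sequential(
    nn.Linear(flat_features, 2048),
    nn.ReLU(),
    nn.Linear(2048, (griding + 1) * rows * lanes),
)

print("backbone params:", n_params(backbone))
print("head params:", n_params(head))
```

With these assumed shapes the dense head carries millions of parameters while the two conv layers carry tens of thousands, which is why quantization or shrinking the head's hidden size tends to matter far more for the exported file size than trimming the backbone.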

rock2021 commented 3 years ago

@cfzd thanks for the reply

1) State dict contains optimizers:

2) Auxiliary segmentation branch: I have not used the auxiliary segmentation branch, as I loaded the model with `use_aux=False`: `net = parsingNet(pretrained=False, backbone=cfg.backbone, cls_dim=(cfg.griding_num+1, cls_num_per_lane, cfg.num_lanes), use_aux=False)`

3) MLP to output the lane coordinates: do you recommend anything else to reduce the size?

kir486680 commented 3 years ago

@rock2021 Hi! How do you load and make predictions with your Core ML model? I keep getting this error: failed assertion `Texture Descriptor Validation`

rock2021 commented 3 years ago

@kir486680 That's an issue I also found. To resolve it:

Train and save the model without the optimizer dict: `state = {'model': model_state_dict}`

and then convert the model to Core ML; that should work.

kir486680 commented 3 years ago

@rock2021 Thanks for the suggestions. I saw that you provided a link to a Google Drive with models. Are the models on that page properly trained?

rock2021 commented 3 years ago

> @rock2021 Thanks for the suggestions. I saw that you provided a link to a Google Drive with models. Are the models on that page properly trained?

No, they are not. You may have to train a separate model with these changes; the models in the Google Drive were trained with the optimizer state dict included.

kir486680 commented 3 years ago

@rock2021 Could you please share your properly trained models? Unfortunately, I do not have a lot of space on my computer for the training data.

rodrigoGA commented 3 years ago

Hello @rock2021 , @kir486680

I'm going to work on this too. Did you get good results with the converted model? Could you share the model? I also found this project, which contains the model rewritten in TF2, but it needs a bit of work.

ibaiGorordo commented 3 years ago

Sorry to comment on a closed issue, but for future reference, here is a repository with the models converted to different frameworks (.tflite, .onnx, .mlmodel, ...): https://github.com/PINTO0309/PINTO_model_zoo/tree/main/140_Ultra-Fast-Lane-Detection

In particular, for TensorFlow Lite inference I have uploaded a repository with different inference scripts: https://github.com/ibaiGorordo/TfLite-Ultra-Fast-Lane-Detection-Inference
