GREAT-WHU / RoadLib

A lightweight library for instance-level visual road marking extraction, parameterization, mapping, etc.
GNU General Public License v3.0

How to train your own model? #7

Closed: lxy-mini closed this issue 1 month ago

lxy-mini commented 1 month ago

Hello, thank you very much. I was able to deploy the network on my device and run inference successfully with the pretrained weights you provided, but it takes 0.7 s to infer a single image, which obviously does not meet the real-time requirements of SLAM. Were the released weights trained with SegFormer-B5? Have you tried SegFormer-B0? Thank you again!

yuxuanzhou97 commented 1 month ago

0.7 s is too slow. I use SegFormer-B2, and it achieves 10~20 Hz without TensorRT on my laptop (RTX 4080). I believe something else is causing the huge latency, maybe the visualization?

BTW, there is probably a lot of room to optimize the segmentation. I used to use a CNN-based (rather than transformer-based) method for this task and it worked fine. In this project I chose SegFormer-B2 somewhat arbitrarily.

lxy-mini commented 1 month ago

Hello, today I found that setting with_labels=False and show=False in the inference call significantly improves the inference speed. I can now get 5.33 img/s. Is there any better way to speed up inference?
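For reference, here is a minimal sketch of an inference call with visualization disabled, assuming the mmseg 1.x MMSegInferencer API; the config and checkpoint paths are placeholders, and the exact keyword arguments accepted may vary with the mmseg version:

```python
from mmseg.apis import MMSegInferencer

# Placeholder config/checkpoint paths; substitute your own trained model.
inferencer = MMSegInferencer(
    model='configs/segformer/segformer_mit-b2_8xb2-160k_ade20k-512x512.py',
    weights='work_dirs/roadmarking_segformer_b2/iter_160000.pth',
)

# show=False and with_labels=False skip the on-screen drawing work that
# was dominating the per-image latency reported above.
result = inferencer('demo.jpg', show=False, with_labels=False)

pred = result['predictions']  # per-pixel class IDs as a numpy array
print(pred.shape, pred.dtype)
```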

yuxuanzhou97 commented 1 month ago

I'm not very sure about this. Try adding "return None" at the beginning of the visualize() function in mmseg/apis/mmseg_inferencer.py. Overall, I believe the segmentation could be done efficiently with some deployment effort, e.g. TensorRT. Do you need the training dataset?
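For anyone who would rather not edit the installed package, an equivalent way to apply this suggestion is to override visualize() at runtime; a minimal sketch, assuming the mmseg 1.x MMSegInferencer API:

```python
from mmseg.apis import MMSegInferencer

def _skip_visualize(self, *args, **kwargs):
    # Same effect as adding "return None" at the top of
    # MMSegInferencer.visualize(): all drawing work is skipped,
    # while predictions are still returned by __call__().
    return None

# Patch the method for this process only; no library files are modified.
MMSegInferencer.visualize = _skip_visualize
```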

lxy-mini commented 1 month ago

OK, thank you. Yes, I am going to train on some datasets next, such as ApolloScape, but I lack experience in this area.

yuxuanzhou97 commented 1 month ago

OK. If you need the toy dataset that I used, e-mail me (yuxuanzhou@whu.edu.cn) and I'll send it to you.

lxy-mini commented 1 month ago

Thank you very much, and I wish you all the best.

lxy-mini commented 1 month ago

While training on the ApolloScape dataset, I directly used the color images and labels provided on the website. Training keeps failing with an error saying that the label values exceed the number of classes output by the model. Did you convert the label format before training? The labels I am using look like this:

170927_063811892_Camera_5_bin

yuxuanzhou97 commented 1 month ago

Yes, I converted the labels provided by ApolloScape to the following format: {0: no category, 1: solid line, 2: dashed line, 3: indication markings (arrows, etc.), 4: zebra crossing, 5: stop line, 255: invalid}. You can refer to the example (which includes both ApolloScape and our self-made datasets) I sent you by email.
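For readers without the emailed example, here is a minimal Python sketch of such a color-to-ID conversion. This is not the author's script (that is the C++ apolloscape_transform.cpp attached later in this thread), and the COLOR_TO_CLASS table is a placeholder to be filled in from the official ApolloScape lane-marking class definitions:

```python
import cv2
import numpy as np

# Placeholder mapping from ApolloScape RGB label colors to the class IDs
# described above; fill it in from the official ApolloScape lane-marking
# class table. Unmapped pixels fall back to 0 (no category).
COLOR_TO_CLASS = {
    # (R, G, B): 1,    # solid line
    # (R, G, B): 2,    # dashed line
    # (R, G, B): 3,    # indication markings (arrows, etc.)
    # (R, G, B): 4,    # zebra crossing
    # (R, G, B): 5,    # stop line
    # (R, G, B): 255,  # invalid / ignore
}

def convert_label(color_label_path: str, out_path: str) -> None:
    """Convert an RGB ApolloScape label image into single-channel class IDs."""
    bgr = cv2.imread(color_label_path, cv2.IMREAD_COLOR)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    out = np.zeros(rgb.shape[:2], dtype=np.uint8)  # 0 = no category
    for color, class_id in COLOR_TO_CLASS.items():
        mask = np.all(rgb == np.asarray(color, dtype=np.uint8), axis=-1)
        out[mask] = class_id
    # The saved PNG now contains only values within the model's class range,
    # which avoids the "label exceeds number of classes" error above.
    cv2.imwrite(out_path, out)
```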

lxy-mini commented 1 month ago

OK, thanks. I have seen the dataset files you sent by email. Could you share the source code of your data conversion? My label files are not processed properly, so training cannot proceed, but I can train normally with your dataset.

yuxuanzhou97 commented 1 month ago

It's on another PC of mine. I'll look for it later today.

lxy-mini commented 1 month ago

OK, thanks a lot!

yuxuanzhou97 commented 1 month ago

apolloscape_transform.cpp.txt

Sorry for the delay. The script is the one attached above.

lxy-mini commented 1 month ago

Thank you again!