The project achieves FCWS, LDWS, and LKAS functions using only visual sensors, built on YOLOv5 / YOLOv5-lite / YOLOv6 / YOLOv7 / YOLOv8 / YOLOv9 / EfficientDet and Ultra-Fast-Lane-Detection-v2.
Hello! Sorry to bother you — I have an issue with this project.

If the project does not include training code, how can the model efficiently detect video clips from datasets with different field-of-view characteristics? For example, the model detects lanes well in these videos, but I want to use a dataset with different lane parameters.

If training code does exist, could you tell me where it is (and the test code as well)?

How do I evaluate the model's accuracy and detection efficiency?
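(For context, not part of the original question: accuracy would need ground-truth labels from the target dataset, e.g. mAP for the detectors or lane accuracy for Ultra-Fast-Lane-Detection-v2. Detection efficiency, on the other hand, can be measured without labels. Below is a minimal sketch of timing per-frame throughput, assuming any per-frame inference callable; `dummy_infer` is a hypothetical stand-in for the project's real inference call.)

```python
import time

def measure_fps(infer, frames, warmup=3):
    """Average FPS of `infer` over `frames`, excluding warm-up iterations."""
    for f in frames[:warmup]:
        infer(f)  # warm-up: exclude one-time init costs (model load, caches)
    start = time.perf_counter()
    for f in frames[warmup:]:
        infer(f)
    elapsed = time.perf_counter() - start
    return (len(frames) - warmup) / elapsed

# Hypothetical stand-in for the real detector; replace with the project's call.
def dummy_infer(frame):
    time.sleep(0.005)  # simulate ~5 ms of inference per frame
    return []

frames = [None] * 23  # placeholder "frames"; use real video frames in practice
print(f"{measure_fps(dummy_infer, frames):.1f} FPS")
```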
Thank you very much!