In this Advanced Lane Detection project, we apply computer vision techniques to augment video output with the detected road lane, the lane curvature radius and the vehicle's offset from the lane centre. The video was supplied by Udacity and captured with a front-facing camera mounted at the centre of the vehicle.
To run this project, you need Python 3 together with the numpy, opencv and matplotlib packages. Please install the following dependencies:
Install Anaconda (Python 3 version).
If you already have Anaconda installed for Python 2.7, create a new Python 3 environment:
conda create --name=yourNewEnvironment python=3 anaconda
source activate [environmentname]
After installing, check that the cv2 package is available:
python
import cv2
To process the videos, please also install moviepy.
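For example, with pip in the active environment:

pip install moviepy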
Advanced Lane Finding Project
Abstract—In this project we present an application for lane detection on urban roads based on image processing techniques. We evaluate the performance of the method on different types of datasets, such as straight and curved roads under different weather conditions, and present the results for most of the image datasets in the experiments and results section.
Image distortion occurs when a camera maps objects with 3D features in the real world onto a 2D image plane. This transformation is never perfect and introduces distortion, which changes the apparent shape and size of objects. Analysing and correcting the distortion is therefore the first pre-processing step, and it has a significant effect on the performance of lane detection: in a distorted image, straight lane lines can appear curved, which pulls the detection away from reality, and the image can look tilted so that some objects appear farther away or closer than they actually are. Camera calibration estimates the mapping between 3D points and their 2D image projections. It can be done with images of a regular grid pattern such as a chessboard: multiple images of the chessboard, taken from different distances and angles, are used as input, and the calibration returns the camera parameters, which are then applied to the original dataset images to undistort them. So, before starting the implementation of the lane detection pipeline, the first thing that should be done is camera calibration.
For this project, the calibration pipeline is implemented in the function calibrate in the file calibrate.py. The procedure follows these steps:
The procedure starts by preparing "object points", which are the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0. I am also assuming that the size of the chessboard pattern is the same for all images, so the object points are the same for each calibration image.
Then, the calibration images are loaded sequentially, converted to grayscale, and the chessboard pattern is searched for using cv2.findChessboardCorners. When the pattern is found, the corner positions are refined to sub-pixel accuracy using cv2.cornerSubPix, which improves the accuracy of the calibration. The corner coordinates are appended to the list containing all the image points, while the prepared "object points" are appended to the list containing all the object points.
The distortion coefficients and camera matrix are computed using the cv2.calibrateCamera() function, with the image and object points passed as inputs. It is important to check whether the result is satisfactory, since calibration is a nonlinear numerical procedure and might yield suboptimal results. To do so, the calibration images are read once again and undistortion is applied to them. The undistorted images are displayed in order to visually check that the distortion has been corrected. Once the data has been checked, the parameters are pickled and saved to a file. One sample of an input image, the image with chessboard corners, and the undistorted image is shown:
Original image | Chessboard corners | Undistorted image |
---|---|---|
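A minimal sketch of this calibration step follows. The actual implementation lives in calibrate.py; the chessboard pattern size, file paths and variable names below are illustrative assumptions, not the project's exact code.

```python
import glob
import pickle

import cv2
import numpy as np

# Illustrative assumptions: 9x6 inner-corner chessboard, images in camera_cal/
PATTERN = (9, 6)

# Object points: an (x, y, 0) grid, identical for every calibration image
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for fname in glob.glob('camera_cal/*.jpg'):
    img = cv2.imread(fname)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN, None)
    if found:
        # Refine the corner positions to sub-pixel accuracy
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        img_points.append(corners)
        obj_points.append(objp)

ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Visual sanity check on one image, then pickle the parameters for later use
undistorted = cv2.undistort(img, mtx, dist)
with open('calibration.p', 'wb') as f:
    pickle.dump({'mtx': mtx, 'dist': dist}, f)
```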
Before we move further on, let's reflect on what the camera matrix is. The camera matrix encapsulates the pinhole camera model. It gives the relationship between the coordinates of a point relative to the camera in 3D space and the position of that point on the image in pixels. If X, Y and Z are the coordinates of the point in 3D space, its position on the image (u, v) in pixels follows from the projection equation.
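In homogeneous coordinates, this is the standard pinhole relation:

$$ s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} $$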
where M is the camera matrix and s is a nonzero scalar. This equation will be used later on.
Pinhole camera model |
---|
The next step is to find the projective transform so that the original images can be warped to look as if the camera were placed directly above the road. One approach is to hand-tune the source and destination points, which are required to compute the transformation matrix. Alternatively, a script that does this for us can be created based on linear perspective geometry. Let's look at the perspective geometry in the renaissance painting "Architectural Veduta" by the Italian painter Francesco di Giorgio Martini. It is easy to note that all lines meet at a single point called the vanishing point. The second thing to note is that the square floor tiles centred horizontally in the image appear as trapezoids with horizontal top and bottom edges and side edges radiating from the vanishing point.
Architectural Veduta |
---|
Our goal is to achieve exactly the opposite: to transform a trapezoidal patch of the road in front of the car into a rectangular image of the road. To do so, the trapezoid needs to be defined as noted above: horizontal top and bottom, centred with respect to the vanishing point, and sides radiating from the vanishing point. Of course, to define that, the first task is to find the vanishing point.
The vanishing point is the place where all parallel lines meet, so to find it we use the images with straight lines, straight_lines1.jpg and straight_lines2.jpg. First, the images are undistorted, the Canny filter is applied and the most prominent lines are identified using cv2.HoughLinesP. These images show how the pipeline works:
Undistorted image | Edges | Image with lines |
---|---|---|
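A sketch of how these lines could be extracted is shown below. The thresholds, Hough parameters and file paths are illustrative assumptions rather than the values used in the project.

```python
import pickle

import cv2
import numpy as np

# Camera parameters pickled by the calibration step (file name is illustrative)
with open('calibration.p', 'rb') as f:
    calib = pickle.load(f)

img = cv2.imread('test_images/straight_lines1.jpg')
undistorted = cv2.undistort(img, calib['mtx'], calib['dist'])

gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)  # illustrative thresholds

# Probabilistic Hough transform: each entry is a segment (x1, y1, x2, y2)
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                        minLineLength=100, maxLineGap=50)
```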
All detected lines are added to a list. The vanishing point lies at the intersection of all the lines in the list. Unfortunately, when more than two lines are present, a unique intersection point might not exist. To overcome that, the vanishing point is selected as the point whose total squared distance from all the lines is minimal, so an optimisation procedure is employed. Each line found by the Hough transform can be represented by a point on it, pi, and its unit normal, ni; the coordinate of the vanishing point is vp. The total squared distance serves as the cost function, which is minimised by differentiating with respect to vp and setting the derivative to zero. The derivation is sketched below.
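One standard way to write this out, in the notation above (a reconstruction, not necessarily the project's exact derivation), is the following. The cost is

$$ E(\mathbf{v}_p) = \sum_i \left( \mathbf{n}_i^{T}\,(\mathbf{v}_p - \mathbf{p}_i) \right)^2 $$

Setting the gradient to zero, $\sum_i \mathbf{n}_i \mathbf{n}_i^{T} (\mathbf{v}_p - \mathbf{p}_i) = 0$, gives the closed-form solution

$$ \mathbf{v}_p = \left( \sum_i \mathbf{n}_i \mathbf{n}_i^{T} \right)^{-1} \sum_i \mathbf{n}_i \mathbf{n}_i^{T}\, \mathbf{p}_i $$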
Once the vanishing point is found, the top and bottom of the trapezoid are defined manually and the trapezoid edges can be calculated. The corners of the trapezoid are used as the source points, while the destination points are the four corners of the new image. The size of the warped image is defined in the file settings.py. After that, the matrix that defines the perspective transform is calculated using cv2.getPerspectiveTransform(). The procedure that computes the homography matrix of the perspective transform is implemented at the beginning of the Python script find_perspective_transform.py (lines 9 - 67). Images that illustrate the procedure follow. Please note that the bottom points of the trapezoid lie outside of the image, which is the reason for the black triangles shown in the warped image.
Finding VP with multiple lines | Trapezoid and vanishing point | Warped image |
---|---|---|
The obtained source and destination points are:
Source | Destination |
---|---|
375, 480 | 0, 0 |
905, 480 | 500, 0 |
1811, 685 | 500, 600 |
-531, 685 | 0, 600 |
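Using those points, the transform matrix would be obtained roughly as follows. The warped image size of 500x600 follows from the destination points; the file name and variable names are illustrative assumptions.

```python
import cv2
import numpy as np

# Source points (trapezoid corners in the undistorted image) and destination
# points (corners of the warped image), taken from the table above
src = np.float32([[375, 480], [905, 480], [1811, 685], [-531, 685]])
dst = np.float32([[0, 0], [500, 0], [500, 600], [0, 600]])

M = cv2.getPerspectiveTransform(src, dst)       # homography used for warping
M_inv = cv2.getPerspectiveTransform(dst, src)   # used later to unwarp the overlay

# Illustrative input: an already-undistorted road image
undistorted = cv2.imread('undistorted_frame.png')
warped = cv2.warpPerspective(undistorted, M, (500, 600))
```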
The selected range is quite large, but that was necessary in order to be able to find the lanes in the harder challenge video. In that video, the bends are much sharper than on the highway and could easily veer outside of the trapezoid, causing the whole pipeline to fail.
Once again, let's reflect on the matrix returned by cv2.getPerspectiveTransform(). It describes how the perspective transformation is performed and where the pixel with coordinates (u, v) in the original image ends up after the transformation. The destination of that pixel in the warped image is the point with coordinates (uw, vw). The new position is calculated from the homography.
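In homogeneous coordinates:

$$ s \begin{bmatrix} u_w \\ v_w \\ 1 \end{bmatrix} = H \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} $$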
where H is the homography matrix returned by cv2.getPerspectiveTransform() and s is a nonzero scalar.
The next important step is to estimate the resolution of the warped image in pixels per meter. It could also be done by hand but, as before, we create a script that does it for us. In the course materials, it was stated that the width of a lane is no less than 12 feet. To estimate the resolution in pixels per meter, the images with straight lines are used: they are warped to the top-down view and the distance between the lines is measured. The lower of the two distances is assumed to correspond to 12 feet, or 3.6576 meters.
To start, the images with straight lines are warped and converted to the HLS colour space. To find the lanes, a threshold is applied to the lightness component, and only a region of interest is taken into account. Since the lines were straight and heading towards the vanishing point, after warping they become vertical. The centroids of the blobs in the left and right halves of the image are calculated using image moments and the function cv2.moments(). Since the lane lines are vertical, the width of the lane in pixels is the difference between the x coordinates of the two centroids. That allows the calculation of the width in pixels and then the resolution in the x direction. This procedure is implemented between lines 71 and 91 of the script find_perspective_transform.py. The images that illustrate the procedure are shown below.
Warped image with parallel lane lines | Thresholded lightness | Lane with lines identified |
---|---|---|
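A sketch of this measurement follows. The input file name and the threshold are illustrative assumptions; only the 12-foot lane width and the use of cv2.moments() come from the description above.

```python
import cv2
import numpy as np

LANE_WIDTH_M = 3.6576   # 12 feet, the assumed lane width

def blob_centroid_x(mask):
    """x coordinate of a binary blob's centroid, computed via image moments."""
    m = cv2.moments(mask, binaryImage=True)
    return m['m10'] / m['m00']

# Illustrative input: the warped image of a straight-line test frame
warped = cv2.imread('warped_straight_lines1.png')
l_channel = cv2.cvtColor(warped, cv2.COLOR_BGR2HLS)[:, :, 1]
mask = (l_channel > 180).astype(np.uint8)        # illustrative threshold

h, w = mask.shape
left_x = blob_centroid_x(mask[:, :w // 2])               # left line centroid
right_x = blob_centroid_x(mask[:, w // 2:]) + w // 2     # right line centroid

pix_per_meter_x = (right_x - left_x) / LANE_WIDTH_M
```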
That is how the resolution in the x direction is found, but there is no such trick for the resolution in the y direction. Since nothing else was estimated manually, neither will this be. The camera matrix obtained by calibration holds some relative information about the resolutions in the x and y directions, and we can try to exploit that. To find the resolution in the y direction we have to do some more math.
Let's say we have a coordinate frame attached to the road, as shown in the images below. It is easy to see that the transformation from coordinates expressed in the road frame to coordinates in the warped image consists of a scaling and a shift. The scale in the x direction corresponds to the number of pixels per meter in the x direction, and the same holds for y. In mathematical form this can be written as shown below.
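With $r_x$, $r_y$ denoting the resolutions in pixels per meter and $(c_x, c_y)$ the pixel coordinates of the road-frame origin (symbols chosen here for illustration), the mapping is:

$$ \begin{bmatrix} u_w \\ v_w \\ 1 \end{bmatrix} = S \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix}, \qquad S = \begin{bmatrix} r_x & 0 & c_x \\ 0 & r_y & c_y \\ 0 & 0 & 1 \end{bmatrix} $$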
Road frame as seen by camera | Road frame on warped image |
---|---|
The same thing can be computed from the other side. Let's say that the pose of the road coordinate frame in the camera coordinate frame is given by the rotation matrix R = [r1 r2 r3] and the translation vector t. One important property that is going to be exploited is that R is orthogonal, meaning that each of the columns r1, r2, r3 has length 1. The pixel that corresponds to the point with road-frame coordinates Xr, Yr, Zr follows from the pinhole projection above. Since the road is planar, Zr = 0; applying the perspective transform and requiring the relation to hold for every point on the road lets us express r1 and r2 in terms of the columns h1, h2, h3 of the matrix (HM)^-1. Because r1 and r2 have unit length, we can compute the scalar s and, finally, ry. The steps are outlined below.
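A sketch of that derivation, in the notation above (S is the scaling-and-shift matrix from the previous equation; this is a reconstruction of the reasoning, not a copy of the project code):

With the road plane at $Z_r = 0$, the pinhole projection gives

$$ s_1 \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = M \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{t} \end{bmatrix} \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix} $$

Applying the perspective warp $H$ and comparing with the scaling-and-shift mapping,

$$ s_2\, S \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix} = H M \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{t} \end{bmatrix} \begin{bmatrix} X_r \\ Y_r \\ 1 \end{bmatrix} $$

Since this must hold for every point on the road,

$$ \begin{bmatrix} \mathbf{r}_1 & \mathbf{r}_2 & \mathbf{t} \end{bmatrix} = s\,(HM)^{-1} S \quad\Rightarrow\quad \mathbf{r}_1 = s\, r_x\, \mathbf{h}_1, \qquad \mathbf{r}_2 = s\, r_y\, \mathbf{h}_2 $$

Finally, using $\lVert \mathbf{r}_1 \rVert = \lVert \mathbf{r}_2 \rVert = 1$,

$$ s = \frac{1}{r_x \lVert \mathbf{h}_1 \rVert}, \qquad r_y = \frac{1}{s\,\lVert \mathbf{h}_2 \rVert} = r_x\, \frac{\lVert \mathbf{h}_1 \rVert}{\lVert \mathbf{h}_2 \rVert} $$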
A rather simple equation is obtained at the end, although it took a while to get there. This calculation is implemented in lines 91 and 92 of the script find_perspective_transform.py. The final result is:
pix/meter in x direction | pix/meter in y direction |
---|---|
46.567 | 33.0652548749 |
The pipeline is implemented in the class LaneFinder, which does the lane finding and is written in the file lane_finder.py. This class has two instances of the subclass LaneLineFinder, each of which is used to find a single line. The parts of the pipeline that have to be performed once per image (masking, calculating curvature, drawing an overlay, etc.) are encapsulated in LaneFinder. The parts that have to be performed twice, once for each line (fitting the line on a binary image), are encapsulated in LaneLineFinder. For easier explanation, the functionality used for single images is explained first. The pipeline for the video is almost the same, with some additional filtering included.
The pipeline for single images goes through several steps. Let us see how the initial image looks:
The image is undistorted first, then the perspective transformation is applied to the undistorted image. After that, the image is converted into the HLS and LAB colour spaces. The L channels of both HLS and LAB are used to track the bright regions of the image, while the B channel is used to track the yellow lines. Some filtering is also performed to reduce noise: cv2.medianBlur() is used since it preserves edges. Additionally, for the harder challenge video, the greenish areas are masked out. The part of the code that performs this spans lines 187 to 220. The undistorted and warped images are:
Undistorted original image | Warped undistorted image |
---|---|
To find the bright or yellow areas, the morphological TOP HAT operation is used. It isolates areas brighter than their surroundings, which makes the pipeline robust against lighting changes. In the selected case the lightness of the road surface changes, but we will see that this does not affect the top-hat operation. Edge detection is not used, since edges are strongly affected by image noise, which makes them unsuitable for the harder challenges. After applying the TOP HAT operation, the image is thresholded using an adaptive threshold, which adds a bit more to the overall robustness. This part is implemented in lines 225 to 237. The resulting images are:
Tophat on L channel from LAB | Tophat on L channel from HLS | Tophat on B channel from LAB |
---|---|---|
LaneLineFinder
Once the masks are calculated, a logical OR is applied between them to obtain the total mask. Since the threshold is kept quite low, there is a lot of noise. To prevent the noise from interfering with the lane finding procedure, the mask is eroded, which removes all regions smaller than the structuring element. Once that is finished, the masks are passed to LaneLineFinder, which actually looks for the line in the binary image. This part is implemented from line 236 to line 248. The results are:
Total mask | Eroded mask |
---|---|
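A sketch of how this mask construction could look is given below. The kernel sizes, threshold parameters and file name are illustrative assumptions; the project code in lane_finder.py differs in detail.

```python
import cv2
import numpy as np

warped = cv2.imread('warped_test_image.png')  # warped, undistorted road image

# Colour spaces: the L channels track bright lines, the B channel (LAB) tracks yellow
hls = cv2.cvtColor(warped, cv2.COLOR_BGR2HLS)
lab = cv2.cvtColor(warped, cv2.COLOR_BGR2LAB)
channels = [lab[:, :, 0], hls[:, :, 1], lab[:, :, 2]]

# Median blur keeps edges while suppressing noise
channels = [cv2.medianBlur(c, 5) for c in channels]

kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (31, 31))
total_mask = np.zeros(warped.shape[:2], dtype=np.uint8)

for ch in channels:
    # TOP HAT keeps regions brighter than their surroundings,
    # which makes the pipeline robust to changes in road brightness
    tophat = cv2.morphologyEx(ch, cv2.MORPH_TOPHAT, kernel)
    mask = cv2.adaptiveThreshold(tophat, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, 31, -5)
    total_mask |= mask  # logical OR of the individual masks

# Erosion removes blobs smaller than the structuring element (noise)
small_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
total_mask = cv2.erode(total_mask, small_kernel)
```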
When the mask is found, the search for the line begins. The initial point of the search is placed about 1.82 meters (6 feet) to each side of the image centre: under the assumption that the lane is 12 feet wide and that the car is in its middle, that would be spot on. Since that might not hold, the search is performed in the surroundings of that point. The window at the bottom of the image with the highest number of included points is found. After that, we move one layer up and perform the same search, but now starting from the maximum of the layer below. The search continues until the top of the image is reached, gradually eliminating points outside the maximal regions. The function that does this is LaneLineFinder.find_lane_line(). After the lane points have been isolated, a polynomial fit is performed using LaneLineFinder.fit_lane_line(). In that procedure some statistics are calculated which help determine whether the found line is good or not.
The statistics include, among others, quantities computed from the numpy.polyfit fit, and all of them have to be above some empirically defined thresholds. The maximal regions and the points used for fitting are shown in the images below (red for the left line, blue for the right line):
Line mask | Line points |
---|---|
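A simplified sketch of that layer-by-layer search and the polynomial fit follows. The window size, layer count and recentring rule are illustrative; the project's LaneLineFinder.find_lane_line() is more involved.

```python
import numpy as np

def find_line_points(mask, start_x, window_width=60, n_layers=10):
    """Walk up a binary mask layer by layer, collecting points near the
    current window centre and recentring on them for the next layer."""
    h, w = mask.shape
    layer_h = h // n_layers
    xs, ys = [], []
    x = start_x
    for layer in range(n_layers):
        y0 = h - (layer + 1) * layer_h
        y1 = h - layer * layer_h
        lo = max(int(x - window_width), 0)
        hi = min(int(x + window_width), w)
        py, px = np.nonzero(mask[y0:y1, lo:hi])
        if len(px) > 0:
            xs.extend(px + lo)
            ys.extend(py + y0)
            # Simplified recentring on the mean of the detected points
            # (the project keeps the window with the most points instead)
            x = lo + px.mean()
    return np.array(xs), np.array(ys)

# Usage (line_mask: eroded binary mask for one line; start_x is illustrative):
# xs, ys = find_line_points(line_mask, start_x=170)
# coeffs = np.polyfit(ys, xs, 2)   # second-order fit x = a*y^2 + b*y + c
```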
If the lines are found, the curvature and the position of the centre of the car are calculated using the functions get_curvature() and get_center_shift(). Since two lines are present, the mean values of the polynomial coefficients are used. The y coordinate at which the polynomials are evaluated is the bottom of the image. After that, the lines are drawn on the warped image, which is then unwarped and added to the original image. The last step is to print out the values of the curvature and the offset of the centre of the car. Results for all provided images can be found in output_images. Here is the result for the discussed case:
Warped lines | Final result |
---|---|
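For reference, with a fit x = a*y^2 + b*y + c expressed in meters, the two quantities could be computed roughly as sketched below. The function names match those mentioned above, but the bodies are an illustrative sketch, not the project's exact code.

```python
import numpy as np

def get_curvature(coeffs, y_eval):
    """Radius of curvature of x = a*y^2 + b*y + c at y = y_eval (meters)."""
    a, b, _ = coeffs
    return (1 + (2 * a * y_eval + b) ** 2) ** 1.5 / abs(2 * a)

def get_center_shift(coeffs, y_eval, image_center_x):
    """Signed offset of the lane centre from the image centre (meters)."""
    lane_center_x = np.polyval(coeffs, y_eval)
    return lane_center_x - image_center_x
```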
For the videos, the pipeline follows the same basic steps as for single images. Additionally, because of the temporal dimension, some extra filtering is applied, such as averaging the polynomial coefficients over the last several frames.
The pipeline is run on all three provided videos. It works great for project_video.mp4 and challenge_video.mp4. It also works with harder_challenge_video.mp4, but loses the lane a few times.
Here are links to the videos:
The biggest issue by far was sudden changes in lighting conditions. In those cases, the lines either get completely lost (going from bright to dark) or the image gets filled with noise coming from white spots. Although I have done my best to make the pipeline robust against such changes, they can still cause major problems, which is evident in the harder challenge video. More advanced filtering and brightness equalisation techniques have to be examined.
Averaging the polynomial coefficients over the last couple of iterations is not ideal. Its effects can be seen in harder_challenge_video, where the lane computed by the code lags behind the lane in the image, especially in sharp bends. A better filtering technique has to be applied.
1-Nanodegree Programs: https://www.udacity.com/nanodegree Nanodegree Plus (job guarantee): https://www.udacity.com/nanodegree/plus UConnect (weekly in-person study sessions): https://www.udacity.com/uconnect
2-Machine Learning Engineer Nanodegree by Google (Currently Available): https://www.udacity.com/course/machine-learning-engineer-nanodegree-by-google--nd009
3-Artificial Intelligence for Robots (Free Course): https://www.udacity.com/course/artificial-intelligence-for-robotics--cs373
4-Intro to Statistics (Free Course): https://www.udacity.com/course/intro-to-statistics--st101
5-Deep Learning (Free Course): https://www.udacity.com/course/deep-learning--ud730
6-Programming Foundations with Python (Free Course): https://www.udacity.com/course/programming-foundations-with-python--ud036
7-Introduction to Computer Vision: https://www.udacity.com/course/introduction-to-computer-vision--ud810 Cool topics a self-driving car course should cover: Deep Learning, Computer Vision, Vehicle Dynamics, Controllers, Localization, Mapping (SLAM), Sensors & Fusion
https://medium.com/udacity/self-driving-car-employers-f24c0013cf1d#.3jlgb1c1i
https://www.quora.com/Are-Udacity-Nanodegrees-worth-it-for-finding-a-job
http://blog.udacity.com/2015/03/udacity-nanodegree-reviews-your-questions-answered.html
http://www.bbc.com/news/technology-36952252
http://money.cnn.com/2016/04/04/technology/george-hotz-comma-ai-andreessen-horowitz/
https://www.youtube.com/watch?v=fQmOpxEvpvI
https://www.cbinsights.com/blog/autonomous-driverless-vehicles-corporations-list/
http://www.theverge.com/2016/6/6/11866868/comma-ai-george-hotz-interview-self-driving-cars
Trucking Industry: http://ot.to/
Racing Industry: https://blogs.nvidia.com/blog/2016/06/03/autonomous-vehicles/
https://www.cityscapes-dataset.com/
http://robotcar-dataset.robots.ox.ac.uk/examples/
http://selfracingcars.com/blog/2016/7/26/polysync
http://data.selfracingcars.com/
Stanford Convolutional Neural Networks for Visual Recognition http://cs231n.github.io/
Deep Learning Framework written in Swift to use on apple devices (written by @amund) http://deeplearningkit.org/
Image segmentation from comma.ai https://commaai.blogspot.de/2016/07/self-coloring-books.html?m=1
The goals / steps of this project were the following:
1-Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
2-Apply a distortion correction to raw images.
3-Use color transforms, gradients, etc., to create a thresholded binary image.
4-Apply a perspective transform to rectify binary image ("birds-eye view").
5-Detect lane pixels and fit to find the lane boundary.
6-Determine the curvature of the lane and vehicle position with respect to center.
7-Warp the detected lane boundaries back onto the original image.
8-Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.