Closed — thomasfermi closed this issue 3 years ago
Ok, I thought about this for a while and I came up with another design. The CameraGeometry and LaneDetector classes from the previous exercises stay as they are. In this chapter we implement a class CalibratedLaneDetector which inherits from LaneDetector. I will develop a suggestion for that; then it can be discussed.
I came up with this initial design: CalibratedLaneDetector
Maybe this can replace the CameraCalibrator. This is just the solution code, and we would need to think about what the students will implement.
There is also a minimal test notebook, but quite some work is still needed there. The test notebook should probably loop over the images in a video and feed them to the CalibratedLaneDetector, which should then determine pitch and yaw and average them over time.
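The averaging over time mentioned above could be sketched roughly like this (CalibrationBuffer and its method names are hypothetical illustrations, not the actual CalibratedLaneDetector API):

```python
import numpy as np

class CalibrationBuffer:
    """Hypothetical helper: collects per-frame pitch/yaw estimates
    and reports their running mean over the video so far."""
    def __init__(self):
        self.pitches, self.yaws = [], []

    def update(self, pitch, yaw):
        # One (pitch, yaw) estimate per video frame
        self.pitches.append(pitch)
        self.yaws.append(yaw)

    @property
    def mean_pitch_yaw(self):
        # Time-averaged calibration estimate
        return np.mean(self.pitches), np.mean(self.yaws)

buf = CalibrationBuffer()
# Stand-in for per-frame estimates produced by the detector:
for pitch, yaw in [(0.02, -0.01), (0.03, 0.00), (0.01, -0.02)]:
    buf.update(pitch, yaw)
print(buf.mean_pitch_yaw)
```

A real version would probably want a robust average (e.g. discarding frames where no vanishing point was found), but a plain mean illustrates the idea.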
@MankaranSingh What do you think of this approach?
Update: the imageio-based code shall remain; environment.yml needs to add pip install imageio-ffmpeg.
The residual method looks good! We would need to add this method to the hints section; currently, it only contains the curvature-based method. What do you think: should we remove the curvature method from the chapter, or also add its code to the solution code?
Also, for video reading, why not use cv2.VideoCapture? It would save us from adding a new dependency.
Hi @MankaranSingh,
What do you think: should we remove the curvature method from the chapter, or also add its code to the solution code?
I think I would remove it for now.
Also, for video reading, why not use cv2.VideoCapture? It would save us from adding a new dependency.
You are right. I was not using OpenCV because it was a bit cumbersome, but I finally got it running now, and I changed the test notebook.
Remaining work to close the issue:

- Remove code/solutions/camera_calibration/camera_calibrator.py, since calibrated_lane_detector.py contains all the stuff that is needed.
- The CameraCalibrator shall get doc strings and #TODO comments to help the students with the implementation.
- We should add a function show_vanishing_point(self, image, mpl_axis), which determines the vanishing point from the image and then writes a plot to the mpl_axis object. This way we can reduce the boilerplate code in the book chapter itself.
- get_vanishing_point shall be renamed to get_intersection. It should include a check for m1 == m2 to avoid division by zero. There shall be a new function get_vanishing_point(self, image) that returns u_i, v_i.
- get_py_from_vp(self, u_i, v_i, K) can live without the "K" argument. It can just use K = self.ld.cg.intrinsic_matrix or even directly Kinv = self.ld.cg.inverse_intrinsic_matrix.

EDIT: Regarding the last point: maybe giving K as an argument is not such a bad idea after all. Now that the CameraCalibrator is added, I feel that the design of the relations between CameraGeometry, LaneDetector, and CameraCalibrator is not that nice. Maybe the LaneDetector should not hold a reference to the CameraGeometry, but rather have it passed as a function argument when needed. I will think about this a bit.
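For reference, here is a minimal sketch of what get_intersection and get_py_from_vp could look like. The bodies and sign conventions are my assumptions, not the repository's actual code, and may need adjusting to match the book's CameraGeometry:

```python
import numpy as np

def get_intersection(line1, line2):
    """Intersect two image lines given as (slope, intercept) pairs.

    Returns None for (near-)parallel lines instead of dividing
    by zero -- the m1 == m2 check discussed above."""
    m1, c1 = line1
    m2, c2 = line2
    if np.isclose(m1, m2):
        return None  # parallel lines have no finite intersection
    u = (c2 - c1) / (m1 - m2)
    v = m1 * u + c1
    return u, v

def get_py_from_vp(u_i, v_i, K):
    """Pitch and yaw of the camera from the vanishing point (u_i, v_i).

    The back-projected ray K^-1 @ (u_i, v_i, 1) points along the road
    direction in the camera frame; the signs below are assumptions."""
    Kinv = np.linalg.inv(K)
    r = Kinv @ np.array([u_i, v_i, 1.0])
    r /= np.linalg.norm(r)
    yaw = -np.arctan2(r[0], r[2])
    pitch = np.arcsin(r[1])
    return pitch, yaw

# Two lane-boundary lines meeting at (1.0, 1.0):
print(get_intersection((1.0, 0.0), (-1.0, 2.0)))  # -> (1.0, 1.0)
```

Passing K explicitly (rather than reaching into self.ld.cg) keeps both functions pure, which is one argument for the "EDIT" above.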