chriswernette opened this issue 5 years ago
Would someone please elaborate on #3? Do we consider eye movement and the direction the camera is facing as the same? If not, how do we measure eye movement?
I'm more than happy to work on any aspect, with a slight preference for 2 or 3.
@pmpaquet I should have elaborated more. Professor Corso said that for eye movement we just treat it as a black box provided by some other CV system, so we would be handed coordinates of where the user is looking. I think the main task would be to take the user's eye locations and run some screening to make sure each one is a valid location. For example, if the user is looking at the dash instead of the windshield, we'd clamp the x, y coordinates to the nearest point on the windshield.

This part of the project would also be responsible for setting up example eye-location vectors we could use, and for interfacing with the homography team so the output of this module is consistent, i.e. it always corresponds to the same reference point (the center of the image, the upper left corner, or however you set it up). I think we can flesh this out a bunch; maybe we add a simple "smoothing" algorithm to impress him, since eye movements can be really quick and jittery in real life (a rough sketch of the clamping and smoothing steps is below). We could also discuss how this is implemented in industry in our paper/poster.
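Just to make the idea concrete, here's a minimal sketch of what the screening + smoothing could look like. It assumes the windshield region is an axis-aligned rectangle in image coordinates and that the upstream system hands us one (x, y) gaze point per frame; the `EyeFilter` class name, the rectangle, and the smoothing factor are all placeholders, not the actual interface we'd agree on with the homography team.

```python
import numpy as np

class EyeFilter:
    """Hypothetical screening + smoothing for raw gaze coordinates.

    Assumes the windshield is an axis-aligned rectangle
    (x_min, y_min, x_max, y_max) in image coordinates and that the
    upstream CV system provides one (x, y) gaze point per frame.
    """

    def __init__(self, windshield_box, alpha=0.3):
        self.x_min, self.y_min, self.x_max, self.y_max = windshield_box
        self.alpha = alpha      # smoothing factor: lower = smoother but laggier
        self.smoothed = None    # last smoothed gaze point

    def _clamp(self, x, y):
        # If the gaze falls outside the windshield (e.g. on the dash),
        # snap it to the nearest point on the windshield boundary.
        x = min(max(x, self.x_min), self.x_max)
        y = min(max(y, self.y_min), self.y_max)
        return x, y

    def update(self, x, y):
        # Screen the raw gaze point, then apply an exponential moving
        # average to damp quick, jittery eye movements.
        x, y = self._clamp(x, y)
        point = np.array([x, y], dtype=float)
        if self.smoothed is None:
            self.smoothed = point
        else:
            self.smoothed = self.alpha * point + (1 - self.alpha) * self.smoothed
        return tuple(self.smoothed)


# Example: the last point is off the windshield, so it gets clamped, then smoothed.
f = EyeFilter(windshield_box=(100, 50, 1180, 400))
for raw in [(640, 200), (650, 210), (2000, 500)]:
    print(f.update(*raw))
```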
I see a few main modules that need to be written:
Extra minor modules/work that needs to take place:
I'm really interested in problem 1 and would like to work on a Hough transform/Canny edge method to implement it. I'm also open to a segmentation/deep learning method; maybe we implement that second and then benchmark the two? I feel like the Hough transform method will be relatively quick to get up and running and test, but it might not give the best results. A rough sketch of that pipeline is below.
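For reference, a minimal Canny + probabilistic Hough pipeline in OpenCV might look like the sketch below. This is just an illustration of the technique, not the actual module; the file path, blur kernel, Canny thresholds, and Hough parameters are placeholder values we'd have to tune on real frames.

```python
import cv2
import numpy as np

def detect_lines(image_path):
    """Hypothetical Canny edge + probabilistic Hough transform pipeline.

    All thresholds below are placeholders and would need tuning on our data.
    """
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)   # suppress noise before edge detection

    edges = cv2.Canny(blurred, 50, 150)           # low/high hysteresis thresholds

    # Probabilistic Hough transform: returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                            threshold=50, minLineLength=40, maxLineGap=10)

    out = img.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(out, (x1, y1), (x2, y2), (0, 255, 0), 2)  # draw detected segments
    return edges, out
```

The nice thing about this approach is that it's fast to prototype, so we'd have a baseline to benchmark a segmentation/deep learning method against later.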