@vardanagarwal - Let me know your thoughts on how we can proceed. I do see an accuracy issue as well. Looking at 2 use cases.
I have tried changing the following:

```python
# clean up the thresholded eye mask before finding contours
thresh = cv2.erode(thresh, None, iterations=20)
thresh = cv2.dilate(thresh, None, iterations=20)
thresh = cv2.medianBlur(thresh, 3)
```
As I increase the iterations, I do see that I get a lot more data points, but the accuracy still needs to be looked at.
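For context, here is roughly how those three operations might sit in the pipeline, assuming `thresh` starts as a plain binary threshold of a grayscale eye crop (the function name and defaults below are illustrative, not the repo's actual code):

```python
import cv2

def preprocess_eye(eye_gray, threshold=75, iterations=20):
    """Threshold a grayscale eye crop and clean the mask before contouring."""
    _, thresh = cv2.threshold(eye_gray, threshold, 255, cv2.THRESH_BINARY)
    thresh = cv2.erode(thresh, None, iterations=iterations)   # shrink noise blobs
    thresh = cv2.dilate(thresh, None, iterations=iterations)  # restore pupil area
    thresh = cv2.medianBlur(thresh, 3)                        # smooth ragged edges
    return thresh
```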
I guess we can create some videos and manually annotate them first. This would make the benchmarking process much easier. Then we can use some metric like mIoU with different processing operations to find the best one.
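For the mIoU idea, something like this could score a processing variant against the annotations (a rough sketch; it assumes we rasterize the annotations into per-frame binary masks):

```python
import numpy as np

def iou(pred_mask, gt_mask):
    """IoU between two binary masks (nonzero = foreground)."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(np.logical_and(pred, gt).sum()) / union

def mean_iou(pred_masks, gt_masks):
    """Average IoU over a sequence of frame pairs."""
    scores = [iou(p, g) for p, g in zip(pred_masks, gt_masks)]
    return sum(scores) / len(scores)
```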
For the thresholding portion, the first thing I will do is separate the thresholds for the left and right eye, since lighting from one side heavily impacts them. The next thing we can do to automate the thresholds is use some kind of calibration function that checks various thresholds and finds the most suitable one.
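The calibration could be as simple as sweeping candidate thresholds and scoring each one (a sketch; `score_fn` is a placeholder for whatever metric we settle on, e.g. the mIoU above):

```python
import cv2

def calibrate_threshold(eye_gray, score_fn, candidates=range(5, 255, 5)):
    """Sweep candidate thresholds and keep the best-scoring one.

    Run once per eye, so uneven lighting can give the left and
    right eye different thresholds.
    """
    best_t, best_score = None, float("-inf")
    for t in candidates:
        _, mask = cv2.threshold(eye_gray, t, 255, cv2.THRESH_BINARY)
        score = score_fn(mask)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```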
Any ideas on how to collaborate better? I am in PST.
I am sorry, I don't know the full form of PST. I have added a video in the folder eye_tracker. Along with it, you can find its annotations, with points for the centers of the eyeballs.
Regarding collaboration, do you have any ideas on how we can move forward?
My bad. PST stands for Pacific Standard Time. I will look at the sample video you have added.
The video looks great, but it has too many variations. I can create a PR with a simpler video with no head tilt and just simple eye movements.
Okay, that will work.
@vardanagarwal - Please take a look at this - https://github.com/ctippur/Proctoring-AI/tree/master/eye_tracking. If it is ok, I can create a PR.
Yeah it is okay.
PR - https://github.com/vardanagarwal/Proctoring-AI/pull/26
Feel free to merge. I have added some rough benchmarking that we can try and validate.
@vardanagarwal Thanks for merging. Here is what I did so far.
1. Read the video:

```python
cap = cv2.VideoCapture("eye_tracking/center_left_center.mp4")
```

2. Rotate the image after reading:

```python
frame_count = 0
frame_max_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
right_x = []
right_y = []
left_x = []
left_y = []
while frame_count < 10:  # restrict processing to the first 10 frames
    ret, img = cap.read()
    if ret is False:
        break
    img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    # ... per-frame processing (contouring etc.) and frame_count += 1 follow
```

3. Changed `contouring` to return cx and cy.
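For step 3, computing the center from image moments looks roughly like this (a sketch; the repo's actual `contouring` may differ in signature and drawing behaviour):

```python
import cv2

def contour_center(thresh):
    """Return (cx, cy) of the largest contour in a binary mask, or None."""
    cnts, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not cnts:
        return None
    cnt = max(cnts, key=cv2.contourArea)
    M = cv2.moments(cnt)
    if M["m00"] == 0:
        return None  # degenerate contour with zero area
    return int(M["m10"] / M["m00"]), int(M["m01"] / M["m00"])
```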
Observation:
Interestingly, `contouring` seems to be processing more frames than what's added. I am controlling the loop to restrict it to just 10 frames, yet I seem to be getting 20 cx's.
Observation #2:
If I remove `cv2.createTrackbar('threshold', 'image', 75, 255, nothing)`, `contouring` returns None.
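One guess: if the code reads the value with `cv2.getTrackbarPos('threshold', 'image')` each frame, removing the trackbar leaves no valid threshold, the mask comes out empty, and no contours are found. A fallback to a fixed value would decouple the processing from the UI (a sketch; the assumption about `getTrackbarPos` is mine):

```python
import cv2

FIXED_THRESHOLD = 75  # same default the trackbar was created with

def get_threshold(use_trackbar=True):
    """Read the UI trackbar when present, else fall back to a constant."""
    if use_trackbar:
        return cv2.getTrackbarPos('threshold', 'image')
    return FIXED_THRESHOLD
```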
@vardanagarwal let me know if we can get on a call or Zoom (I can set it up).
Yeah sure! We can do that. I'll also explain how the code is working at the moment.
@vardanagarwal I have tried to reach out to you on LinkedIn. Hope I reached the right person.
Please upload the requirements file for this project, and note in the README which Python version is used.
I am unable to get this project running properly. Can anybody give me the steps to run it?
Looking to improve the data points we get after processing thresholds; it looks like some frames are lost when the threshold is too small.
Also, trying to see if we can improve accuracy. Not sure if this is an issue. It would be good to benchmark the outcome.