vardanagarwal / Proctoring-AI

Creating a software for automatic monitoring in online proctoring
MIT License
544 stars 329 forks

shape_68.dat missing #23

Closed ctippur closed 3 years ago

ctippur commented 3 years ago

Nice work. I was trying to run the sample code, but I seem to be missing the shape_68.dat file. I see that you have tried to add it, but I don't see the file.

vardanagarwal commented 3 years ago

Thanks @ctippur. I switched from the Dlib facial landmark model to a TensorFlow model, which I am pretty sure you can find in the repo. However, if you want to continue using Dlib, you can download the model from here: https://github.com/davisking/dlib-models/blob/master/shape_predictor_68_face_landmarks.dat.bz2.

Hope that solves the issue.

ctippur commented 3 years ago

Thanks @vardanagarwal. I am looking for the most accurate method to determine the landmark locations. If I understand right, the TF model is the better way to go?

vardanagarwal commented 3 years ago

Yeah, based on my testing I felt it was better. You can have a look at the results here: https://towardsdatascience.com/robust-facial-landmarks-for-occluded-angled-faces-925e465cbf2e?sk=505eb1101576227f4c38474092dd4c22

ctippur commented 3 years ago

Vardan,

Thank you. That's quite impressive. I modified the contouring function to see if I can get just the pupil-center coordinates.

I seem to be getting a lot of `(<class 'ValueError'>, ValueError('max() arg is an empty sequence'), <traceback object at 0x148fe4b90>)` errors. I would also like to improve the precision. Any ideas on how I can improve the precision without losing frames?

```python
try:
    cnt = max(cnts, key=cv2.contourArea)
    M = cv2.moments(cnt)
    cx = int(M['m10'] / M['m00'])
    cy = int(M['m01'] / M['m00'])

    if right:
        cx += mid

    return eye, cx, cy
except:
    print(sys.exc_info())
    return eye, None, None
```
vardanagarwal commented 3 years ago

That's to do with the threshold. If your threshold is very small, then no contour will be detected, leading to that error. You can add an if condition checking that `len(cnts) > 0`.

I am trying to figure out how to automate the thresholding part; then this won't be an issue.

Can you elaborate on what you mean by precision here, and why you feel frames might be lost if you try to improve it?

ctippur commented 3 years ago

You are right. The threshold for the most part seems to be 0. I will have to look at the precision question again. I have a still video, and I was hoping cx and cy would be pretty much the same across frames, but so many frames are being rejected.

To improve the thresholding, I am looking at some variations like the ones below. Let me know if we can collaborate.

```python
thresh = cv2.erode(thresh, None, iterations=2)
thresh = cv2.dilate(thresh, None, iterations=4)
thresh = cv2.medianBlur(thresh, 3)
thresh = cv2.bitwise_not(thresh)
```
vardanagarwal commented 3 years ago

Yeah, sure! As mentioned earlier, it is currently controlled by a trackbar, which means it requires manual work, so I am definitely up for it.

ctippur commented 3 years ago

Quick benchmarking:

| Threshold | Left x | Left y | Right x | Right y |
|----------:|-------:|-------:|--------:|--------:|
| 1         | 180    | 180    | 199     | 199     |
| 3         | 165    | 165    | 180     | 180     |
vardanagarwal commented 3 years ago

It would be better to close this issue and open another one with an apt name, so that anyone else looking to contribute can find it.

ctippur commented 3 years ago

Perfect. Closing this to reopen another issue.