Blackhorse1988 closed this issue 5 years ago.
`contours[-1]` is the contour of the eye frame, so it can't give you the pupil position. You can read this to understand why.
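To illustrate the idea, here is a simplified sketch (not the library's actual code, and the names/areas are made up): after sorting the detected contours by area, the largest one is the border of the cropped eye frame itself, so the iris blob ends up second to last.

```python
# Simplified sketch: why, after sorting contours by area, index -1 is the
# eye frame border and index -2 is the iris blob. These dicts stand in for
# the contours cv2.findContours() would return on a binarized eye image.
contours = [
    {"name": "noise", "area": 12},       # small speckle from binarization
    {"name": "eye_frame", "area": 900},  # outline of the whole eye crop
    {"name": "iris", "area": 300},       # the blob we actually want
]
contours.sort(key=lambda c: c["area"])
print(contours[-1]["name"])  # eye_frame -> why contours[-1] can't work
print(contours[-2]["name"])  # iris      -> why contours[-2] is used
```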
It seems that the auto-calibration gives you a wrong threshold value for the binarization. That's strange. We need to dig in. Go back to `contours[-2]`, add this to your `example.py` and send me screenshots of what you get:
```python
try:
    cv2.imshow('Binarized left eye', gaze.eye_left.pupil.iris_frame)
    cv2.imshow('Binarized right eye', gaze.eye_right.pupil.iris_frame)
except Exception as e:
    print(e)
```
I guess I edited my previous message after you ran the test. Add the code I gave you, and you should see two new frames on your screen. Take a screenshot and send it to me.
I can't see your picture if you add new replies to this thread by email. Please, go to Github. :)
Yes, the threshold value given by the auto-calibration system is definitely wrong with you.
Before the `while True:` loop, add this to your `example.py` file:

```python
results_displayed = False
```
And then, in the loop, add this:

```python
if gaze.calibration.is_complete() and not results_displayed:
    results_displayed = True
    print(gaze.calibration.thresholds_left)
    print(gaze.calibration.thresholds_right)
    try:
        cv2.imshow('Right eye', gaze.eye_right.frame)
        cv2.imshow('Left eye', gaze.eye_left.frame)
        cv2.imshow('Binarized left eye', gaze.eye_left.pupil.iris_frame)
        cv2.imshow('Binarized right eye', gaze.eye_right.pupil.iris_frame)
    except Exception as e:
        print(e)
```
Keep looking at the center for 5 seconds when you start the program. Then, send me a screenshot containing these 4 frames and also what you get in your terminal.
Thanks, but I also need a screenshot containing the 4 frames you get.
Hey :) Did you find anything?
Is that normal?
Now it works! I don't understand it!
Ok, it seems there is a problem with the threshold. Every time I change the lighting setup behind my camera, it either works or it doesn't.
Does the code also work in the dark?
Hi @Blackhorse1988 Your video is really impressive, congrats! Do you still need help with the auto-calibration system? If it's not totally accurate for you or your environment, you can send me video samples of you looking in different positions and I will use them to improve the algorithm.
Hey :) It's your impressive code :) I changed the threshold manually to the best value, so an auto-calibration would be great. I will do that tomorrow. Thank you very much for your time and support!
I am working with that robotic arm in my job, and I would like to do the same with the glasses from Pupil Labs too.
Do you have experience making the code work on Android?
You did well to pass your own threshold. What value did you choose?
With everyone I tried it with, the auto-calibration system worked pretty well. Too bad it's not the same for you. According to your screenshots, the program calculated consistent values for which it obtained good iris sizes. So you shouldn't get binarized frames with only white pixels. In the worst case, you should get a wrong iris binarization, but with some black pixels. I'm a little surprised and I would like to understand why, so I will keep you posted after running tests with your samples.
OpenCV works on Android, so it seems to me that it would be possible to run a version of this code on a mobile device, with Java/Kotlin, or maybe with tools that make it possible to run Python code.
About Pupil Labs, I don't know it well; I've never tried it.
Hi Antoine,
I chose around 100, and in the evening 70, and it worked well. Thank you very much. I would like to progress with your great code, and if you want, you can help us; we are a supplier of robotic arms for disabled people. This is the background of all this.
I have created an application and I would like to run it on Android. Do you have experience with that kind of programming? As I said, I am a rookie in Python, OpenCV, etc. and more of an all-rounder. I would like to combine an eye control like this with my application, because then I don't need an extra interface and I can use my developed Bluetooth module to transmit the data to the microcontroller. I would also pay for your help; we can talk about it.
Best regards
Peter Schieder, Staatl. gepr. Medizintechniker, Assistive Technologies, EMEA
If I mounted a camera on glasses and only detected one eye, would that also work with your code?
Hi Antoine,
do you have an idea how to modify the code to track just one eye?
Hi Antoine, I am working on a Raspberry Pi 3 and have followed each step, as well as the issues discussed in the comments, but I am still getting frame delay.
Hi @Blackhorse1988 Sorry for the delay! To track with only one eye, for example the left one:

In `def pupils_located(self):`, remove:

```python
int(self.eye_right.pupil.x)
int(self.eye_right.pupil.y)
```

In `def _analyze(self):`, remove:

```python
self.eye_right = Eye(frame, landmarks, 1, self.calibration)
```

Then update `horizontal_ratio()`, `vertical_ratio()` and `is_blinking()` so that the average of both eyes is no longer calculated.

Happy coding!
Hi @nishthachhabra
I don't know much about delays; it has always worked fine on my laptop. I think it's the `cv2.imshow()` in `example.py` that causes most of the delay. Do you really need to display the result? If yes, you can try to display the images with something else. Also try changing the value passed to `cv2.waitKey()` in the example.
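As a rough illustration of why the `cv2.waitKey()` argument matters for the loop rate (the 40 ms per-frame processing cost below is an assumed figure for illustration, not a measurement):

```python
# Back-of-the-envelope: waitKey(n) blocks roughly n ms on every loop
# iteration, on top of the frame processing time itself.
processing_ms = 40  # assumed per-frame cost on a Raspberry Pi 3 (illustrative)

for wait_ms in (1, 30):
    loop_ms = processing_ms + wait_ms
    print(f"waitKey({wait_ms}): ~{1000 / loop_ms:.1f} FPS")
# waitKey(1):  ~24.4 FPS
# waitKey(30): ~14.3 FPS
```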
Hi @antoinelame,
I'm having a problem similar to @Blackhorse1988: most of the time the application only identifies blinking and looking at the center. A few times it identifies looking to the right. I could not at any point get it to identify looking up or down.
Hi @EderSant,
Yes, you're right, the blink detection can be improved and I will work on it. However, the "blinking state" doesn't mean that the program doesn't know whether you are looking to the right or to the left. If you are using the demo, the display of "Blinking" overrides the display of the gaze direction. In this case, you can just remove these lines in `example.py`:

```python
if gaze.is_blinking():
    text = "Blinking"
```

About the identification of up/down, I didn't write `is_up()` and `is_down()` functions because I think the margin of error is too large. But you can do it by using `vertical_ratio()`.
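As a starting point, a hedged sketch of what such helpers could look like. The 0.35/0.65 thresholds are illustrative guesses to tune by hand, not values from the library; `vertical_ratio()` returns 0.0 at the extreme top and 1.0 at the extreme bottom.

```python
# Hypothetical is_up() / is_down() helpers built on top of vertical_ratio().
# The 0.35 and 0.65 thresholds are illustrative guesses, not library values.

def is_up(vertical_ratio):
    """Gaze is 'up' when the pupil sits high in the eye (ratio near 0.0)."""
    return vertical_ratio is not None and vertical_ratio <= 0.35

def is_down(vertical_ratio):
    """Gaze is 'down' when the pupil sits low in the eye (ratio near 1.0)."""
    return vertical_ratio is not None and vertical_ratio >= 0.65

print(is_up(0.2), is_down(0.2))  # True False
print(is_up(0.8), is_down(0.8))  # False True
print(is_up(0.5), is_down(0.5))  # False False
```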
Please, next time, open your own issue. 😉
Well, the eye tracking works fine when I set `contours[-1]`; with `contours[-2]` no pupils are detected.
But the tracking doesn't follow the pupils when I move just my eyes. When I move my head, it works. Do you have an idea?