pperle / gaze-tracking-pipeline

full camera-to-screen gaze tracking pipeline

use of calibration data #1

Open enrico310786 opened 2 years ago

enrico310786 commented 2 years ago

Hi, I cannot understand how the calibration data obtained with the main.py script in the gaze-data-collection project are used in this project. In that case a .csv file is produced and the calibration images are saved. How and where are these data used to optimize the projection of the 3D gaze vector onto a 2D point on the monitor? The paper "Efficiency in Real-time Webcam Gaze Tracking" describes three ways to perform this type of monitor calibration: geometric, machine learning, and hybrid. Which kind of optimization do you apply?

Thanks

pperle commented 2 years ago

Hello @enrico310786,

In this code, I have set up a virtual screen just below the camera, without any rotation. For better accuracy, I recommend calibrating the position of the screen as described by Takahashi et al., who provide OpenCV and MATLAB implementations. In the paper you mention, the authors call this approach the "classical geometry-based mode". They also propose a "mirror-based calibration technique", but they use the method discussed by Rodrigues et al.
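For context, that calibration yields a rotation and translation of the screen relative to the camera, which can be turned into the screen plane used for gaze intersection. A minimal sketch, assuming a `plane_equation`-style helper that returns `[a, b, c, d]` for the plane n·x = d (this is an illustration, not the repo's exact code):

```python
import numpy as np

def plane_equation(rmat: np.ndarray, tmat: np.ndarray) -> np.ndarray:
    """Screen plane n . x = d in camera coordinates.

    `rmat` rotates the screen's axes into camera axes, `tmat` is the
    screen origin expressed in camera coordinates.
    """
    n = rmat[:, 2]                 # screen normal = rotated z-axis
    origin = tmat.reshape(3)
    d = float(n @ origin)          # plane offset along the normal
    return np.array([*n, d])

# identity rotation, screen origin at the camera -> the plane z = 0
plane = plane_equation(np.eye(3), np.zeros((3, 1)))
plane_w = plane[0:3]
plane_b = plane[3]
```

With a real calibration you would pass the estimated rotation matrix and translation vector instead of the identity and zero placeholders.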

idrovi commented 2 years ago

Hi, in the case of calibrating the position of the screen, how is it used in your code? Is a modification of the method `plane_equation` required in case of rotation between the screen and the camera?

```python
# TODO load calibrated screen position
plane = plane_equation(np.eye(3), np.asarray([[0], [0], [0]]))
plane_w = plane[0:3]
plane_b = plane[3]
```
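If the screen is rotated relative to the camera, `plane_equation` itself should not need changes; you would pass the calibrated rotation and translation in place of the identity/zero placeholders. A hedged sketch (the helper body, tilt angle, and offsets below are made-up stand-ins for a real calibration result, not values from this repo):

```python
import numpy as np

def plane_equation(rmat: np.ndarray, tmat: np.ndarray) -> np.ndarray:
    # sketch of a plane_equation-style helper: screen plane n . x = d
    n = rmat[:, 2]
    d = float(n @ tmat.reshape(3))
    return np.array([*n, d])

# hypothetical calibration result: screen tilted 10 degrees about the
# camera x-axis, origin 5 cm below and 2 cm in front of the camera (meters)
theta = np.radians(10.0)
rmat = np.array([[1.0, 0.0, 0.0],
                 [0.0, np.cos(theta), -np.sin(theta)],
                 [0.0, np.sin(theta), np.cos(theta)]])
tmat = np.asarray([[0.0], [0.05], [0.02]])

plane = plane_equation(rmat, tmat)  # calibrated pose instead of eye(3)/zeros
plane_w = plane[0:3]   # plane normal
plane_b = plane[3]     # plane offset
```

The rest of the pipeline (intersecting the gaze ray with `plane_w`, `plane_b`) should then work unchanged, since only the plane parameters differ.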

Thanks

hshahid commented 11 months ago

@enrico310786 Did you manage to figure this out? I'm having the same issue