I'm helping someone who's trying to run a simple eye-tracking experiment using PyGaze with a Tobii TX300. They were confused by the data their experiment produced, since the recorded GazePointX and GazePointY values were often outside the 0.0-1.0 range the tracker should produce. (I'm not sure whether this is common across all makes of tracker, but Tobii records coordinates in a 0.0-1.0 range where (0.0, 0.0) represents the top-left of the screen and (1.0, 1.0) the bottom-right.)
Examining the libtobii source code, it seems someone already noticed an issue with how these values are calculated and recorded it in a comment, but wasn't confident enough to apply the fix. Currently, when there are valid samples from both eyes, GazePointX and GazePointY are calculated as:
# if we have both samples, use both samples
else:
# shouldn't these additions be divided by 2?
ave = (g.LeftGazePoint2D.x + g.RightGazePoint2D.x,
       g.LeftGazePoint2D.y + g.RightGazePoint2D.y)
but I agree with the comment that they should instead be:
# if we have both samples, use both samples
else:
ave = ((g.LeftGazePoint2D.x + g.RightGazePoint2D.x) / 2.0,
(g.LeftGazePoint2D.y + g.RightGazePoint2D.y) / 2.0)
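As a quick sanity check (a minimal sketch using made-up gaze values, not real tracker output), summing the two eyes' normalized coordinates can push the result outside the 0.0-1.0 range, while averaging keeps it in range:

```python
# Hypothetical binocular sample, both coordinates normalized to 0.0-1.0
left = (0.5, 0.5)    # left-eye gaze point
right = (0.75, 0.25)  # right-eye gaze point

# Current libtobii behaviour: a plain sum, which can exceed 1.0
summed = (left[0] + right[0], left[1] + right[1])

# Proposed fix: average the two eyes, which stays within 0.0-1.0
averaged = ((left[0] + right[0]) / 2.0, (left[1] + right[1]) / 2.0)

print(summed)    # (1.25, 0.75) - x is outside the valid range
print(averaged)  # (0.625, 0.375) - a sensible mid-point between the eyes
```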
This makes the data from the current experiment look much more sensible. I've included a very small sample of (uncorrected) data; note the difference between the samples where both eyes are valid and the initial samples where only one eye is valid.