Using the algorithm view is necessary to adjust the pupil intensity threshold.
I think there would be added value in returning the algorithm debug info as objects rather than embedding it in a color image (a rough sketch of such an object follows below).
The advantages would be:
- avoid having to convert to RGB when the native frame is not RGB (grayscale, YUV, ...); for instance, just using `np.tile` to convert our gray input to RGB makes a 250 fps stream drop dramatically, causing frame loss/lag/overflow (see the conversion sketch after this list). (Numpy stride tricks can't be used here because the array needs to be C-contiguous.)
- display the debug info in Pupil by drawing over the captured image, which looks better when the window is scaled and is likely faster than rasterizing on the CPU.
- allow an external program to analyze the debug info.
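
To illustrate the conversion cost, here is a minimal sketch (the 400x400 frame size is an assumption, not from the original report): `np.tile` produces a C-contiguous result but pays for it with a full copy per frame, while the zero-copy alternative `np.broadcast_to` is not C-contiguous and so gets rejected wherever a C-contiguous buffer is required.

```python
import numpy as np

# Assumed 400x400 8-bit gray eye frame.
gray = np.empty((400, 400), dtype=np.uint8)

# np.tile allocates and copies a full 3-channel frame on every
# capture, which is what drags a 250 fps stream down.
rgb_copy = np.tile(gray[:, :, np.newaxis], (1, 1, 3))
assert rgb_copy.flags['C_CONTIGUOUS']  # contiguous, but paid for with a copy

# The zero-copy trick fakes the third channel with a 0 stride,
# so the result is NOT C-contiguous and can't be handed to code
# that demands a C-contiguous buffer.
rgb_view = np.broadcast_to(gray[:, :, np.newaxis], (400, 400, 3))
assert not rgb_view.flags['C_CONTIGUOUS']
```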
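To make the proposal concrete, here is a rough sketch of what a debug object could carry; the name `PupilDebugInfo` and all field names are purely hypothetical, not an existing Pupil API, and are just guesses at what the algorithm view currently rasterizes into the color image:

```python
from typing import List, NamedTuple, Tuple

import numpy as np

class PupilDebugInfo(NamedTuple):
    # Hypothetical container; fields are illustrative only.
    pupil_ellipse: Tuple[float, float, float, float, float]  # cx, cy, major, minor, angle
    intensity_threshold: int              # the threshold being tuned via the view
    candidate_contours: List[np.ndarray]  # raw contour point arrays considered by the detector
```

The UI (or an external tool) could then draw these primitives over the untouched captured frame instead of decoding them back out of pixels.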
WDYT?