Open 1j01 opened 7 months ago
Currently, the user's face is detected in order to place tracking points, optical flow is then used to track these points on your face, and only the optical flow influences the final cursor movement.

In this existing scheme, what if we auto-calibrated based on the head tracker's face orientation, perhaps making adjustments only during otherwise-detected movement? Like a perpetual motion machine's secret magnetic "kick", or a magician's sleight of hand, it would subtly adjust the mouse position so that it ends up at the edges of the screen when the head is tilted a certain amount, and centered when facing forward.

Drawbacks:

Formula: Assuming the tilt can be normalized, or assuming the camera, screen, and face are directly in line with each other, a formula for the other part might be fairly simple, something like

    adjusted_x = x + abs(delta_x) * auto_calibrate_strength * (x_implied_by_tilt - x)

where `x` is the latest x position from optical flow tracking, `delta_x = x - x_previous`, and `x_implied_by_tilt` is the x position on the screen that would be mapped purely from the head tilt.

It's probably a little more complicated; maybe the `delta_x` factor should be raised to some power, or clamped, etc. On the plus side, it could be made so that with `auto_calibrate_strength = 1` it purely uses the head tilt, so a separate head tilt mode wouldn't be needed. (Again, I'm not sure about the `delta_x` part in regard to this.)

Maybe it should only make adjustments in the direction that you're already moving, i.e. only if `sign(delta_x) = sign(x_implied_by_tilt - x)`. I think that would help with the sleight of hand, although I'm still not sure if sleight of hand is desirable. If nothing else, it would be an interesting experiment!
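A minimal sketch of the idea, in Python for illustration (the function name, parameters, and the clamping of the `abs(delta_x)` factor are my assumptions, not the project's API; the `same_direction_only` flag implements the `sign(delta_x)` gating idea):

```python
def auto_calibrate_x(x, x_previous, x_implied_by_tilt, auto_calibrate_strength,
                     same_direction_only=False):
    """Nudge the optical-flow cursor x toward the tilt-implied x.

    The adjustment is scaled by how far the cursor just moved
    (abs(delta_x)), so it only happens during otherwise-detected
    movement. The factor is clamped to 1 (one of the tweaks the issue
    suggests) so that auto_calibrate_strength = 1 snaps straight to the
    pure head-tilt position while the cursor is moving.
    """
    delta_x = x - x_previous
    correction = x_implied_by_tilt - x
    if same_direction_only:
        # Only adjust in the direction the cursor is already moving,
        # i.e. when sign(delta_x) == sign(x_implied_by_tilt - x).
        if delta_x == 0 or (delta_x > 0) != (correction > 0):
            return x
    factor = min(abs(delta_x) * auto_calibrate_strength, 1.0)
    return x + factor * correction
```

For example, moving right with the head tilted right pulls the cursor toward the tilt-implied position, while with `same_direction_only=True` a leftward movement leaves a rightward correction unapplied; with no movement at all, `delta_x = 0` and the position is unchanged.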