GPUImageMotionDetector works great except for one thing: when you move the camera, it detects that all objects (the edges of objects) in the frame have moved, even though physically they haven't. I'm looking for a way to filter out this camera-induced motion and keep only the objects that really move.
I'm thinking of using a corner detection algorithm (for example, Harris Corner Detection) to build anchor points and analyze their movement across frames. Alternatively, I could use the gyroscope data to obtain the device's motion vector and filter out frame changes that match that vector.
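To make the anchor-point idea concrete, here is a minimal sketch (in plain Python, independent of GPUImage) of one way it could work: estimate the global camera motion as the per-axis median displacement of the tracked anchor points, then keep only points whose residual motion exceeds a threshold. The function name and threshold are hypothetical, and the displacements are assumed to come from whatever corner tracker you use.

```python
import statistics

def filter_camera_motion(displacements, threshold=2.0):
    """Given (dx, dy) displacements of tracked anchor points between two
    frames, estimate global camera motion as the per-axis median and return
    the indices of points whose residual motion exceeds `threshold` pixels.
    Hypothetical helper illustrating the anchor-point approach."""
    if not displacements:
        return []
    # The median is robust: if most of the frame moves uniformly because
    # the camera moved, the median captures that shared shift.
    mdx = statistics.median(dx for dx, _ in displacements)
    mdy = statistics.median(dy for _, dy in displacements)
    moving = []
    for i, (dx, dy) in enumerate(displacements):
        rx, ry = dx - mdx, dy - mdy
        if (rx * rx + ry * ry) ** 0.5 > threshold:
            moving.append(i)
    return moving
```

For example, if four points shift by (5, 0) because of a camera pan and one shifts by (12, 0), only the latter survives the filter. A gyroscope-based variant would replace the median with a pixel shift predicted from the rotation rate.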
These approaches seem a bit clumsy, so I would like to know how to do this in a simpler, more obvious way.
Note: I do not need high-quality motion recognition.