guardianproject / haven

Haven is for people who need a way to protect their personal spaces and possessions without compromising their own privacy, through an Android app and on-device sensors
https://guardianproject.github.io/haven/
GNU General Public License v3.0

Motion detection sensitivity can be confusing #303

Open · harlo opened this issue 6 years ago

harlo commented 6 years ago

Some testers were not sure how to set the motion detection sensitivity threshold. Given that there's no easy-to-grasp unit of measurement, perhaps the scale should be "more to less" rather than a number.
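
For example, the "more to less" scale could simply map a plain slider value onto whatever pixel-count threshold the detector actually uses, so users never see a raw number. A rough sketch in Kotlin, with an entirely made-up numeric range (not Haven's actual scale):

```kotlin
// Hypothetical mapping from a "less ... more" sensitivity slider (0..100)
// to an underlying changed-pixel threshold. The 100..10_000 range is
// illustrative only, not Haven's real settings.
fun sliderToPixelThreshold(sliderValue: Int): Int {
    val minThreshold = 100      // most sensitive end: few changed pixels trigger
    val maxThreshold = 10_000   // least sensitive end: many pixels must change
    val t = sliderValue.coerceIn(0, 100) / 100.0
    // Higher slider value = "more" sensitivity = lower pixel threshold.
    return (maxThreshold - t * (maxThreshold - minThreshold)).toInt()
}
```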

deviantollam commented 6 years ago

See my post in issue #275: https://github.com/guardianproject/haven/issues/275#issuecomment-415629688

TL;DR: during the preview screen, a small numerical overlay showing the current count of "changed pixels" (if that's the right term) that the camera is registering in that moment would help users test and establish a baseline when configuring sensitivity.
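
Something like this, as a rough sketch of the overlay (the listener method and field names here are placeholders, not Haven's actual API):

```kotlin
import android.graphics.Color
import android.widget.TextView

// Hypothetical preview overlay: show the per-frame changed-pixel count next
// to the configured threshold so users can watch the numbers while tuning.
class PreviewOverlay(private val countView: TextView) {

    // Assumed to be called from the motion-analysis callback on each frame.
    fun onFrameAnalyzed(changedPixels: Int, threshold: Int) {
        countView.post {
            countView.text = "changed px: $changedPixels / threshold: $threshold"
            // Tint the label so it is obvious when a frame would trigger.
            countView.setTextColor(
                if (changedPixels >= threshold) Color.RED else Color.WHITE
            )
        }
    }
}
```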

deviantollam commented 6 years ago

I'm encountering more difficulty these days with the "camera sensitivity" setting.

It seems REALLY difficult to find a sweet spot; I'm having a whole Goldilocks thing going on here. Slightly too sensitive means the phone CONSTANTLY detects movement (even when there is none) and it records and alerts me constantly.

So I tweak it to be a little less sensitive, and then no motion is ever detected unless I get RIGHT up in the phone's face and move across the entire frame.

I'd love a little more insight into exactly how the Haven app detects changes in the camera's field of view. During the preview screens (when calibrating the sensitivity or preparing to start Haven running), users see "yellow pixels", which apparently indicate Haven registering a change in that pixel, yes? That makes me ask: is the detection really done pixel by pixel? That seems like a recipe for a lot of false positives.

I don't know how feasible it would be, but I'd love to know whether things would improve if the phone considered the camera's field of view in terms of 10x10 pixel blocks, and only counted "movement" in a given block once some percentage of the 100 pixels in that block changed state.
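
Roughly what I'm picturing, as a sketch only (the block size, per-pixel delta, and per-block fraction are all made-up parameters, not anything Haven currently exposes):

```kotlin
// Block-based detection sketch: split the frame into blockSize x blockSize
// cells and count a cell as "moved" only if a given fraction of its pixels
// changed. Names and defaults are illustrative, not Haven's code.
fun countChangedBlocks(
    previous: IntArray, current: IntArray,   // per-pixel luma, row-major
    width: Int, height: Int,
    blockSize: Int = 10,                     // 10x10 pixel blocks
    lumaDelta: Int = 25,                     // per-pixel change threshold
    blockFraction: Double = 0.3              // 30% of a block must change
): Int {
    var changedBlocks = 0
    var blockY = 0
    while (blockY < height) {
        var blockX = 0
        while (blockX < width) {
            val blockW = minOf(blockSize, width - blockX)
            val blockH = minOf(blockSize, height - blockY)
            var changedPixels = 0
            for (y in blockY until blockY + blockH) {
                for (x in blockX until blockX + blockW) {
                    val i = y * width + x
                    if (kotlin.math.abs(current[i] - previous[i]) > lumaDelta) {
                        changedPixels++
                    }
                }
            }
            if (changedPixels >= blockW * blockH * blockFraction) changedBlocks++
            blockX += blockSize
        }
        blockY += blockSize
    }
    return changedBlocks
}
```

An alert would then fire only when the number of changed blocks (rather than raw pixels) crosses a threshold, which should be far less noisy than isolated pixel flicker.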

I'm in a hotel testing the Haven app on a few phones, and it's proving REALLY difficult to get it to play nicely. :-/

n8fr8 commented 6 years ago

I think the update to the new Camera2 API also increased sensitivity, but we didn't provide enough controls to tamp it back down.

I do like the idea of increasing the block size for a region. Currently we compare pixel by pixel, and the threshold controls the number of pixels that need to show a significant change to trigger an alert.
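
In other words, the current logic is roughly this shape (the names and defaults below are illustrative, not the actual code):

```kotlin
// Sketch of pixel-count detection as described above: diff per-pixel luma
// against the previous frame and alert once enough pixels have changed.
// `lumaDelta` and `pixelCountThreshold` stand in for the real settings.
fun frameTriggersAlert(
    previous: IntArray,             // per-pixel luma of the last frame (0..255)
    current: IntArray,              // per-pixel luma of the new frame (0..255)
    lumaDelta: Int = 25,            // how much a single pixel must change
    pixelCountThreshold: Int = 500  // how many pixels must change to alert
): Boolean {
    var changed = 0
    for (i in current.indices) {
        if (kotlin.math.abs(current[i] - previous[i]) > lumaDelta) {
            changed++
            if (changed >= pixelCountThreshold) return true
        }
    }
    return false
}
```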

The other feature that would help with sensitivity would be the ability to block out or select certain regions of the view, so that you don't get triggered by a window or reflection if all you care about is the main door.
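
Conceptually that would just mean ignoring pixels outside the user-selected region before counting, something like this sketch (the mask would come from a user-drawn selection; none of this exists in Haven today):

```kotlin
// Hypothetical region mask applied before motion counting: pixels whose mask
// entry is false (say, a window or mirror) never contribute to the count.
fun countChangedPixelsInRegion(
    previous: IntArray,
    current: IntArray,
    mask: BooleanArray,     // true = pixel belongs to a watched region
    lumaDelta: Int = 25
): Int {
    var changed = 0
    for (i in current.indices) {
        if (mask[i] && kotlin.math.abs(current[i] - previous[i]) > lumaDelta) {
            changed++
        }
    }
    return changed
}
```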

deviantollam commented 6 years ago

Masking particular zones is a nice feature idea, but it's better suited to fixed-placement cameras, like ones mounted to walls, than to phones that move around.

I expect that making sensitivity tuning easier on the user would pay greater dividends as a first priority. It would also bring similar benefits to a fixed-placement setup, or to a situation where someone only wants to trigger on a particular door or similar target.