Feature Request: Calibration library #2598

Open lowlyocean opened 2 years ago

lowlyocean commented 2 years ago

When changing settings, there's no way to replay past footage and see how the new settings would have responded to it. Without that, it's possible to solve one problem but introduce a different one.

It would be great to be able to save off sections of video and tag them as "expecting motion detection" or "not expecting" into a calibration library. Each video might be a different circumstance: sunny day with clouds passing over reflective surfaces, or night-time with insects flying around the camera's IR LEDs, or windy day with heavy rainfall, and some footage of a person skulking about.

As the user changes a setting, it'd be tested against the calibration library's footage to confirm whether the change has the tagged effect. The "calibration status" could then be shown as a percentage of tests passed, or as a series of green/red lights, one per test.

And if you've already got a calibration library like this, then it seems feasible to have a button which automatically adjusts settings and converges on values that match the expectations tagged in the calibration library (effectively, the library becomes training data).
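
To make it concrete, here's a rough sketch of what the library entries and that "calibration status" could look like on the Python side (just an illustration; none of these names exist in motionEye today):

```python
from dataclasses import dataclass

@dataclass
class TestClip:
    """A saved video tagged with the outcome we expect from motion detection."""
    path: str             # path to the saved clip
    expect_motion: bool   # True = "expecting motion detection", False = "not expecting"
    note: str = ""        # e.g. "windy day with heavy rainfall"

@dataclass
class TestResult:
    clip: TestClip
    detected: bool        # what motion actually reported when the clip was replayed

def calibration_status(results):
    """Percentage of clips whose outcome matched their tag (the green/red lights)."""
    if not results:
        return 0.0
    passed = sum(1 for r in results if r.detected == r.clip.expect_motion)
    return 100.0 * passed / len(results)
```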

zagrim commented 2 years ago

I agree that something like that would be just awesome.

I'm sorry, but I'm going to be depressing: without some pattern/object recognition engine, that kind of thing - in my opinion - couldn't work nearly well enough to make it worth the effort. MotionEye can't work miracles given the limits Motion has (not being able to tell a person moving in the yard in darkness from moths circling the camera lens is an example of that), but it works pretty amazingly with pretty limited computing resources, whereas object recognition takes a desktop/laptop to perform even adequately (and still not in real time, which Motion does). The thing that IMO makes it impossible to do what you ask is that the algorithms used by Motion just can't distinguish between things that produce an equal amount of pixel difference from frame to frame. I recommend enabling debug media files to see what Motion "sees".

lowlyocean commented 2 years ago

I've been using debug media files, and they're helpful for solving one problem. But when you do it a second time (to solve a different problem), you might have "un-solved" the first one. You'd only find out after the fact, and it's a lot of unneeded back-and-forth.

I agree the auto-calibration button might be too much effort for how simple motion's detection is. But, at the very least, I think there is still incredible value in marking past videos for a camera as "test points" that you can re-verify after you manually adjust some setting. That way you don't have the back-and-forth I describe above. Do you agree?

zagrim commented 2 years ago

I agree that some ability to test the config against known samples might make it easier to tune the settings. That would require, I guess, having some means to feed the samples back to Motion via a video stream, which surely is technically possible. The tests would perhaps be best run using a separate motion process, to make it safer to generate and switch configs from ME code? One big question is whether the test results could somehow be automatically collected, so that the user could get a report saying "expected sample X to trigger a motion event, but it didn't" and "expected sample Y to not trigger a motion event, but it did". If that ended up being too hard technically, the users would need to check the results themselves, which would probably be a bit tedious... :thinking:

lowlyocean commented 2 years ago

So to summarize, it sounds like these would be the next steps?

  1. Confirm how to replay previously saved-off video back through motion. Might require an enhancement on motion's side
  2. Figure out how to extract from the motion process whether or not a video segment contains a detection. If it involves fragile things like pattern matching a log, then an enhancement to motion might be needed to make that inter-process communication with motioneye more robust
  3. Add some entry to the per-camera config that tracks which video files are marked in the UI as "test clips." Allow marking/unmarking videos from the screen that already lets you watch saved videos. The mark should also record whether the clip is expected to trigger a detection or not. The test clips should be excluded from any regularly scheduled cleanup/deletion
  4. Add a "Test Settings" button in the UI that would be next to "Apply Settings". On the backend/python, have it launch a new motion process for each test clip, using the hypothetical new config.
  5. Collect the results from each spawned motion process and show them in a new panel that pops up once the tests started by the "Test Settings" button finish. While the tests run, the dialog would just show a progress meter or a spinner. (A rough sketch of what steps 4-5 could look like follows this list.)
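
Very roughly, I imagine steps 4-5 ending up as something like this on the Python side (everything here is made up for illustration except motion's `-c` flag and the existing `target_dir` option; the two placeholder functions are the open questions from steps 1-2):

```python
import subprocess
import tempfile
import time
from pathlib import Path

def replay_clip_into_stream(clip_path):
    """Placeholder for step 1: feed the saved clip back to motion as a live
    video stream (see the streaming ideas below) - not implemented here."""
    raise NotImplementedError

def clip_triggered_detection(workdir):
    """Placeholder for step 2: decide whether this motion run detected anything
    (see the signalling ideas below) - not implemented here."""
    raise NotImplementedError

def test_settings(test_clips, test_config_text):
    """For each tagged clip, run a throwaway motion process with the hypothetical
    new config and compare the outcome against the clip's tag."""
    results = []
    for path, expect_motion in test_clips:            # (clip path, expected outcome) pairs
        workdir = tempfile.mkdtemp(prefix="motioneye-test-")
        config = Path(workdir) / "motion-test.conf"
        # target_dir is motion's existing option for where output files go
        config.write_text(test_config_text + f"\ntarget_dir {workdir}\n")
        proc = subprocess.Popen(["motion", "-c", str(config)])   # -c: use this config file
        try:
            replay_clip_into_stream(path)
            time.sleep(30)                             # crude: give motion time to process
            detected = clip_triggered_detection(workdir)
        finally:
            proc.terminate()
            proc.wait()
        results.append((path, expect_motion, detected))
    return results
```

The new panel would then just render `results` as the green/red list plus a percentage.
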
zagrim commented 2 years ago

For step 1, one possibility could be to use software that allows streaming video from the command line (e.g. mjpg-streamer, vlc), which would make it possible to do that part rather easily from Python code. Availability in the official Raspbian repos might be an issue, though. Still, I'd prefer this over trying to get the Motion project to implement a streaming server in motion (which they might not see as a good idea, although I have no idea how open they are to that kind of request).
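
E.g. a completely untested sketch of the vlc variant (the port, the sout options and the assumption that Motion would accept the resulting stream via netcam_url are all guesses):

```python
import subprocess

def replay_clip_into_stream(clip_path, port=8099):
    """Serve one saved clip over HTTP with VLC so that a test motion process
    could point its netcam_url at http://localhost:<port>/ (untested guess)."""
    return subprocess.Popen([
        "cvlc", clip_path,
        "--sout", f"#standard{{access=http,mux=ts,dst=:{port}/}}",
        "--play-and-exit",   # quit once the clip has been streamed through
    ])
```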

For step 2, that might also be possible by rewriting the output file names in the test config for each test clip and then observing the output paths. Not as nice as getting a clear signal from the detection, but still an option. Another, maybe better, option could be to have the test config contain a command to run on a motion event (we don't want to copy the user's normal webhooks/commands/etc. into the test config anyway, so this should be perfectly OK) which would somehow indicate the result to the test runner (Python code).
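
Something like this is what I mean, covering both variants (the marker file name and the movie extensions are made up; target_dir and on_event_start are existing motion options):

```python
from pathlib import Path

def make_test_config(base_config_text, workdir):
    """Build the per-clip test config: keep output in a scratch dir and have
    motion drop a marker file the moment an event starts."""
    marker = Path(workdir) / "event_started"
    return (
        base_config_text
        + f"\ntarget_dir {workdir}"               # keep test output away from real recordings
        + f"\non_event_start touch {marker}\n"    # motion runs this when a motion event begins
    )

def clip_triggered_detection(workdir):
    """Detection counts if the event command fired, or if motion wrote any movie
    file into the scratch target_dir ("observe the output paths")."""
    wd = Path(workdir)
    if (wd / "event_started").exists():
        return True
    return any(wd.glob("*.avi")) or any(wd.glob("*.mkv")) or any(wd.glob("*.mp4"))
```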

As for the test clip library management, I don't have any more specific ideas right now.

One thing to note is that there are config settings that cannot be tested properly in a quick way, nor with a bunch of samples. At least the smart mask and automatic threshold tuning are like that. Smart mask development can take anywhere from several minutes to much longer (the docs don't say anything specific), and it requires a constant feed of typical video, not a random selection of special cases. I guess there's no really easy way to test those settings.