I tried to decouple recording-start from recording-stop (and motion detection disable/enable) by using token registrations as events that call some local functions. That doesn't fix the problem, even though it gets rid of the async lambda and exits `onDetect` without re-enabling motion detection. (Edit: I removed the `using` that disposed the CTS; Stephen Cleary says disposal is unnecessary as long as you ensure the token is cancelled.)
```csharp
// Prepare the "start recording" token, then run motion detection; onDetect simply
// cancels that token, which fires LocalStartRecording via its registration.
var startRecordingCTS = LocalPrepareToRecord();

await cam.WithMotionDetection(
        motionCircularBufferCaptureHandler,
        motionConfig,
        () => { startRecordingCTS.Cancel(); })
    .ProcessAsync(cam.Camera.VideoPort, motionCTS.Token);

// Creates a fresh CTS whose cancellation triggers LocalStartRecording.
CancellationTokenSource LocalPrepareToRecord()
{
    var recordCTS = new CancellationTokenSource();
    recordCTS.Token.Register(LocalStartRecording);
    return recordCTS;
}

// Disables detection, starts the h.264 recording, and ends it after recordSeconds
// (or when the overall motion token is cancelled).
async void LocalStartRecording()
{
    motionCircularBufferCaptureHandler.DisableMotionDetection();
    vidCaptureHandler.StartRecording();
    vidEncoder.RequestIFrame();

    var recordingCTS = new CancellationTokenSource();
    recordingCTS.Token.Register(LocalEndRecording);
    recordingCTS.CancelAfter(recordSeconds * 1000);

    await Task.WhenAny(new Task[]
    {
        motionCTS.Token.AsTask(),
        recordingCTS.Token.AsTask()
    });

    if (!recordingCTS.IsCancellationRequested) recordingCTS.Cancel();
}

// Re-arms detection with a new "start recording" token and closes out the video file.
void LocalEndRecording()
{
    startRecordingCTS = LocalPrepareToRecord();
    motionCircularBufferCaptureHandler.EnableMotionDetection();
    vidCaptureHandler.StopRecording();
    vidCaptureHandler.Split();
}
```
I appreciate the issue you've got here, and it slipped my mind when developing this piece of functionality. I'd like to reiterate that the motion detection work I did is very experimental and by no means production-ready code; it was simply a learning exercise which I'm sure can be heavily improved. Having said that, I'd rather not over-complicate things and suggest the following instead:
1) Modify the `CircularBufferCaptureHandler` class, adding another constructor which accepts just a `bufferSize` parameter and passes empty strings through to the base class.
2) Edit the `FileStreamCaptureHandler` class to detect when empty strings have been passed through, so it acts as a dummy capture handler and does nothing when data is to be processed. A warning can be logged to notify the user that they haven't configured the capture handler with directory/filename data and that it will therefore act as a dummy handler. (A rough sketch of both changes follows this list.)
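Very roughly, I'm picturing something like the sketch below. To be clear, this is a standalone toy showing the shape of the change, not the actual MMALSharp classes; the real `CircularBufferCaptureHandler` and `FileStreamCaptureHandler` constructors and processing methods look different.

```csharp
// Toy sketch only -- illustrates the "dummy handler" idea, not the real MMALSharp types.
using System;

public class FileStreamCaptureHandlerSketch
{
    private readonly bool _isDummy;

    public FileStreamCaptureHandlerSketch(string directory, string extension)
    {
        // Empty strings mean "no file output configured": behave as a dummy handler.
        _isDummy = string.IsNullOrEmpty(directory) || string.IsNullOrEmpty(extension);

        if (_isDummy)
        {
            Console.WriteLine("Warning: no directory/extension configured; this handler will discard data.");
        }
    }

    public virtual void Process(byte[] data)
    {
        if (_isDummy)
        {
            return; // dummy handler: don't write anything to disk
        }

        // ...the normal file-writing path would live here...
    }
}

public class CircularBufferCaptureHandlerSketch : FileStreamCaptureHandlerSketch
{
    public int BufferSize { get; }

    // New constructor accepting only a buffer size; empty strings flow to the base class.
    public CircularBufferCaptureHandlerSketch(int bufferSize)
        : base(string.Empty, string.Empty)
    {
        BufferSize = bufferSize;
    }
}
```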
Once this is done, I think that in your scenario the capture handler working with the raw stream should behave correctly and only the handler responsible for the H.264 stream will actually record anything. Of course I'd need to test this but in my head it sounds logical.
What do you think?
Is it perhaps bad to call `DisableMotionDetection` and `EnableMotionDetection` within `onDetect`?
Do you mean in case the capture handler is out of scope and has been disposed? I guess that depends on how you've initialised your variables and handled their lifetimes. I don't think the example in the wiki should have an issue with that, but I'm sure you could break it if you wanted to. Happy to hear alternative solutions.
Thanks, I'll try out those suggested changes. Maybe Sunday. I was already planning to learn more deeply about the motion capture feature so that looks like an easy place to start digging.
What I meant about Disable/Enable was just speculation that maybe something expected mocap to be disabled after `onDetect` runs. Just random guesswork on my part.
Ian, I think I recall reading that you were considering the use of motion vectors as part of the motion detection process. While modifying `VideoStreamCaptureHandler`, I was trying to decide whether there was any benefit to capturing vectors when the video stream itself was not to be captured, which reminded me of your comment (assuming it actually was you).
After doing some reading on the subject, I found an OpenCV discussion indicating that the processing overhead was too great to be of any use to their existing motion detection system. Apparently data quality is relatively poor for this purpose since it is oriented towards maximizing compression rather than "true" motion detection. Artifacts such as camera-shake are motion in the compression sense but not in the physical object sense. They did say it could potentially speed up early-fail (e.g. no motion in a given area), however.
Figured I'd mention it, in case you were on the fence about that approach. With regards to the handler, I still don't see value in actually storing it to a file without storing the video data as well.
Hi Jon. I was intending to allow the user to choose whether they want to use frame difference or motion vectors for motion detection. I had hoped that using motion vectors, seeing as they're output by the camera, might actually have a lower processing overhead than frame difference. Could you please provide the discussion in question? It's relatively low on my priority list at the moment and is scheduled for v1.0.
That particular discussion wasn't very technical, but here it is. I thought each of the answers was interesting.
Regarding recording the raw stream, the changes you suggested went smoothly, but I'm not going to PR it for now; it just leads to the other problem I mentioned -- `onDetect` firing over and over again. I will dig into the motion detection code to try to figure out what's causing that.
I think I found the problem with frame diff motion detection. ~When you disable and re-enable detection, it has the full frame from whenever you disabled it, and I think that's sometimes different enough (maybe lighting etc) to trigger detection again. This also explains what I saw in some tests where there were occasional slight delays before detection fired incorrectly (with no change in scene, again the camera is pointing at a blank wall), whereas other times (most of the time) it fires off immediately. Since the code then disables detection again, I think that original frame never goes away.~
When I change the circular buffer's `DisableMotionDetection` method to call `((FrameDiffAnalyser)_analyser).ResetAnalyser()`, so that the frame diff class has to wait for a new full frame, everything seems to work as expected.
It doesn't completely make sense to me that recording the raw buffer doesn't exhibit the same behavior but I haven't looked too closely at the recording side yet. Maybe that part makes more sense to you based on this finding?
I also wonder if it would help detection to immediately request an I-frame when detection is re-enabled?
Hmm, I see I'm working from some incorrect assumptions there. I don't think it ever updates that initial frame again, does it? The motion detection processes I've seen watch for changes over some (short) period of time. I think the problem with grabbing that initial frame and never updating it is that you can't leave motion detection running for very long -- looking out a window, for example, where shadows will change slowly throughout the day.
What I think actually happens is disabling motion detection interrupts building the comparison frame, and re-enabling it without a reset begins adding to that comparison buffer again with whatever random data the stream happens to be sending, and that mismatched buffer looks like a motion event. Same fix, reset that buffer.
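As a self-contained toy of what I mean (this is just the pattern, not MMALSharp's `FrameDiffAnalyser`):

```csharp
// Toy illustration of the failure mode: a half-built comparison frame that isn't
// cleared on disable gets stitched together with unrelated data on re-enable.
using System.Collections.Generic;

public class ComparisonBufferSketch
{
    private readonly List<byte> _workingFrame = new List<byte>();
    private bool _enabled = true;

    public void Disable()
    {
        _enabled = false;
        _workingFrame.Clear(); // the fix: throw away the partially built comparison frame
    }

    public void Enable() => _enabled = true;

    public void Append(byte[] chunk)
    {
        if (!_enabled) return;

        // Without the Clear() above, old chunks from before Disable() and new chunks
        // from after Enable() would be combined into one mismatched "frame".
        _workingFrame.AddRange(chunk);
    }
}
```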
The resetting of the test frame is done within the `CircularBufferCaptureHandler`'s `CheckRecordingProgress` method via the `ResetAnalyser` call. Is that not being called in your case? There may be a bug in this area if not.
I suspect what's going wrong is this:

```csharp
if (_recordingElapsed != null && _config != null)
```
When you're not recording anything, `_recordingElapsed` is going to be null, and therefore the test frame isn't being reset. I need to have a think about how to modify this to suit your needs.
Yes, that explains it!
I was already planning to propose a test-frame refresh duration for `MotionConfig`. I'd be happy to implement and PR that with my other no-raw-recording changes, if you like. A `Stopwatch` object is efficient/cheap; I could add another one to periodically reset the test frame when detection is active, completely independent of the recording logic.
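Something along these lines, as a standalone sketch (the `TestFrameInterval`-style setting and all the names below are hypothetical, not the real `MotionConfig` or frame-diff code):

```csharp
// Standalone sketch of a periodic test-frame refresh driven by a Stopwatch.
// The interval setting and comparison logic are illustrative only, not MMALSharp code.
using System;
using System.Diagnostics;

public class FrameDiffRefreshSketch
{
    private readonly TimeSpan _testFrameInterval;
    private readonly Stopwatch _testFrameAge = new Stopwatch();
    private byte[] _testFrame;

    public FrameDiffRefreshSketch(TimeSpan testFrameInterval)
    {
        _testFrameInterval = testFrameInterval;
    }

    // Called once per completed full frame while detection is active.
    public bool IsMotion(byte[] fullFrame, int threshold)
    {
        if (_testFrame == null || _testFrameAge.Elapsed >= _testFrameInterval)
        {
            // Take (or periodically replace) the test frame, independent of recording.
            _testFrame = fullFrame;
            _testFrameAge.Restart();
            return false;
        }

        return CountDifferences(fullFrame, _testFrame) > threshold;
    }

    private static int CountDifferences(byte[] a, byte[] b)
    {
        int diff = 0;
        int length = Math.Min(a.Length, b.Length);
        for (int i = 0; i < length; i++)
        {
            if (Math.Abs(a[i] - b[i]) > 50) diff++; // crude per-byte difference check
        }
        return diff;
    }
}
```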
Yes that would be great, thank you. I would be interested to know how the motion detection stuff works for you in practice too. Maybe once you've implemented these changes it may be more suitable to scenarios where lighting changes more frequently.
I expect it to work well. I played around without any recording at all, just walking in and out of view in my office with detection messages dumped to the screen, and it seemed to do just fine.
The challenge is the outdoors, of course.
For about ten years we've had 16 IP cameras and a PC-based DVR / NAS setup running here at the house. That's what I'm looking to replace, so I certainly have a good basis for comparison. Assuming you don't object to the suggestion in that other thread, I want to look into adding a masking overlay, that'll be essential to real-world usage. All but two of our IP cameras face the outdoors (or are outdoors) and trees, bushes, traffic -- all those things must be masked.
But one thing at a time.
I've got a few things to tweak and test, but it's generally working. I set it up to "steal" the latest full frame rather than taking the overkill approach of calling reset. I like the way that frame buffer works (`FrameAnalyser`). Simple and effective.
```
2020-07-23 15:30:39.604 -04:00 [DBG] Have full test frame.
2020-07-23 15:30:39.604 -04:00 [DBG] Clearing frame
2020-07-23 15:30:39.605 -04:00 [DBG] Have full frame, updating test frame.
```
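Loosely, the "steal the latest frame" idea looks like this toy version (the field and method names are made up, not the ones in `FrameAnalyser`):

```csharp
// Toy version: instead of clearing everything and waiting for a brand-new frame,
// the most recently completed frame is promoted to be the new test frame.
public class StealLatestFrameSketch
{
    private byte[] _testFrame;        // "Have full test frame."
    private byte[] _latestFullFrame;  // most recently completed frame from the stream

    public void OnFullFrameCompleted(byte[] frame)
    {
        _latestFullFrame = frame;

        if (_testFrame == null)
        {
            _testFrame = frame;
        }
    }

    // Called when detection is re-enabled (or the test frame is considered stale).
    public void RefreshTestFrame()
    {
        if (_latestFullFrame != null)
        {
            _testFrame = _latestFullFrame; // "Have full frame, updating test frame."
        }
    }

    public byte[] CurrentTestFrame => _testFrame;
}
```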
The wiki entry for motion capture demonstrates saving both the h.264 stream and the large raw motion detection stream. That's working normally for me. Since I don't want the raw data saved, I removed the raw recording start/stop/split commands. However, recording duration is controlled by the `MotionConfig.RecordDuration` property, which only applies to the raw stream. Without recording the raw stream, the h.264 stream records forever; the `onStopDetect` callback is never executed.

Although it isn't how I'd implement motion capture recording "for real", for the sake of experimentation I changed the `onDetect` callback into an async lambda. This way I can await token timeouts to control the h.264 recording time. It's in the `using` block in the middle of the code below. All the handler/port setup before that code is straight out of the wiki.

However, after the first motion event occurs and recording ends, `onDetect` is always fired again immediately (so it records again, then it fires again immediately, and so on). The camera is viewing a very static scene (a wall in my office); the only initial motion trigger is me waving my hand in view of the camera.

Is it perhaps bad to call `DisableMotionDetection` and `EnableMotionDetection` within `onDetect`?
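The async-lambda experiment was roughly the shape below. This is a reconstruction for illustration only, not the actual code referenced above; it reuses the handler, encoder, and token names from the snippet at the top of this thread, and the timing/cleanup details are assumptions.

```csharp
// Rough reconstruction of the async-lambda onDetect experiment (illustrative only).
await cam.WithMotionDetection(
        motionCircularBufferCaptureHandler,
        motionConfig,
        async () =>
        {
            motionCircularBufferCaptureHandler.DisableMotionDetection();
            vidCaptureHandler.StartRecording();
            vidEncoder.RequestIFrame();

            // Let the recording run for recordSeconds (or until the app-level token fires).
            using (var recordingCTS = CancellationTokenSource.CreateLinkedTokenSource(motionCTS.Token))
            {
                recordingCTS.CancelAfter(recordSeconds * 1000);
                try
                {
                    await Task.Delay(Timeout.Infinite, recordingCTS.Token);
                }
                catch (OperationCanceledException)
                {
                    // expected: the timeout (or shutdown) ends the recording window
                }
            }

            vidCaptureHandler.StopRecording();
            vidCaptureHandler.Split();
            motionCircularBufferCaptureHandler.EnableMotionDetection();
        })
    .ProcessAsync(cam.Camera.VideoPort, motionCTS.Token);
```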