Closed spikeygg closed 1 week ago
Hi, few things:
- Please provide logs
- Stats is accessed via http://frigate.ip.add.ress:5000/api/stats
As far as the actual issue, you are using the same feed for both the detect and record roles, which makes this more interesting. In the screenshot, the video appears to be 6 (or maybe 8?) seconds long, which is even less than one segment (10 seconds). That tells me something is going wrong, as I don't believe Frigate does any trimming of the 10-second segments. Logs from an occurrence of this should definitely add some context.
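If it helps, the stats payload can be pulled and summarized with a short script. This is a sketch: the host is the same placeholder as above, and the field names (`camera_fps`, `skipped_fps`) are assumptions about the payload shape.

```python
import json
import urllib.request

STATS_URL = "http://frigate.ip.add.ress:5000/api/stats"  # placeholder host

def camera_summaries(stats):
    """Return (name, camera_fps, skipped_fps) for each camera-shaped entry."""
    rows = []
    for name, value in stats.items():
        if isinstance(value, dict) and "camera_fps" in value:
            rows.append((name, value["camera_fps"], value.get("skipped_fps", 0.0)))
    return rows

# Illustrative payload; a real run would instead use:
#   stats = json.load(urllib.request.urlopen(STATS_URL))
sample = {
    "detection_fps": 6.2,
    "out_the_back": {"camera_fps": 5.0, "process_fps": 5.0, "skipped_fps": 0.0},
}
print(camera_summaries(sample))
```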
Thanks for the quick response. I've updated the API data in the original ticket using the address you supplied. I also tracked down the Frigate log file and looked through it; I included all the log detail from 3-30-22 at 8:00 until now. To my surprise, there were no entries on 3-31-22, when the example event I reported was triggered by the system. I wonder if I need to increase the logging verbosity to see that? Should I add:
```yaml
logger:
  default: debug
```
to my config to see it? Any other suggestions?
I wouldn't turn on debug logging like that. There will be so much noise that you will never find the important messages.
Are the video segments always after the event when this happens? Does your RTMP feed get out of sync too?
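A narrower alternative is to raise only the recording-related logger while keeping the default level; this is a sketch, and `frigate.record` is an assumed module name based on the advanced-config docs:

```yaml
logger:
  default: info
  logs:
    frigate.record: debug
```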
> I wouldn't turn on debug logging like that. There will be so much noise that you will never find the important messages.
Do you recommend I increase the debugging to try to capture more detail when this occurs? If so, please suggest the yaml loggers I should increase and I'll keep an eye on it for instances. I'm looking at the docs (https://docs.frigate.video/configuration/advanced) and there are quite a few options but I don't know which one we should focus on for this.
> Are the video segments always after the event when this happens? Does your RTMP feed get out of sync too?
So far, I think all the times it has happened the video has been after the event -- I usually miss the action if the trigger was quick and gone.
I don't know how to tell if the RTMP feed gets out of sync (I need more detail on what that means). However, I also have issues where one event trigger may show up multiple times. I was thinking it was an unrelated issue, but it may stem from the same problem. I also have quite a few instances where a snapshot is captured but the recorded video clip reports "The media could not be loaded, either because the server or network failed or because the format is not supported."
When the event has duplicates, usually the second snapshot is a slightly different frame (just 2-4 seconds later). In the duplicate case, sometimes the video is exactly the same as the dupe, and sometimes only one of the dupes plays while the other gives me the "media could not be loaded" error. Occasionally, neither duplicate will play its video.
I don't know if there is any debug logging that will tell you more. The default logging settings will show warnings when segments are discarded as corrupt. If there were errors, they would already be showing up.
When viewing the live view in frigate, is there a significant delay or is that close to real time?
The time of the recordings is determined based on the timestamp in the filenames written to cache by ffmpeg. There isn't any other way for Frigate to know what time it was in the video other than when it received it. In your case, ffmpeg is reading a single stream, decoding it, sending it to Frigate as raw frames, and encoding it as h264 so it can be written to disk at the same time. Something must be getting delayed in one of those two places, and the file writes are getting out of sync with when frames are sent for processing.
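As a toy illustration (not Frigate's actual code) of how that would show up: if the encoder falls behind, the wall-clock time at which ffmpeg opens a segment file, and therefore the `-strftime` timestamp in its name, lags the frame it contains by the size of the backlog.

```python
from datetime import datetime, timedelta

# Toy model: the frame's true capture time vs. the wall-clock moment
# ffmpeg opens the segment file and stamps it via -strftime.
capture_time = datetime(2022, 4, 2, 10, 0, 0)  # when the frame happened
encode_delay = timedelta(seconds=6)            # hypothetical encoder backlog

segment_open_time = capture_time + encode_delay
filename = segment_open_time.strftime("%M.%S.mp4")
print(filename)  # the clip's name says 6 seconds later than the event
```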
> When viewing the live view in frigate, is there a significant delay or is that close to real time?
I just checked and all the cameras are between 0-4 seconds behind realtime. All cameras are wired via gigabit PoE except for one which is WiFi and surprisingly that WiFi camera was only 1 second behind realtime. They all sync their times via NTP nightly so they should all be very close to accurate.
> The time of the recordings is determined based on the timestamp in the filenames written to cache by ffmpeg. There isn't any other way for Frigate to know what time it was in the video other than when it received it. In your case, ffmpeg is reading a single stream, decoding it, sending it to Frigate as raw frames, and encoding it as h264 so it can be written to disk at the same time. Something must be getting delayed in one of those two places, and the file writes are getting out of sync with when frames are sent for processing.
Wow, that sounds complex but if there are two places at least it narrows it down. Is there a way I can manually test one or both of those to see if I can determine how/why?
Can you change your record config to retain all segments for the first 2 days to see if this still happens? I don't know if it's because the timing is off or if the segments are not being retained when they should.
```yaml
record:
  enabled: True
  retain:
    days: 2
    mode: all
  events:
    retain:
      default: 10
      mode: active_objects
```
> Can you change your record config to retain all segments for the first 2 days to see if this still happens?
I'm on it! Just updated the config.yml and restarted.
Looks like it's still occurring. :(
Here's a capture from 10:37:59 to 10:38:19 and the snapshot shows 10:38:06:
However, when I click play the first frame starts at 10:38:12:
The timeline for the video only shows 5 seconds for a 30s event which suggests that it's missing segments altogether. Can you look in the recordings directory for this event and see what files are in there? Also, do you have warning messages in the logs about segments being discarded?
> The timeline for the video only shows 5 seconds for a 30s event which suggests that it's missing segments altogether. Can you look in the recordings directory for this event and see what files are in there? Also, do you have warning messages in the logs about segments being discarded?
I see a few recordings that appear to have landed at 10:38AM on that camera feed; they are all about 27 seconds long:
The only log entries I see are like this:
```json
{"log":"[2022-04-01 21:58:20] ws4py INFO : Managing websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:36990]\n","stream":"stderr","time":"2022-04-02T03:58:20.412520347Z"}
{"log":"[2022-04-01 21:58:54] ws4py INFO : Terminating websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:36990]\n","stream":"stderr","time":"2022-04-02T03:58:54.980982969Z"}
{"log":"[2022-04-02 11:16:11] ws4py INFO : Managing websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:42834]\n","stream":"stderr","time":"2022-04-02T17:16:11.971450362Z"}
{"log":"[2022-04-02 11:17:48] ws4py INFO : Terminating websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:42834]\n","stream":"stderr","time":"2022-04-02T17:17:48.387128227Z"}
{"log":"[2022-04-02 13:20:03] ws4py INFO : Managing websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:56028]\n","stream":"stderr","time":"2022-04-02T19:20:03.308793164Z"}
{"log":"[2022-04-02 13:21:49] ws4py INFO : Terminating websocket [Local =\u003e 127.0.0.1:5002 | Remote =\u003e 127.0.0.1:56028]\n","stream":"stderr","time":"2022-04-02T19:21:49.50565259Z"}
```
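For completeness, a quick filter over the same docker json-file logs can confirm there are no WARNING or ERROR entries buried in the ws4py chatter. This is a sketch; the second sample line below is hypothetical, included only to show what a match would look like.

```python
import json

# Example docker json-file log lines: one real-looking ws4py INFO entry,
# and one HYPOTHETICAL warning line purely for illustration.
lines = [
    '{"log":"[2022-04-01 21:58:20] ws4py INFO : Managing websocket\\n","stream":"stderr","time":"t1"}',
    '{"log":"[2022-04-01 21:59:00] frigate.record WARNING : discarding corrupt segment\\n","stream":"stderr","time":"t2"}',
]

def warnings_only(raw_lines):
    """Keep only WARNING/ERROR entries from docker json-file log lines."""
    out = []
    for raw in raw_lines:
        entry = json.loads(raw)
        if "WARNING" in entry["log"] or "ERROR" in entry["log"]:
            out.append(entry["log"].strip())
    return out

print(warnings_only(lines))
```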
When you play back those files, how do the filenames line up with the timestamp from the camera? Are some parts missing? With retain mode set to `all`, you shouldn't have any missing video.
> When you play back those files, how do the filenames line up with the timestamp from the camera? Are some parts missing? With retain mode set to `all`, you shouldn't have any missing video.
Okay, when I play back the recordings I see this:
- `2022-04/02/10/out_the_back/37.46.mp4`: camera's timestamp range is 37:44-38:12
- `2022-04/02/10/out_the_back/38.14.mp4`: camera's timestamp range is 38:12-38:39
- `2022-04/02/10/out_the_back/38.41.mp4`: camera's timestamp range is 38:40-39:07
There might be some split seconds dropped between recordings, but overall it looks pretty seamless and complete, at least from the recordings-directory standpoint.
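A quick sketch of that comparison, using the filename minutes/seconds and the camera-overlay start times listed above (pure arithmetic, no Frigate internals):

```python
# Segment filenames (MM.SS.mp4, written via ffmpeg's -strftime naming)
# paired with the camera-overlay time at each clip's first frame,
# expressed in seconds past 10:00.
segments = [
    ("37.46.mp4", 37 * 60 + 44),  # overlay showed 37:44
    ("38.14.mp4", 38 * 60 + 12),  # overlay showed 38:12
    ("38.41.mp4", 38 * 60 + 40),  # overlay showed 38:40
]

def filename_offset(name, overlay_start):
    """Seconds by which the filename timestamp leads the on-screen time."""
    minutes, seconds = name[:-len(".mp4")].split(".")
    file_start = int(minutes) * 60 + int(seconds)
    return file_start - overlay_start

offsets = [filename_offset(n, t) for n, t in segments]
print(offsets)  # each filename runs only a second or two ahead of the overlay
```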
That seems like a reasonable difference, so I think the problem must be elsewhere. I wonder if there is some kind of error when reading the duration of these files. Do you mind uploading to google drive and sharing with blake@frigate.video?
> That seems like a reasonable difference, so I think the problem must be elsewhere. I wonder if there is some kind of error when reading the duration of these files. Do you mind uploading to google drive and sharing with blake@frigate.video?
Done, you should have gotten a notification email or something. Thanks for taking a look! Lemme know if I can help with anything else.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@spikeygg Have you found any solution for this? I seem to have the same issue, snapshots and clips are way out of sync.
Same here after moving to a less beefy server, but CPU isn't an issue.
Same here, but in my case it is linked to the output_args that re-encode an MJPEG stream to H.264:
```yaml
output_args:
  record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -an
  rtmp: -c:v libx264 -an -f flv
```
Without that, clips are ok (but not playable in UI). With it, clips are "after" the motion event :(
Try these args to tell ffmpeg to prioritize realtime encoding:
```yaml
output_args:
  record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v libx264 -tune zerolatency -preset ultrafast -an
  rtmp: -c:v libx264 -tune zerolatency -preset ultrafast -an -f flv
```
Describe the problem you are having
I've been using Frigate for almost a year now. I really like that Nvidia CUDA is supported via docker container (I use that one). However, I've noticed that occasionally an event will be caught and the snapshot shows the trigger (like a person, cat or dog) but when I view the recorded video clip it doesn't contain those frames. Often, the event was completely missed! This happens pretty regularly on my setup. Here's an example.
Snapshot shows "16:44:51":
First frame of video shows "16:44:58":
Last frame of video shows "16:45:05":
Version
0.10.1-83481AF
Frigate config file
Relevant log output
FFprobe output from your camera
Frigate stats
Operating system
Other Linux
Install method
Docker Compose
Coral version
USB
Network connection
Wired
Camera make and model
reolink
Any other information that may be helpful
No response