Closed euquiq closed 3 years ago
Hey @euquiq - have you done any further investigation into this? I've not had a chance yet.
The code shouldn't leak any snapshots. Even with your recent changes, it pulls a fresh snapshot every time from the API. This suggests an issue with SSS and not SynoAI, but I'll test in DSM 6 when I get a couple of hours free.
Nope, I mean: as I said, I cannot use "High" quality because apparently SSS starts choking on the snapshots and gives me, most of the time, the same snapshot, even when the motion detection events are 20 seconds apart. I really doubt this is a problem on the SynoAI side. I will raise a ticket with Synology Support and we'll see what they say about it. I will give them a link to this issue...
Update: They are asking to have remote access into my Synology.
Thank you for contacting Synology Support.
I understand you are having issues snapshot returning the same image through API.
Could you please provide us remote access, so I can escalate this case to our Tier 2 Team.
I will send you request in a separate response.
I have yet to decide if I am OK with this... Also, I would think it would be easy enough for them to try to reproduce on any DSM 7 unit.
I will keep this thread updated...
Enrique
Perhaps, but the Synology tech support team are pretty good. Personally, I wouldn't be averse to letting them have a poke around my NAS.
Ok, I gave them access, and the ticket has been escalated to "TIER 2". One wonders what they might do inside the NAS ... maybe they have a custom testing app, or something along those lines? Let's wait and see ...
Maybe they are actually interested in SynoAI and its performance :)
Update: I just gave Synology a fresh real case, which is available for them to study inside my NAS:
It just so happened that the Synology API went "south" not just for two motion-triggered snapshots, but for several: as my family was getting out of the house and into the car, taking the car out, etc., the motion lasted several minutes and triggered lots of snapshots (each one spaced 20 seconds apart) ... and ALL the images show the "first snapshot".
When I check the actual motion video saved on my Synology, the whole thing is just fine, so there is "no excuse" for the API not to grab the actual frame for each point in time. So I am quite convinced we caught a real bug inside the API.
Well, that's somewhat exciting news!
Update: They were not "entirely sure" the problem wasn't inside SynoAI, so they asked me to set up a second triggering event rule, but instead of "communicating the event to SynoAI", to choose "take Snapshots".
This would allow them to confirm that the problem exists "entirely inside the Synology Surveillance Station realm".
Done. Same problem: while the triggering event is set up to fire every 20 seconds, it just returned the same frame MOST OF THE TIME (not just twice, but several times in a row) ... I would say that only about 1 frame in 10 was actually "refreshed", showing the correct real-time snapshot.
So there you go ... there seems to be a (nasty) bug inside SSS, or alternatively something really weird going on with my camera / setup.
Let's see what they answer upon reviewing the requested "all-SSS scenario".
Excellent, well at least that confirms there's nothing we can do our end! Thanks for confirming that with them.
UPDATE!
The Snapshot function retrieves only I-FRAMES (the full frames that the camera sends over the stream).
This may NOT happen as frequently as one would think.
On DAHUA brand cameras, there is a setting called "SMART CODEC" which I had in the "ON" position ... and the devs told me that this setting extends the time between I-FRAMES even further.
I turned off "SMART CODEC" and voilà ... snapshot frames started to be consistent with the timestamp / real time.
This is an interesting lesson for me. I would not have guessed the I-FRAME limitation on the Snapshot API call, since I could see the video "moving" fluidly as if "the frames are there"; but the complexity behind video playback turns out to be a major thing, and it behaves differently when you ask for just a snapshot.
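To put rough numbers on that limitation (a back-of-the-envelope sketch with hypothetical GOP lengths; the real interval depends entirely on the camera's codec settings):

```python
# Rough estimate of how stale an I-frame-only snapshot can be.
# Assumption: the camera emits one I-frame every `gop_frames` frames
# (the GOP length); "Smart Codec"-style features stretch this interval.

def worst_case_snapshot_age(fps: float, gop_frames: int) -> float:
    """Seconds between I-frames = maximum staleness of the latest I-frame."""
    return gop_frames / fps

# A common fixed GOP of 2x the frame rate, at this camera's 9 fps:
print(worst_case_snapshot_age(fps=9, gop_frames=18))   # 2.0 seconds

# A bandwidth-saving codec stretching the GOP to ~10x (hypothetical):
print(worst_case_snapshot_age(fps=9, gop_frames=180))  # 20.0 seconds
```

With a GOP that long, the "latest I-frame" could be as old as the 20-second trigger interval itself, which would explain identical snapshots.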
Interesting, so the snapshot command actually uses the camera's functionality and not the Surveillance Station data?
I wouldn't know the details (they told me only what I posted here) ... but I would suspect not, simply because of how fast a snapshot is delivered; it really seems to be served from Synology's video storage.
Also, I would suspect they store the camera video as it comes: they get the famous I-frame, then the bunch of in-between frames, until another I-frame is sent ... The thing is, in playback mode the algorithm "just works" with the flow and you "see" whole frames just fine (that is the working basis of H.264 video).
BUT if you ask for "just a snapshot" (and really fast), Synology's API takes the "easy path": just retrieve the most recent I-frame.
Otherwise, it would need to do some heavy work: it would need to look back in time, find that I-frame, and instead of just returning it, "play the video" so as to reconstruct the sequence from THAT I-frame up to the point at which you asked for the snapshot. For a start, I would suspect this "honest" approach takes far more time than just fetching the I-frame and returning it.
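The reconstruction cost described above can be sketched with a toy model (not Synology's actual code): producing the frame at an arbitrary index means decoding everything since the preceding I-frame.

```python
# Toy model of H.264-style playback: a frame at index `target_index` can
# only be reconstructed by decoding every frame since the preceding
# I-frame. This is why returning "the latest I-frame" is so much cheaper
# than returning the true current frame.

def frames_to_decode(target_index: int, gop_frames: int) -> int:
    """Number of frames that must be decoded to reconstruct `target_index`."""
    last_iframe = (target_index // gop_frames) * gop_frames
    return target_index - last_iframe + 1  # the I-frame plus the P/B deltas after it

# With an I-frame every 18 frames, frame 18 is itself an I-frame (cost 1)...
print(frames_to_decode(18, gop_frames=18))  # 1
# ...but frame 35 needs the I-frame at 18 plus 17 deltas on top of it:
print(frames_to_decode(35, gop_frames=18))  # 18
```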
Confirmed, it is working fine.
I set the trigger interval on SSS to 20 seconds. I have DeepStack in High mode. It takes about 1.3 to 1.5 seconds to analyze an image.
Considering the time to grab each snapshot, etc., I am asking SynoAI to grab up to 12 sequential snapshots when it receives the motion trigger.
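As a sanity check on that figure (the analysis time is from the timings above; the per-snapshot fetch time is a hypothetical guess), 12 snapshots is roughly what fits in one 20-second trigger window:

```python
# Back-of-the-envelope check: how many sequential snapshots fit in one
# trigger window? Assumed timings: DeepStack "High" takes ~1.5 s per
# image, plus an assumed ~0.15 s per-snapshot fetch cost.

def max_snapshots(window_s: float, analyze_s: float, fetch_s: float) -> int:
    return int(window_s // (analyze_s + fetch_s))

print(max_snapshots(window_s=20.0, analyze_s=1.5, fetch_s=0.15))  # 12
```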
It is detecting persons in High mode that it could not detect in Medium or, worse, Low mode. There are false positives, yes, but minimal. It is also missing some extreme cases of people. Yet ... it is finding some people that make my jaw drop. I will include some images (but they are 3 MP, so they will be big).
Since SynoAI now asks for high-resolution images, those are far better for recognizing clothes or even, eventually, faces.
We can close this "issue", but maybe it is wise to note somewhere that Synology takes those snapshots from the I-frame, which can be "inconvenient" ... especially if the camera is a DAHUA and the infamous "SMART CODEC" is ON.
First frame is in full, so to give an idea of camera distance and general frame:
(yep, that's a guy, climbing into the garbage container)
One more capture ... today there is a communal fair set up on my street:
FRENTE: 6 persons
If you can write something "friendly" up for me explaining the i-frame issue, I'll add it to the readme.
See if either of these two explanations may be of use:
Security camera video streams include I, P and B frames. An I-frame is like a JPG image, holding a complete snapshot of the scene. Between I-frames, the camera sends a series of P and B frames, each holding only the parts of the scene that changed or moved over time. In still scenes, for example, this saves a lot of bandwidth.
At playback time, the video algorithm reconstructs the scene over time by first placing the I-frame as the background and then composing each successive P or B frame on top as time advances.
When SynoAI requests a snapshot from your NAS, the Synology API just fetches the latest I-frame. While this is fast and simple, depending on your camera brand and configuration the most recent I-frame may be several seconds old and may not even include the moving object!
Some cameras save bandwidth by really stretching the time between I-frames. For example, DAHUA cameras have a configuration setting labeled "SMART CODEC" which does exactly that when "ON". If this is your case, you should turn it "OFF"; otherwise SynoAI may be fed old snapshots!
Security camera video streams include I, P and B frames. See: Wikipedia.
When SynoAI requests a snapshot from your NAS, the Synology API just fetches the latest I-frame. Depending on your camera brand and configuration, the most recent I-frame may be several seconds old and may not even show the moving object!
Some cameras save bandwidth by really stretching the time between I-frames. For example, DAHUA cameras have a configuration setting labeled "SMART CODEC" which does exactly that when "ON". If this is your case, you should turn it "OFF"; otherwise SynoAI may be fed old snapshots!
Thanks - added to the readme
This is a weird error. At first sight, it seems the SYNOLOGY API is at fault:
I am asking SSS to send a trigger event every 20 seconds.
Upon each trigger, SynoAI asks for a snapshot and processes it.
Since there is a communal fair going on in front of my house, one or more person objects are found each time (it does not find EVERY person, though, even when they are quite obvious objects, but that is a DeepStack problem).
EDIT: This does not happen EVERY time ... this behavior shows up on about one of every two or three triggers.
Here is the catch: even though the snapshots are requested 20 seconds apart, I GET THE SAME FRAME returned by SynoAI, with the same camera timestamp on it.
I can confirm the camera is fluidly refreshing the video stream at 9 fps, so there is no chance in hell it should be the same frame 20 seconds apart.
Here is an "Information" log:
Those two frames, ending in _232 and _415 are exactly the same with the same timestamp.
Now comes the second catch: this happens only if I ask for a "High" quality snapshot (a 3 MP image at 2304 x 1296)!
If I go back to "Balanced" quality (a 640 x 480 image), THINGS GO BACK TO WORKING FINE: I get a new frame every 20 seconds, with the correct camera timestamp, spaced 20 seconds apart as it should be!
I am trying to understand what is going on, and everything points to Synology's API just sending the same snapshot, even when asked 20 seconds apart (which seems insane)!
If I ask SynoAI to SaveOriginalSnapshot ... again: on "High" quality I can corroborate that the original snapshots are identical even when requested 20 seconds apart.
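A quick way to corroborate that the saved originals are byte-identical is to hash them — a hypothetical diagnostic sketch, assuming the snapshots are saved as .jpg files in a single folder:

```python
# Hypothetical diagnostic: hash every saved snapshot in a folder and
# group the duplicates. Identical digests mean the API returned the
# exact same frame, not merely a similar-looking one.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicate_snapshots(folder: str) -> dict[str, list[str]]:
    groups: defaultdict[str, list[str]] = defaultdict(list)
    for path in sorted(Path(folder).glob("*.jpg")):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        groups[digest].append(path.name)
    # Keep only digests shared by more than one file.
    return {d: names for d, names in groups.items() if len(names) > 1}
```

Running this over the SaveOriginalSnapshot output folder would list every group of snapshots that are the same file bit-for-bit.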
I find it very difficult for this to be a SynoAI bug ... but maybe the byte[] array is not being disposed somehow and the same snapshot is being reused until the .NET garbage collector takes care of it? (or something weird like that?) I cannot see a flaw in the code, but ... I find a bug this gross in Synology's "flagship" product (Surveillance Station) almost unfathomable. Having said that, I am using DSM 7, which is "new".