OpenVisualCloud / Smart-City-Sample

The smart city reference pipeline shows how to integrate various media building blocks, with analytics powered by the OpenVINO™ Toolkit, for traffic or stadium sensing, analytics and management tasks.

Providing Video file as sensor and with start time of the video #796

Closed · divdaisymuffin closed 2 years ago

divdaisymuffin commented 2 years ago

Hi @nnshah1 and @xwu2git

The current video simulation uses the system time as the start time of the video, but our target is to use the actual video start time instead. That means the analytics index data should contain timestamps based on the video's start timestamp, not the current system time, similar to the recording index.

We have already tried replacing the system time in the mqtt2db.py and upload.py files by providing the start time as an input to the YAML and reading it as an environment variable, but we are facing issues with lag in bounding box creation; the boxes are not well synchronized.

Is there a way to take the start time directly via VA Serving, or is there any other option you can suggest?

nnshah1 commented 2 years ago

@divdaisymuffin Can you share the changes you made in mqtt2db.py?

nnshah1 commented 2 years ago

If you adjust the real_base here to be the video_base:

https://github.com/OpenVisualCloud/Smart-City-Sample/blob/b774b2b494a5ea8bb79b4f3b64ac1af433e06934/analytics/mqtt2db/mqtt2db.py#L75

And change this to be:

https://github.com/OpenVisualCloud/Smart-City-Sample/blob/b774b2b494a5ea8bb79b4f3b64ac1af433e06934/analytics/common/rec2db.py#L34

                "time":str(int(int(os.path.basename(filename).split('_')[-1])+video_base/1000000)),

I believe that should adjust the timestamps to be based on the video_base as opposed to the current time.

This can also be done in VA Serving directly, but it would have the same basic effect. Is this the adjustment you already made? If so, can you give an example of the lag in the bounding boxes in the visualization?
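
For concreteness, a minimal sketch of that first change in mqtt2db.py, assuming video_base is the clip's start time in nanoseconds since the epoch and is injected out-of-band (the VIDEO_BASE_NS environment variable below is hypothetical, not part of the sample):

```python
import os

# Hypothetical injection point for the clip start time, ns since the epoch;
# VIDEO_BASE_NS is not part of the stock sample.
video_base = int(os.environ.get("VIDEO_BASE_NS", "1600000000000000000"))

r = {"timestamp": 9_500_000_000}  # example metadata: ns since the stream started

if ("time" not in r) and ("timestamp" in r):
    # Using video_base instead of real_base anchors the absolute time to the
    # video's start rather than the wall clock; the result is ms since the epoch.
    r["time"] = int((video_base + r["timestamp"]) / 1000000)
```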

divdaisymuffin commented 2 years ago

Hi @nnshah1

The changes we made in mqtt2db.py for syncing with the video time are as follows; startDate and botStartTime are taken as inputs from an index:

```python
if ("time" not in r) and ("real_base" in r) and ("timestamp" in r):
    real_base = r["real_base"] if "real_base" in r else 0

    if sensorType == 2:
        if startDate != None:
            print("Bot deployed Video File as a Sensor", flush=True)
            print(f"sensorDetails ---> sensorName {sensor_name} sensorType---> {sensorType} startDate---> {startDate}", flush=True)
            r["time"] = int(startDate) + int(time.time() * 1000) - botStartTime  # for simulation/video file only
            print("from runva timesTamp ", r["time"], flush=True)
    else:
        print("Bot deployed in live sensor", flush=True)
        print(f"sensorDetails ---> sensorName {sensor_name} sensorType---> {sensorType} startDate---> {startDate}", flush=True)
        r["time"] = int((real_base + r["timestamp"]) / 1000000)  # for live streaming only
        print("r_time", r["time"], r["timestamp"], real_base, time.time())
```

For the recording time, we made changes in upload.py:

```python
if sensorType == 2:
    mp4file = mp4path + "/" + str(timestamp) + ".mp4"

    # perform a straight copy to fix negative timestamp for chrome
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))

    sinfo = probe(mp4file)
    print("Sinfo = ", sinfo, flush=True)
    videoFileDuration = int(sinfo["duration"] * 1000)
    timesTamp = timestamp + videoFileDuration

    mp4file = mp4path + "/" + str(timesTamp) + ".mp4"
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))
    sinfo = probe(mp4file)
    print("Sinfo2 = ", sinfo, flush=True)

    sinfo.update({
        "sensor": sensor,
        "office": {
            "lat": office[0],
            "lon": office[1],
        },
        "kpiId": kpis_id,
        "botConfigId": botConfig_id,
        "botName": algorithm,
        "time": timesTamp,
        "path": mp4file[len(self._storage)+1:]
    })

else:
    mp4file = mp4path + "/" + str(timestamp) + ".mp4"

    # perform a straight copy to fix negative timestamp for chrome
    list(run(["/usr/local/bin/ffmpeg", "-f", "mp4", "-i", path, "-c", "copy", mp4file]))

    sinfo = probe(mp4file)
    print("Sinfo = ", sinfo, flush=True)

    sinfo.update({
        "sensor": sensor,
        "office": {
            "lat": office[0],
            "lon": office[1],
        },
        "kpiId": kpis_id,
        "botConfigId": botConfig_id,
        "botName": algorithm,
        "time": timestamp,
        "path": mp4file[len(self._storage)+1:]
    })
```

divdaisymuffin commented 2 years ago

@nnshah1, you mentioned video_base. Where do we get video_base from? We get real_base in the r object from pipeline.json, but I can't find any video_base there. And how will video_base know the start time of the video?

nnshah1 commented 2 years ago

> @nnshah1, you mentioned video_base. Where do we get video_base from? We get real_base in the r object from pipeline.json, but I can't find any video_base there. And how will video_base know the start time of the video?

video_base would have to be passed in and used instead of real_base. It should be the start time of the recorded clip.

nnshah1 commented 2 years ago

> `int(startDate) + int(time.time() * 1000) - botStartTime`

try something like:

r["time"] = int(startDate) + r["timestamp"]

The timestamp will give the relative position of the event within the stream, and then adding in the base gives you the absolute time.
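
To make the arithmetic concrete (illustrative numbers only, and noting, as clarified below, that r["timestamp"] is in nanoseconds and must share a unit with the base before adding):

```python
# Illustrative numbers only: absolute time = base + relative offset, same unit.
start_date_ms = 1_600_000_000_000          # video start, ms since the epoch
offset_ms     = 2_500                      # event 2.5 s into the stream, in ms

event_time_ms = start_date_ms + offset_ms  # 1_600_000_002_500 ms since the epoch
```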

nnshah1 commented 2 years ago

I would first try to modify only mqtt2db and rec2db to update the timestamps and metadata being published, and leave upload alone.

divdaisymuffin commented 2 years ago

@nnshah1 Thanks for the suggestion, we will try this and get back to you

divdaisymuffin commented 2 years ago

@nnshah1 we have tried the suggestion and made the changes accordingly in mqtt2db.py and rec2db.py. But when we do, we do not get the actual timestamp of the video; instead we get a timestamp in the far future, around 2096. I also have a query: what exactly is the real_base that we add to the timestamp? And why is the timestamp we get in the r object a very old one, around 1945?

nnshah1 commented 2 years ago

The timestamp is the number of nanoseconds since the stream started recording, so it is a relative number, not an absolute one.

The absolute time is obtained by adding the base (which should be in nanoseconds since the epoch) to the timestamp. For recordings that start at a different base time, that should give the result you want, i.e. using a different base expressed in nanoseconds since the epoch would give you the correct timestamp.

I would double-check that you are adding in the base as nanoseconds and then dividing back down to seconds before interpreting the timestamp.
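
A small sketch of that unit bookkeeping, assuming startDate arrives as milliseconds since the epoch and the index expects milliseconds (consistent with the /1000000 division in the code above):

```python
# Example values standing in for the real inputs (assumptions):
startDate = "1600000000000"                 # video start, ms since the epoch
r = {"timestamp": 9_500_000_000}            # ns since the stream started

start_date_ns = int(startDate) * 1_000_000  # convert the base up to ns
r["time"] = int((start_date_ns + r["timestamp"]) / 1_000_000)  # back down to ms

# r["time"] == 1_600_000_009_500, i.e. 9.5 s after the video start.
# Adding mismatched units instead (int(startDate) + r["timestamp"] treated as
# ms) lands decades in the future, consistent with the ~2096 dates seen above.
```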

divdaisymuffin commented 2 years ago

@nnshah1 your suggestion helped me a lot in creating recordings with the given video start timestamp. I took the second value after splitting the filename, added it to my startTime, and converted that to ns; this change was made in rec2db.py, and I got the exact time in my recordings. But I am still not able to get the correct time in mqtt2db.py by doing startTime + r["timestamp"], so I am stuck on the analytics index time logic.
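
For reference, the rec2db.py change described here might look roughly like the sketch below; the filename layout and the startTime variable are assumptions based on the description, not the sample's actual code:

```python
import os

# Assumptions: the second '_' field of the recording filename is the clip
# offset in ns, and startTime is the video start in ms since the epoch.
filename  = "/tmp/recording_9500000000_0.mp4"   # hypothetical filename layout
startTime = "1600000000000"

offset_ns     = int(os.path.basename(filename).split('_')[1])
start_time_ns = int(startTime) * 1_000_000      # ms -> ns

time_ms = str(int((start_time_ns + offset_ns) / 1_000_000))  # ms, for the index
```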

vidyasiv commented 2 years ago

@divdaisymuffin, you had mentioned lag in the bounding boxes on the web visualization. I looked into adding a watermark and comparing the results between master and the changes related to customizing the start time.

Pipeline with watermark:

"template":"rtspsrc udp-buffer-size=212992 name=source ! queue ! rtph264depay ! h264parse ! video/x-h264 ! tee name=t ! queue ! decodebin ! videoconvert name=\"videoconvert\" ! video/x-raw,format=BGRx ! queue leaky=upstream ! gvadetect ie-config=CPU_BIND_THREAD=NO model=\"{models[object_detection_2020R2][1][network]}\" model-proc=\"{models[object_detection_2020R2][1][proc]}\" name=\"detection\" ! gvametaconvert name=\"metaconvert\" ! queue ! gvametapublish name=\"destination\" ! gvawatermark ! videoconvert ! x264enc ! splitmuxsink max-size-time=60500000000 name=\"splitmuxsink\"",

The dark blue boxes are drawn by gvawatermark and the cyan boxes are drawn by analytics.js. I noticed that the inference interval is 6 by default; when watermarking/detection happens, the bounding boxes line up almost perfectly. The issue is with the frames in between: the detection information from the previous inference appears to be carried over.
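
If the in-between frames need fresh detections (at a correspondingly higher compute cost), the interval can presumably be lowered on the detection element, e.g. adding inference-interval=1 to gvadetect in the template above (the property name is from DL Streamer's gvadetect; treat its availability in this sample's version as an assumption):

```
... ! gvadetect inference-interval=1 ie-config=CPU_BIND_THREAD=NO model=\"{models[object_detection_2020R2][1][network]}\" model-proc=\"{models[object_detection_2020R2][1][proc]}\" name=\"detection\" ! ...
```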

Screenshots with the customized start time changes: [frames 049 and 050, inference interval 6, frame duration 180]

Screenshots with master: [frames 271 and 272, inference interval 6, frame duration 180]

This issue is independent of the changes related to customizing the start time and should probably be filed as a separate GitHub issue. Please confirm whether you're able to customize the start time as demonstrated in the sample in the fork.

nnshah1 commented 2 years ago

Closing this issue, as the ability to set the starting time has been demonstrated. The issue with lagging watermarks has to do with the inference interval and is not directly related to the customized start time. Let us know if there is an issue with customizing the start time.