"The first step in calibration is to get snapshot images from all cameras."
How do I get these dewarped snapshots and what are the resolutions needed?
And with QGIS, there's this line "Make a note of the longitude and latitude of the origin (in this case, the center of the building)."
Does this mean that I have to get the coordinate of the center of the referenced TIF image?
On the nvaisle/nvspot.csv files, what do ROI and the H values represent?
How do I find the gx,gy coordinates? I tried just importing the latitute and longitude but they don't work.
Hi, I'm having some issues with calibrating my snapshots to generate the perception spreadsheet. I'm following this guide: https://devblogs.nvidia.com/calibration-translate-video-data/ and the one on the deepstream sdk guide.
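For context on the gx,gy question, here is roughly what I tried. This is only a minimal sketch: the function name is mine, and I'm assuming gx/gy are meant to be meters east/north of the chosen origin (the building center), computed with a simple equirectangular approximation. Please correct me if the CSV expects something else.

```python
import math

# Assumption on my part: gx/gy are local Cartesian offsets (meters east/north)
# from the origin noted in QGIS, not raw longitude/latitude values.
def latlon_to_local(lat, lon, origin_lat, origin_lon):
    R = 6378137.0  # WGS84 equatorial radius, meters
    # Equirectangular approximation: valid for small areas like one building.
    gx = math.radians(lon - origin_lon) * R * math.cos(math.radians(origin_lat))
    gy = math.radians(lat - origin_lat) * R
    return gx, gy
```

Feeding the raw latitude/longitude into the spreadsheet gave me nonsense positions, which is why I suspect a conversion like this is needed first.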
Thank you.