Hello, I would like to ask how you collected the dataset. I see your test data has 3k+ images. Did you capture one photo at a time, or did you record a video and then extract frames from it? If I extract frames from a video, how do I put the location information into each image? I only have an ordinary DJI drone. Looking forward to your answer, thank you!

+1

I didn't use a DJI because it is too complicated to access its image data in real time. I used a custom drone with an onboard computer: you either run OpenREALM on the drone itself, or send the images down to a ground station and build the map there. The custom drone's camera ran at 10 fps, and all of those frames are stored in the provided dataset. You need the high overlap for visual SLAM to work properly.
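Not from the original exchange, but since the question is specifically about attaching positions to frames pulled from a DJI video, here is a minimal sketch of one possible approach. It assumes you can export a flight log as CSV with `timestamp_s, lat, lon, alt_m` columns (those column names and the file paths are placeholders; real DJI log formats differ) and that `opencv-python` and `piexif` are installed. It extracts frames at roughly 10 fps to keep the overlap high and writes an interpolated GPS fix into each JPEG's EXIF block.

```python
# Sketch: extract frames from a drone video and geotag them from a flight log.
# Assumptions (not from the original thread): a CSV log with columns
# timestamp_s, lat, lon, alt_m; log timestamps aligned with the video start.
import csv
import os
import cv2      # pip install opencv-python
import piexif   # pip install piexif

VIDEO = "flight.mp4"        # hypothetical file names
LOG = "flight_log.csv"
OUT_DIR = "frames"
TARGET_FPS = 10             # high frame rate -> high overlap for visual SLAM

def to_dms_rational(deg):
    """Convert decimal degrees to the EXIF degrees/minutes/seconds rational format."""
    deg = abs(deg)
    d = int(deg)
    m = int((deg - d) * 60)
    s = round((deg - d - m / 60) * 3600 * 100)
    return ((d, 1), (m, 1), (s, 100))

def gps_exif(lat, lon, alt):
    """Build the EXIF GPS IFD for one position."""
    return {piexif.GPSIFD.GPSLatitudeRef: "N" if lat >= 0 else "S",
            piexif.GPSIFD.GPSLatitude: to_dms_rational(lat),
            piexif.GPSIFD.GPSLongitudeRef: "E" if lon >= 0 else "W",
            piexif.GPSIFD.GPSLongitude: to_dms_rational(lon),
            piexif.GPSIFD.GPSAltitudeRef: 0,
            piexif.GPSIFD.GPSAltitude: (int(abs(alt) * 100), 100)}

# Load the flight log (time in seconds from recording start).
with open(LOG) as f:
    log = [(float(r["timestamp_s"]), float(r["lat"]), float(r["lon"]), float(r["alt_m"]))
           for r in csv.DictReader(f)]

def interpolate(t):
    """Linearly interpolate lat/lon/alt at video time t from the log."""
    if t <= log[0][0]:
        return log[0][1:]
    for (t0, la0, lo0, al0), (t1, la1, lo1, al1) in zip(log, log[1:]):
        if t0 <= t <= t1:
            w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
            return (la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0), al0 + w * (al1 - al0))
    return log[-1][1:]  # fall back to the last fix

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO)
video_fps = cap.get(cv2.CAP_PROP_FPS)
step = max(1, round(video_fps / TARGET_FPS))   # keep every `step`-th frame

idx = saved = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if idx % step == 0:
        t = idx / video_fps
        lat, lon, alt = interpolate(t)
        path = os.path.join(OUT_DIR, f"frame_{saved:05d}.jpg")
        cv2.imwrite(path, frame)
        # Write the interpolated GPS position into the JPEG's EXIF block.
        piexif.insert(piexif.dump({"GPS": gps_exif(lat, lon, alt)}), path)
        saved += 1
    idx += 1
cap.release()
```

Whether EXIF GPS tags alone are enough depends on how your mapping pipeline ingests the frames, but interpolating the flight log by timestamp is the main idea either way.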