laxnpander / OpenREALM

OpenREALM is a pipeline for real-time aerial mapping utilizing visual SLAM and 3D reconstruction frameworks.
GNU Lesser General Public License v2.1

Changes needed if running on my own dataset #100

Closed: marleyshan21 closed this issue 5 months ago

marleyshan21 commented 6 months ago

Hi! OpenREALM works successfully with the test dataset provided. I am trying to run the same pipeline on my custom dataset, where the images are geotagged.

I keep facing the error below:

`2024-01-17 09:05:03.499 ( 2.647s) [Stage [pose_esti]] pose_estimation.cpp:291 WARN| No tracking.`

And in RViz I just observe the drone flying (its path) and the images, without features.

I have changed the below files:

  1. calib.yaml: OpenREALM_ROS1_Bridge/realm_ros/profiles/alexa_noreco/camera/calib.yaml, updated with my camera params
  2. exif.yaml: changed the tag from which the heading is read, Exif.GPSInfo.GPSImgDirection instead of Xmp.exif.REALM.Heading
  3. Launch file: OpenREALM_ROS1_Bridge/realm_ros/launch/alexa_noreco.launch, changed the path to my custom dataset

Since features are not being tracked, I want to know if I am missing a parameter in some other file. Which files need to be changed for a custom dataset, and which parameters need to be tuned? Could you help me move forward, @laxnpander?

laxnpander commented 6 months ago

Hmm, this sounds about right. Are you sure your image data is suitable for visual SLAM and your camera calibration is correct? OpenREALM uses OpenVSLAM as the backend for pose estimation; if it cannot track features, there is no chance of creating a map.

With what frame rate was the dataset created and what's the image resolution?

marleyshan21 commented 6 months ago

The camera calibration is correct, and the dataset seems similar to the test data, with decent overlap and features in the environment, so I guess it's suitable for visual SLAM (how can we know that? Any pointers?). The dataset was created at a frame rate of ~1 fps, and the images are 4032 x 3040 pixels.

We have approximately 50% forward overlap, and we are seeing bad stitches in GNSS-only mode too. Could overlap be the reason?

  1. Could you list the basic prerequisites for a dataset so that it can run with OpenREALM? (The paper mentions your dataset had 99% forward overlap and 50% sideways overlap. Do we actually need 99% overlap?)

  2. Do you have another dataset that we could try (or maybe a link to another aerial dataset that you have tried on)?

That'd be really helpful.

laxnpander commented 6 months ago

@marleyshan21 Well, at this point I can only speak from experience. There are too many variables to consider, including the machine you are running it on. You are correct in your observation that FPS doesn't matter as long as overlap is fine. 50% overlap is not enough though. Keep in mind that visual SLAM algorithms are designed for real-time motion estimation: losing the features of even a single frame in your setup would result in complete tracking loss. Some day we may have algorithms that can recover from that, but right now it is impossible given the state of the art. I'd go so far as to say 50% might not even be enough for traditional photogrammetry.

The easiest answer for the required overlap is "as much as possible, as little as necessary". Flying over a city with unique features in every picture? You may get away with 80-90% overlap. Flying over fields and farmland where every image looks the same? Even 99% may not be enough. So it's hard to tell; you will have to find out what works for you, your drone, and your use case.
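The overlap/frame-rate/speed trade-off discussed above can be sanity-checked with some basic geometry. This is only a back-of-the-envelope sketch: it assumes a downward-pointing (nadir) camera over flat ground, and the altitude, speed, and FOV values below are illustrative placeholders, not numbers from this thread.

```python
import math

def forward_overlap(altitude_m, speed_mps, fps, fov_deg):
    """Fraction of ground footprint shared by consecutive frames (nadir camera, flat ground)."""
    # along-track ground coverage of a single frame
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # ground distance travelled between consecutive frames
    spacing = speed_mps / fps
    return max(0.0, 1 - spacing / footprint)

# Hypothetical flight: 100 m altitude, 10 m/s, 1 fps, 50 deg along-track FOV
print(round(forward_overlap(100, 10, 1.0, 50), 2))  # → 0.89
```

Plugging in real flight parameters this way gives a quick feel for whether a planned mission can reach the overlap regime discussed here.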

Bad stitches in GNSS-only mode are expected; after all, that mode assumes a simple downward projection. Assuming you are using a gimbal-stabilised camera, in my experience 90% of the time the heading is off when images do not align well. It's much more difficult to notice a 1 m offset in x/y/z than it is to recognise an image rotated by a wrong 5° angle. So improve the heading and results may get significantly better. For example, I would always rather fix the UAV heading than do the 180° turns at the end of each leg.
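To get a feel for why a small heading error is so visible in the stitch, one can compute how far a rotation about the nadir point shifts a ground point. A rough sketch under the same flat-ground, nadir assumption; the 50 m distance and 5° error are illustrative values, not numbers from this thread:

```python
import math

def heading_offset(ground_dist_m, heading_err_deg):
    """Lateral shift of a ground point when the projection is rotated
    by a heading error about the nadir point (chord length at that radius)."""
    return 2 * ground_dist_m * math.sin(math.radians(heading_err_deg) / 2)

# A 5 deg heading error moves a point 50 m from nadir (roughly an image edge) by:
print(round(heading_offset(50, 5), 2))  # → 4.36 (metres)
```

So a 5° heading error displaces image edges by several metres, which is why misaligned headings dominate the visual error in a simple downward projection.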

On a side note, 4032 x 3040 pixels is A LOT. If you have the machine to process it, no problem, but it sounds like a lot to me. Feature extraction might be very slow on your images, resulting in skipped frames and tracking loss.
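The only figure here taken from the thread is the 4032 x 3040 resolution; the rest is simple arithmetic showing how quickly a modest downscale shrinks the pixel count that feature extraction has to chew through (extraction cost scales roughly with pixel count):

```python
# Pixel counts before and after a 2x downscale of the 4032x3040 images.
full = 4032 * 3040                 # full resolution
half = (4032 // 2) * (3040 // 2)   # after halving each dimension
print(full / 1e6, half / 1e6, full / half)  # → 12.25728 3.06432 4.0
```

Halving each dimension cuts the per-frame feature-extraction workload by roughly a factor of four, which can be the difference between keeping up with the frame rate and dropping frames.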

Ah, and I do have datasets, but none I can share unfortunately.

marleyshan21 commented 5 months ago

Thanks for these pointers. I agree there are a lot of factors at play here, and any of them could be causing this.

Right now, to recreate the experiments, I am planning to collect another custom dataset similar to yours, with the following prerequisites. This is based on your pointers here and other GitHub discussions you have had with the community in this repo.

Dataset will have the following properties:

  1. Images collected at a constant altitude
  2. Gimbal-stabilized camera
  3. Proper calibration params for the camera
  4. Geotagged images, as in DJI
  5. At least 95% forward overlap and 50% sideways overlap, in an area that looks similar to the test dataset, with lots of distinctive features
  6. Higher frame rate (10fps if possible, to recreate the test dataset)
  7. Images of similar resolution as the test data (if possible)
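The 95% forward-overlap target in the list above can be turned into a minimum capture rate with a flat-ground, nadir-camera approximation. The altitude, speed, and FOV below are illustrative placeholders, not values from this thread:

```python
import math

def min_fps(altitude_m, speed_mps, fov_deg, target_overlap):
    """Minimum capture rate needed to hit a target forward overlap
    (nadir camera over flat ground)."""
    # along-track ground coverage of one frame
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    # frames must be spaced no further apart than (1 - overlap) * footprint
    return speed_mps / ((1 - target_overlap) * footprint)

# Hypothetical flight: 100 m altitude, 10 m/s, 50 deg along-track FOV, 95% overlap
print(round(min_fps(100, 10, 50, 0.95), 2))  # → 2.14 (frames per second)
```

This kind of estimate helps check whether a planned frame rate (e.g. the 10 fps mentioned above) comfortably clears the overlap target for the intended flight speed and altitude.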

Let me know if these sound like reasonable minimum assumptions and prerequisites to get started testing OpenREALM the right way.

laxnpander commented 5 months ago

That sounds perfect! Some tuning of the pipeline might be required depending on the resolution and your hardware, but for the data itself this should be everything you need. If you have the choice, I'd recommend going for a global-shutter camera, as a rolling shutter will have negative effects on the matching process. But usually the hardware is fixed, and it should work reasonably well anyway.

marleyshan21 commented 5 months ago

Thanks!

Will close the issue now, and reopen it if relevant issues pop up with the custom dataset.