OpenDroneMap / ODM

A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷
https://opendronemap.org
GNU Affero General Public License v3.0

Undistort stage drops multi-track reconstructions #1076

Closed linusmartensson closed 2 years ago

linusmartensson commented 4 years ago

How did you install OpenDroneMap? (Docker, natively, ...)?

Docker

What's your browser and operating system? (Copy/paste the output of https://www.whatismybrowser.com/)

Chrome 79 on Linux

What is the problem?

When running the undistort stage and the steps that follow it, data goes missing on multi-reconstruction datasets.

If OpenSfM fails to create a single reconstruction, it automatically splits the dataset into multiple individual reconstructions, which then need to be managed individually via the --reconstruction-index parameter of the undistort step.

If this parameter is not supplied, OpenSfM defaults to 0 and undistorts only the first reconstruction. This is readily apparent with any dataset that yields a partial reconstruction result in the iterative reconstruction:

2020-02-28 12:03:21,709 INFO: Reconstruction 0: 25 images, 8340 points
2020-02-28 12:03:21,709 INFO: Reconstruction 1: 25 images, 6582 points
2020-02-28 12:03:21,709 INFO: Reconstruction 2: 21 images, 4059 points
2020-02-28 12:03:21,709 INFO: Reconstruction 3: 14 images, 4888 points
2020-02-28 12:03:21,709 INFO: Reconstruction 4: 10 images, 3338 points
2020-02-28 12:03:21,709 INFO: Reconstruction 5: 8 images, 1121 points
2020-02-28 12:03:21,709 INFO: 6 partial reconstructions in total.
[INFO]    Aligning submodels...

Following this, run the pipeline through the undistortion stage and compare reconstruction.json with undistorted/reconstruction.json, and the images folder with opensfm/undistorted/images.

You'll see that for a result like the one above, the undistorted folder contains only as many images as the first reconstruction, rather than the number of images in the submodel or project.
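For illustration, a minimal sketch of looping the undistort step over every partial reconstruction instead of only index 0 (the paths and exact CLI invocation here are assumptions; only the --reconstruction-index parameter itself is taken from the description above):

import json
import subprocess

dataset = "/path/to/project"  # hypothetical project path

with open(dataset + "/opensfm/reconstruction.json") as f:
    reconstructions = json.load(f)

# reconstruction.json is a list of partial reconstructions; run the
# undistort step once per entry rather than letting it default to 0.
for index in range(len(reconstructions)):
    subprocess.run(["bin/opensfm", "undistort",
                    "--reconstruction-index", str(index),
                    dataset], check=True)
    # Note: successive runs would likely overwrite opensfm/undistorted,
    # so a real fix would also write each index to its own folder.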

What should be the expected behavior? If this is a feature request, please describe in detail the changes you think should be made to the code, citing files and lines where changes should be made, if possible.

I was expecting the undistortion stage to be run for each partial reconstruction, with the results then aligned and merged. Given the nature of these datasets, it may be possible to treat them similarly to submodels and merge them in the same manner?

A suitable first step towards this goal may be to add a flag, --experimental-merge-partial-reconstructions, which lets the undistortion stage and the dense point cloud extraction run for all reconstructions (https://github.com/mapillary/OpenSfM/blob/15d66bc284642b54ba7e2096923656efe12d823e/opensfm/commands/undistort.py#L32 and https://github.com/mapillary/OpenSfM/blob/15d66bc284642b54ba7e2096923656efe12d823e/opensfm/commands/compute_depthmaps.py#L18), and then to analyze how this fits with meshing and merging. If the models are suitably aligned, we could merge them directly in odm_filterpoints; otherwise, a partial-reconstruction merge stage might be necessary to align them first. Then, ideally, meshing and everything after it could remain the same.
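As a rough sketch of the change implied by those links (not actual OpenSfM code: the process_one callback and the subfolder convention are assumptions; the linked lines currently process only reconstructions[0]):

# Sketch only: the real entry points are in opensfm/commands/undistort.py
# and compute_depthmaps.py, linked above.
def run_for_all(dataset, process_one):
    reconstructions = dataset.load_reconstruction()
    for index, reconstruction in enumerate(reconstructions):
        # Give every partial reconstruction its own output folder so
        # later stages can align and merge the results.
        process_one(dataset, reconstruction,
                    subfolder="undistorted_%04d" % index)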

How can we reproduce this? (What steps did you do to trigger the problem? What parameters are you using for processing? If possible please include a copy of your dataset uploaded on Google Drive or Dropbox. Be detailed)

Find a sufficiently complex dataset, or attempt a faster reconstruction by lowering --min-num-features, until the reconstruction stage results in multiple partial reconstructions.

I'd supply a dataset, but the one I have isn't suitable for release. :(

(I should note that the dataset I've used was generated with --min-num-features 1000, and it has issues at other feature detection thresholds: a high feature count actually broke the reconstruction stage completely and created an unusable model through incorrect track matches, while the lower feature count instead results in these partial reconstructions, which look fine but are lost in the undistortion stage.)

pierotofy commented 4 years ago

Very good point; yes, currently only the first reconstruction is processed. I'd be happy to merge contributions that handle alignment and merging of multiple partial reconstructions.

linusmartensson commented 4 years ago

It's tempting, but the biggest question for me is the alignment.

Since the reconstruction has failed to produce a unified camera graph, we're in a situation with disjoint graphs, where the only possible alignment is via GPS/GCP. Maybe this is acceptable as a first solution, even though we'll have no overlap between tracks in each partial reconstruction? Thoughts?

Looking a bit further, it seems that when using split/merge, the align_submodel stage in OpenSfM does process the entire reconstruction, so that might account for some of the alignment problems. Though I'm a bit surprised it's just a rigid transformation. Ah well.
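For reference, a GPS-only alignment of a disjoint graph essentially reduces to fitting a similarity transform from reconstructed camera centers to their GPS positions. A minimal sketch using the Umeyama method (purely illustrative; not ODM's or OpenSfM's actual code):

import numpy as np

def umeyama(src, dst):
    # Least-squares similarity transform (scale s, rotation R,
    # translation t) mapping src onto dst, where src and dst are
    # (N, 3) arrays of corresponding points (camera centers -> GPS).
    mu_src, mu_dst = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_src, dst - mu_dst
    cov = B.T @ A / len(src)
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = U @ D @ Vt
    var_src = (A ** 2).sum() / len(src)
    s = np.trace(np.diag(S) @ D) / var_src
    t = mu_dst - s * R @ mu_src
    return s, R, t  # apply as: s * points @ R.T + t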

linusmartensson commented 4 years ago

@pierotofy How about this:

In opensfm stage, after alignment:
if multiple partial reconstructions:
  generate submodels for each partial reconstruction
  copy reconstruction file with single partial index in corresponding submodel
  mark the pipeline finished in each submodel up to and including alignment_done.txt
  create file force_splitmerge.txt in the current model directory 
  rerun odm to trigger the split stage.
  exit
In split stage:
  outputs['large'] = len(photos) > args.split or file_exists("force_splitmerge.txt")

This way, we'd reuse the split/merge setup, running it once for each partial reconstruction and recursively using the already finished alignment, while avoiding any surrounding problems with rebuilding the pipeline to handle several undistortion directories and whatnot.

What do you think?

pierotofy commented 4 years ago

Ah, that's a clever way to do it (reuse the split-merge workflow and apply it to partial reconstructions). I think it would work! A few things I would consider:

linusmartensson commented 4 years ago

I agree with 1 and 3; we don't want to break existing setups.

If we want to avoid the files from 2 and avoid breaking the rerun logic, maybe just integrating it into the current stage is easier?

if partial_logic:
    # Set up submodels from the partial reconstructions
    mds = metadataset.MetaDataSet(tree.opensfm)
    submodel_paths = [os.path.abspath(p) for p in mds.get_submodel_paths()]
    for sp in submodel_paths:
        sp_octx = OSFMContext(sp)
        argv = get_submodel_argv(args.name, tree.submodels_path, sp_octx.name())
        system.run(" ".join(map(quote, argv)), env_vars=os.environ.copy())
    self.next_stage = ODMMergeStage('partial-merge', args, progress=100.0)
    return

I haven't worked too much with --rerun-from. Could you shed some light on potential gotchas?

pierotofy commented 4 years ago

That could work.

The biggest thing to watch out for during re-runs is the structure of files and folders. E.g., if you create a force_splitmerge.txt file, you then need to ignore it or delete it if a rerun is happening (each stage has a rerun() method). This requires some care with submodels: if you rerun the pipeline but forget to clean up the submodels folders, the submodels might not be rerun. Hope it makes sense. Not difficult to handle, but just something to be mindful of.
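A minimal sketch of such a cleanup (the per-stage rerun() method is real per the above, but this body, the function name, and the paths are assumptions):

import os
import shutil

def rerun_cleanup(project_path):
    # Hypothetical rerun() body: remove state left by a previous run so
    # a stale flag or stale submodels can't short-circuit the rerun.
    flag = os.path.join(project_path, "force_splitmerge.txt")
    if os.path.isfile(flag):
        os.remove(flag)
    submodels = os.path.join(project_path, "submodels")
    if os.path.isdir(submodels):
        shutil.rmtree(submodels)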

linusmartensson commented 4 years ago

Just wanted to throw an update on this. I've been away from work recently due to health issues, so I haven't had a chance to pick up this task. If anyone else feels up to it, feel free to go ahead. I don't think it should be too large or complicated a change since the alignment is already done correctly in OpenSfM. :)

pierotofy commented 4 years ago

No work has been done on this so far; personally, I won't be able to tackle this in the short term due to other work priorities.

linusmartensson commented 4 years ago

So, I managed to take a look at this today after all, and threw together a simplified proof of concept, building on the premise that we're already running split-merge:

import json
import os
import shutil

def _submodel_path(i, template):
    return os.path.join(".", template % i)

def get_submodel_paths():
    submodel_paths = []
    template = "submodels/submodel_%04d/opensfm"
    for i in range(999999):
        submodel_path = _submodel_path(i, template)
        if os.path.isdir(submodel_path):
            submodel_paths.append(submodel_path)
        else:
            break
    return submodel_paths

submodels = get_submodel_paths()
def mkdirs(p):
    try:
        os.makedirs(p)
    except OSError:
        pass  # directory may already exist

def symlink(a, b):
    print(a, b)
    os.symlink(os.path.abspath(os.path.join(".",a)), b)

i = 0
for s in submodels:
    template = "aligned_submodels/submodel_%04d"
    with open(s+"/reconstruction.json", "r") as f:
        j = json.load(f)
        print(s)
        for k in range(0, len(j)):
            v = j[k]
            path = _submodel_path(i, template)

            #Create the submodel path up to opensfm
            mkdirs(path+"/opensfm")

            print(path + "/images")

            #symlink images
            symlink("images", path+"/images")

            #symlink exifs, features & matches
            symlink("opensfm/exif", path+"/opensfm/exif")
            symlink("opensfm/features", path+"/opensfm/features")
            symlink("opensfm/matches", path+"/opensfm/matches")

            symlink("opensfm/reference_lla.json", path+"/opensfm/reference_lla.json")

            #copy config.yaml & camera_models.json
            shutil.copy("opensfm/config.yaml", path+"/opensfm/config.yaml")

            shutil.copy("opensfm/camera_models.json", path+"/opensfm/camera_models.json")
            shutil.copy("opensfm/camera_models.json", path+"/cameras.json")
            shutil.copy(s+"/../images.json", path+"/images.json")

            #Create new reconstruction file
            with open(path+"/opensfm/reconstruction.json", "w") as o:
                json.dump([v], o)

            #Create image lists (writelines does not add newlines itself)
            with open(path+"/opensfm/image_list.txt", "w") as o:
                o.writelines(name + "\n" for name in v["shots"].keys())
            with open(path+"/img_list.txt", "w") as o:
                o.writelines(name + "\n" for name in v["shots"].keys())

            i+=1
os.rename("submodels", "unaligned_submodels")
os.rename("aligned_submodels", "submodels")

The idea is to hook the above script in just after the alignment of submodels, rebuilding the split into one where each submodel holds a single reconstruction track.

Based on that integration point, we'll only be able to handle multi-track data for split datasets. But joining a multi-track reconstruction with any level of quality requires overlap between the tracks during the alignment phase, which only runs on split datasets anyway, so maybe that's not a big issue?

There are some todos, since not all files are rebuilt and the script has barely been tested. For example, images.json is just copied rather than having the correct images extracted for each submodel. I'm not sure those files are necessary after the split, though.

Then, of course, it should be integrated into ODM rather than run standalone. But it's getting there!

linusmartensson commented 4 years ago

Right. The next step for me will be integration. I've ironed out what issues I could find in the PoC, and the submodel rebuild seems to be working, having passed through several submodels without worry. As mentioned above, I'm reducing depthmap_min_consistent_views dynamically if a submodel has a track shorter than this value, to make sure data isn't lost. This will increase noise on those small tracks, but it doesn't seem to be cause for any major concern. Posting the updated source here for completeness:

import json 
import os
import shutil
import yaml

def _submodel_path(i, template):
    return os.path.join(".", template % i)

def get_submodel_paths():
    submodel_paths = []
    template = "submodels/submodel_%04d/opensfm"
    for i in range(999999):
        submodel_path = _submodel_path(i, template)
        if os.path.isdir(submodel_path):
            submodel_paths.append(submodel_path)
        else:
            break
    return submodel_paths

submodels = get_submodel_paths()
def mkdirs(p):
    try:
        os.makedirs(p)
    except OSError:
        pass  # directory may already exist

def symlink(a, b):
    print(a, b)
    os.symlink(os.path.join(".",a), b)

i = 0
for s in submodels:
    template = "aligned_submodels/submodel_%04d"
    with open(s+"/reconstruction.json", "r") as f:
        j = json.load(f)
        print(s)
        for k in range(0, len(j)):
            v = j[k]
            path = _submodel_path(i, template)

            #Create the submodel path up to opensfm
            mkdirs(path+"/opensfm")

            print(path + "/images")

            #symlinks for common data
            symlink("../../images", path+"/images")
            symlink("../../../opensfm/exif", path+"/opensfm/exif")
            symlink("../../../opensfm/features", path+"/opensfm/features")
            symlink("../../../opensfm/matches", path+"/opensfm/matches")
            symlink("../../../opensfm/reference_lla.json", path+"/opensfm/reference_lla.json")
            symlink("../../../opensfm/camera_models.json", path+"/opensfm/camera_models.json")

            #copy config, calibration data & image.json
            #shutil.copy("opensfm/config.yaml", path+"/opensfm/config.yaml")

            shutil.copy(s+"/../cameras.json", path+"/cameras.json")

            shutil.copy(s+"/../images.json", path+"/images.json")

            #use a separate handle name to avoid shadowing the outer
            #reconstruction.json file handle
            with open("opensfm/config.yaml") as cfg:
                doc = yaml.safe_load(cfg)

            dmcv = "depthmap_min_consistent_views"
            if dmcv in doc and len(v["shots"]) < doc[dmcv]:
                doc[dmcv] = len(v["shots"])
                print("WARNING: Reduced "+dmcv+" to accommodate short track")

            with open(path+"/opensfm/config.yaml", "w") as cfg:
                yaml.dump(doc, cfg)

            #We need the original tracks file for the visualsfm export, since 
            #there may still be point matches between the tracks
            shutil.copy(s+"/tracks.csv", path+"/opensfm/tracks.csv")

            #Create our new reconstruction file with only the relevant track
            with open(path+"/opensfm/reconstruction.json", "w") as o:
                json.dump([v], o)

            #Create image lists
            with open(path+"/opensfm/image_list.txt", "w") as o:
                o.writelines(map(lambda x: "../images/"+x+'\n', v["shots"].keys()))
            with open(path+"/img_list.txt", "w") as o:
                o.writelines(map(lambda x: x+'\n', v["shots"].keys()))

            i+=1
os.rename("submodels", "unaligned_submodels")
os.rename("aligned_submodels", "submodels")

pierotofy commented 2 years ago

We now merge partial reconstructions, so this should be fixed?