OpenDroneMap / ODM

A command line toolkit to generate maps, point clouds, 3D models and DEMs from drone, balloon or kite images. 📷
https://opendronemap.org
GNU Affero General Public License v3.0

3D model of the building area is poor with boundary constraints #1769

Open meiyanzhao opened 3 months ago

meiyanzhao commented 3 months ago

How did you install ODM? (Docker, installer, natively, ...)?

Installed via Docker:

```
docker run -dp 8888:3000 --gpus all --name nodeodmgpu01 opendronemap/nodeodm:gpu
```

In Python (using pyodm):

```python
import json
from pyodm import Node

n = Node(ip, port)  # NodeODM host and port

boundary = {
    "type": "FeatureCollection",
    "crs": {"type": "name", "properties": {"name": "EPSG:4326"}},
    "features": [{
        "type": "Feature",
        "id": 0,
        "geometry": {
            "type": "Polygon",
            "coordinates": [[
                [104.15728688732897, 36.54148301442387],
                [104.15818386146158, 36.54147661480294],
                [104.15818095611374, 36.5411700324329],
                [104.15725376030238, 36.54113720094068]
            ]]
        }
    }]
}
boundarystr = json.dumps(boundary, ensure_ascii=False)

task = n.create_task(images_name, {"3d-tiles": True, "boundary": boundarystr, "no-gpu": False})
```
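(For completeness, a minimal sketch of collecting the results afterwards with pyodm's Task helpers; this step is not part of the original report, and the output directory name is arbitrary:)

```python
# Block until the task finishes, then download the generated assets.
# "./results" is an arbitrary local path.
task.wait_for_completion()
task.download_assets("./results")
```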

logs:

log.json

task_output.txt

images: DJI_20240213085228_0001_WIDE DJI_20240213085242_0003_WIDE DJI_20240213085246_0004_WIDE DJI_20240213085250_0005_WIDE DJI_20240213085254_0006_WIDE DJI_20240213085258_0007_WIDE DJI_20240213085302_0008_WIDE DJI_20240213085304_0009_WIDE DJI_20240213085308_0010_WIDE DJI_20240213085312_0011_WIDE DJI_20240213085316_0012_WIDE DJI_20240213085320_0013_WIDE DJI_20240213085324_0014_WIDE DJI_20240213085328_0015_WIDE DJI_20240213085332_0016_WIDE DJI_20240213085336_0017_WIDE DJI_20240213085340_0018_WIDE DJI_20240213085344_0019_WIDE DJI_20240213085348_0020_WIDE DJI_20240213085352_0021_WIDE DJI_20240213085356_0022_WIDE DJI_20240213085400_0023_WIDE DJI_20240213085406_0024_WIDE DJI_20240213085238_0002_WIDE

What is the problem?

When reconstructing an oblique-photography model of the building area with a boundary constraint, the output model is very poor: the mesh is rough, with many small holes, texture-mapping errors at the boundaries, and a generally messy appearance (see attached screenshot 1111). Without the boundary constraint (i.e. the "boundary" parameter is not set), the central region of the model still looks good (see attached screenshot 2222).


What should be the expected behavior? If this is a feature request, please describe in detail the changes you think should be made to the code, citing files and lines where changes should be made, if possible.

I have checked the point cloud file and the data looks quite good. However, the OBJ model of the cropped area has holes and rough textures, which I believe may be a bug. The model produced by processing the whole region has high point cloud density and good texture; the model produced with the boundary restriction has low point cloud density, sparse points, and poor texture.

Analyzing the cause, it appears that the configured boundary is used to restrict the reconstruction area itself, when in fact the reconstruction should not be restricted. When handling oblique-photography models with boundary constraints, the logic should be to first reconstruct the entire area from all photos, and only then output the part of the model that falls within the boundary. In reality that is not what happens: only part of the photos are used when producing the output (judging from the results, the texture at the boundary of the cropped model is taken from surrounding images rather than from the images actually covering that location, which produces disordered textures), and the result is rough, with no post-processing such as smoothing or hole filling. The processing logic should therefore be revised, and post-processing should be applied to the cropped results.
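To make the proposed logic concrete, here is a rough sketch of cropping a full-area mesh to the boundary after reconstruction, using shapely and trimesh. The file names are hypothetical, the boundary is assumed to already be in the same CRS as the mesh, and texture handling is omitted; this illustrates the idea, it is not ODM's actual pipeline.

```python
import json

import numpy as np
import trimesh
from shapely.geometry import Point, shape

# Load the full-area mesh produced by a run WITHOUT a boundary constraint
# (hypothetical path).
mesh = trimesh.load("odm_texturing/odm_textured_model_geo.obj", force="mesh")

# Load the boundary polygon (assumed to be in the same CRS as the mesh).
with open("boundary.geojson") as f:
    polygon = shape(json.load(f)["features"][0]["geometry"])

# Keep only the faces whose centroid falls inside the polygon (x/y only).
centroids = mesh.triangles_center
inside = np.array([polygon.contains(Point(c[0], c[1])) for c in centroids])

cropped = mesh.submesh([np.where(inside)[0]], append=True)
cropped.export("model_cropped.obj")
```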


How can we reproduce this? What steps did you do to trigger the problem? If this is an issue with processing a dataset, YOU MUST include a copy of your dataset AND task output log, uploaded on Google Drive or Dropbox (otherwise we cannot reproduce this).

  1. When using the boundary parameter to process oblique-photography models of building areas, please confirm whether all photos are first used to reconstruct the model and the boundary is only applied afterwards to crop the result.
  2. If so, check the odm_filterpoints stage to see why using a boundary produces a sparse point cloud, rough textures, low accuracy, holes, and incorrect texture mapping around the model (textures taken from photos near the region boundary instead of the photos actually covering that location). A sketch of what point-level cropping looks like follows this list.
  3. If the above shows no issues, please post-process the cropped 3D results: smooth the mesh, fill the holes, improve accuracy, trim excess geometry around the edges, and make the cut edges clean.
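For context on step 2: as I understand it, applying a boundary at the point-filtering stage crops the point cloud itself before meshing, roughly along these lines. This is a minimal PDAL sketch, not ODM's actual code; the file names are illustrative, the WKT polygon is taken from the boundary above, and in a real run the point cloud is in a projected CRS, so the polygon would need reprojecting first.

```python
import json

import pdal

# Boundary ring from the report, closed as WKT requires (assumed same CRS
# as the point cloud, which is NOT true of a real ODM run).
wkt = ("POLYGON ((104.15728688732897 36.54148301442387, "
       "104.15818386146158 36.54147661480294, "
       "104.15818095611374 36.5411700324329, "
       "104.15725376030238 36.54113720094068, "
       "104.15728688732897 36.54148301442387))")

pipeline = pdal.Pipeline(json.dumps({"pipeline": [
    "odm_filterpoints/point_cloud.ply",        # input point cloud (hypothetical path)
    {"type": "filters.crop", "polygon": wkt},  # keep only points inside the polygon
    "point_cloud_cropped.ply"                  # cropped output
]}))
pipeline.execute()
```

Meshing the cropped cloud would then have far fewer points to work with near the boundary, which is consistent with the sparse, hole-ridden result described above.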


meiyanzhao commented 2 months ago

When might this problem be solved?

pierotofy commented 2 months ago
meiyanzhao commented 2 months ago

Oblique-photography modeling is an important way to model urban architecture, and in this era of rapidly developing smart cities this feature matters to a great many people. Of course, I know this requires the team to spend a lot of time and effort. As someone who benefits from this project, I am very willing to join the development team or take part in testing if needed. If there is any development progress, please let me know. Thank you.

originlake commented 2 months ago

I can understand the artifacts at the edge of the model caused by the boundary constraint, but the artifacts in the middle of the model are strange; a boundary constraint shouldn't affect mesh quality in the middle.

@pierotofy For the artifacts at the edge, it could help to use bType 1 in the Poisson reconstruction (I find it sometimes better at interpreting missing areas: it forms a plane instead of a pit). Alternatively, we may need a filter that removes the downward-facing mesh faces based on point density, etc. I recently started running some experiments with bType 1; hopefully I can draw some conclusions.
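For anyone wanting to experiment, a minimal sketch of invoking the PoissonRecon CLI directly with a free boundary type (`--bType 1`; 2 is Dirichlet, 3 is Neumann). The paths and `--depth` value are illustrative assumptions, not ODM's exact invocation.

```python
import subprocess

# Run PoissonRecon on the filtered point cloud with free boundary elements.
# Paths and --depth are illustrative; --bType 1 selects the free boundary type.
subprocess.run([
    "PoissonRecon",
    "--in", "odm_filterpoints/point_cloud.ply",
    "--out", "odm_meshing/odm_mesh.ply",
    "--depth", "11",
    "--bType", "1",
], check=True)
```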

meiyanzhao commented 1 month ago

I am currently wondering whether there is a logic issue to be addressed: should we first perform mesh reconstruction, texture mapping, and georeferencing, and then crop the model results to the boundary, instead of first filtering the point cloud by the boundary and then performing mesh reconstruction, texture mapping, and georeferencing?