CNES / cars

CARS is a dedicated and open source 3D tool to produce Digital Surface Models from satellite imaging by photogrammetry.
https://cars.readthedocs.io/
Apache License 2.0

DSM Generation Issue with CARS and Error Message #19

Closed silverbean-j closed 1 year ago

silverbean-j commented 1 year ago

Hello Team of CARS,

I'm currently using CARS to generate a DSM from unsigned 16-bit GeoTIFF images. There are two images in total: one is 29,066 pixels wide by 19,758 pixels high, and the other is 30,199 pixels wide by 17,480 pixels high. Both images are panchromatic.

I'm encountering an error message while trying to create a DSM using these two images, and the DSM is not being generated.

Can you please advise on how to resolve this issue? Below are the error message and the contents of the configfile.json:

23-09-08 01:46:31 :: PROGRESS :: Data list to process: [ epi_matches_left ] ...
23-09-08 02:44:57 :: PROGRESS :: Data list to process: [ color , dsm ] ...
23-09-08 02:46:10 :: WARNING :: Worker tcp://127.0.0.1:33907 (pid=25) exceeded 95% memory budget. Restarting...
23-09-08 02:46:11 :: WARNING :: Restarting worker
23-09-08 02:46:13 :: WARNING :: Worker tcp://127.0.0.1:36475 (pid=23) exceeded 95% memory budget. Restarting...
23-09-08 02:46:17 :: WARNING :: Restarting worker
23-09-08 02:47:23 :: WARNING :: Worker tcp://127.0.0.1:46621 (pid=53) exceeded 95% memory budget. Restarting...
23-09-08 02:47:24 :: WARNING :: Worker tcp://127.0.0.1:43005 (pid=63) exceeded 95% memory budget. Restarting...
23-09-08 02:47:24 :: WARNING :: Restarting worker
23-09-08 02:47:24 :: WARNING :: Restarting worker
23-09-08 02:48:39 :: WARNING :: Worker tcp://127.0.0.1:44437 (pid=76) exceeded 95% memory budget. Restarting...
23-09-08 02:48:39 :: WARNING :: Restarting worker
23-09-08 02:48:43 :: WARNING :: Worker tcp://127.0.0.1:41311 (pid=72) exceeded 95% memory budget. Restarting...
23-09-08 02:48:45 :: WARNING :: Restarting worker
23-09-08 02:49:57 :: WARNING :: Worker tcp://127.0.0.1:45291 (pid=100) exceeded 95% memory budget. Restarting...
23-09-08 02:49:57 :: WARNING :: Worker tcp://127.0.0.1:33733 (pid=91) exceeded 95% memory budget. Restarting...
23-09-08 02:49:58 :: WARNING :: Restarting worker
23-09-08 02:49:58 :: WARNING :: Restarting worker
23-09-08 02:51:13 :: WARNING :: Worker tcp://127.0.0.1:33269 (pid=113) exceeded 95% memory budget. Restarting...
23-09-08 02:51:18 :: WARNING :: Worker tcp://127.0.0.1:45649 (pid=110) exceeded 95% memory budget. Restarting...
23-09-08 02:51:18 :: WARNING :: Restarting worker
23-09-08 02:51:23 :: WARNING :: Restarting worker
23-09-08 02:51:36 :: WARNING :: Restarting worker
23-09-08 02:52:39 :: WARNING :: Worker tcp://127.0.0.1:41763 (pid=148) exceeded 95% memory budget. Restarting...
23-09-08 02:52:41 :: WARNING :: Restarting worker
23-09-08 02:52:42 :: WARNING :: Worker tcp://127.0.0.1:34041 (pid=129) exceeded 95% memory budget. Restarting...
23-09-08 02:52:53 :: WARNING :: Restarting worker
23-09-08 02:52:54 :: WARNING :: Restarting worker
23-09-08 02:54:08 :: WARNING :: Worker tcp://127.0.0.1:45937 (pid=166) exceeded 95% memory budget. Restarting...
23-09-08 02:54:08 :: WARNING :: Restarting worker
23-09-08 02:54:08 :: ERROR :: CARS terminated with following error
Traceback (most recent call last):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 334, in breakpoint
    self.compute_futures()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 288, in compute_futures
    for future_obj in self.cluster.future_iterator(future_objects):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/cluster/abstract_dask_cluster.py", line 221, in __next__
    fut, res = self.dask_a_c.__next__()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5391, in __next__
    return self._get_and_raise()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5380, in _get_and_raise
    raise exc.with_traceback(tb)
distributed.scheduler.KilledWorker: Attempted to run task wrapper_builder-25fe4954-1af9-423d-b27d-f042fba94b24 on 10 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:45937. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/cars/venv/lib/python3.8/site-packages/cars/cars.py", line 175, in main_cli
    used_pipeline.run()
  File "/cars/venv/lib/python3.8/site-packages/cars/pipelines/sensor_to_dense_dsm/sensor_to_dense_dsm_pipeline.py", line 878, in run
    _ = self.rasterization_application.run(
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 361, in __exit__
    self.breakpoint()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 338, in breakpoint
    raise RuntimeError(traceback.format_exc()) from exc
RuntimeError: Traceback (most recent call last):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 334, in breakpoint
    self.compute_futures()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 288, in compute_futures
    for future_obj in self.cluster.future_iterator(future_objects):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/cluster/abstract_dask_cluster.py", line 221, in __next__
    fut, res = self.dask_a_c.__next__()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5391, in __next__
    return self._get_and_raise()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5380, in _get_and_raise
    raise exc.with_traceback(tb)
distributed.scheduler.KilledWorker: Attempted to run task wrapper_builder-25fe4954-1af9-423d-b27d-f042fba94b24 on 10 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:45937. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.

23-09-08 02:54:14 :: WARNING :: Received heartbeat from unregistered worker 'tcp://127.0.0.1:42195'.
23-09-08 02:54:14 :: WARNING :: Received heartbeat from unregistered worker 'tcp://127.0.0.1:42195'.
{
    "inputs": {
        "sensors" : {
            "one": {
                "image": "img1.tif",
                "geomodel": "img1.geom",
                "no_data": 0
            },
            "two": {
                "image": "img2.tif",
                "geomodel": "img2.geom"
            }
        },
        "roi": "roi/double_half_roi.shp",
        "pairing": [["one", "two"]],
        "initial_elevation": "srtm_dir/n41_e129_1arc_v3.tif"
    },
    "applications": {
        "point_cloud_rasterization": {
            "method": "simple_gaussian",
            "resolution": 2.0
        }
    },
    "output": {
        "out_dir": "outresults_grid_PAN"
    },
    "orchestrator": {
        "mode": "mp",
        "nb_workers": 4
    }
}
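
For reference, a minimal orchestrator sketch that could be experimented with against the "exceeded 95% memory budget" restarts in the log above (the values are illustrative, and the effect of max_ram_per_worker, a key that also appears in the used_conf posted later in this thread, is an assumption based on its name):

    "orchestrator": {
        "mode": "local_dask",
        "nb_workers": 2,
        "max_ram_per_worker": 4000
    }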

I would greatly appreciate your help in resolving this issue. Thank you for creating such an amazing program. I'm truly grateful.

dyoussef commented 1 year ago

Hello silverbean-j,

Can you send us your used_conf.json? I would like to check the size of the disparity range. We currently have a bug when the disparity range is too large: we are in the process of fixing it, and perhaps this could solve your problem.

Thanks in advance. Regards, David

silverbean-j commented 1 year ago

Below is the content of the "used_conf" file. Thank you so much. If the issue gets resolved, could you please let me know?

I also have an additional question: is it possible to crop the images for DSM processing? If a DSM can be created from cropped images, do I need to adjust the RPC (geometry) file to match the cropped image?

{
  "pipeline": "sensors_to_dense_dsm",
  "orchestrator": {
    "mode": "local_dask",
    "profiling": {
      "activated": false,
      "mode": "time",
      "loop_testing": false
    },
    "use_memory_logger": false,
    "nb_workers": 2,
    "max_ram_per_worker": 2000,
    "walltime": "00:59:00",
    "config_name": "unknown",
    "activate_dashboard": false,
    "python": null
  },
  "inputs": {
    "sensors": {
      "one": {
        "image": "/data/img1.tif",
        "geomodel": "/data/img1.geom",
        "no_data": 0,
        "color": "/data/img1.tif",
        "geomodel_type": "RPC",
        "geomodel_filters": null,
        "mask": null,
        "classification": null
      },
      "two": {
        "image": "/data/img2.tif",
        "geomodel": "/data/img2.geom",
        "no_data": 0,
        "color": "/data/img2.tif",
        "geomodel_type": "RPC",
        "geomodel_filters": null,
        "mask": null,
        "classification": null
      }
    },
    "pairing": [
      [
        "one",
        "two"
      ]
    ],
    "initial_elevation": "/data/srtm_dir/n41_e129_1arc_v3.tif",
    "epsg": null,
    "default_alt": 0,
    "roi": null,
    "check_inputs": false,
    "use_epipolar_a_priori": false,
    "epipolar_a_priori": {
      "one_two": {
        "grid_correction": [
          [
            [
              -1.236149945798448,
              2.9037172180991558e-05
            ],
            [
              9.750586874733337e-05,
              0.0
            ]
          ],
          [
            [
              -2.6227521261886264,
              6.17161827487914e-05
            ],
            [
              0.00021826094081499556,
              0.0
            ]
          ]
        ],
        "disparity_range": [
          -1387.7531640625,
          1374.8098046875
        ]
      }
    },
    "geoid": "/cars/venv/lib/python3.8/site-packages/cars/pipelines/sensor_to_dense_dsm/../../conf/geoid/egm96.grd"
  },
  "output": {
    "out_dir": "/data/outresult_SRTM_Pan",
    "dsm_basename": "dsm.tif",
    "clr_basename": "clr.tif",
    "info_basename": "content.json"
  },
  "applications": {
    "grid_generation": {
      "method": "epipolar",
      "epi_step": 30,
      "save_grids": false,
      "geometry_loader": "OTBGeometry"
    },
    "resampling": {
      "method": "bicubic",
      "epi_tile_size": 500,
      "save_epipolar_image": false,
      "save_epipolar_color": false
    },
    "holes_detection": {
      "method": "cloud_to_bbox"
    },
    "dense_matches_filling.1": {
      "method": "plane",
      "interpolation_type": "pandora",
      "interpolation_method": "mc_cnn",
      "max_search_distance": 100,
      "smoothing_iterations": 1,
      "ignore_nodata_at_disp_mask_borders": true,
      "ignore_zero_fill_disp_mask_values": true,
      "ignore_extrema_disp_values": true,
      "nb_pix": 20,
      "percent_to_erode": 0.2,
      "classification": null,
      "save_disparity_map": false
    },
    "dense_matches_filling.2": {
      "method": "zero_padding",
      "classification": null,
      "save_disparity_map": false
    },
    "sparse_matching": {
      "method": "sift",
      "disparity_margin": 0.02,
      "elevation_delta_lower_bound": -1000,
      "elevation_delta_upper_bound": 1000,
      "epipolar_error_upper_bound": 10.0,
      "epipolar_error_maximum_bias": 0.0,
      "disparity_outliers_rejection_percent": 0.1,
      "minimum_nb_matches": 100,
      "sift_matching_threshold": 0.6,
      "sift_n_octave": 8,
      "sift_n_scale_per_octave": 3,
      "sift_peak_threshold": 20.0,
      "sift_edge_threshold": 5.0,
      "sift_magnification": 2.0,
      "sift_back_matching": true,
      "save_matches": false
    },
    "dense_matching": {
      "method": "census_sgm",
      "min_epi_tile_size": 300,
      "max_epi_tile_size": 1500,
      "epipolar_tile_margin_in_percent": 60,
      "min_elevation_offset": null,
      "max_elevation_offset": null,
      "disp_min_threshold": null,
      "disp_max_threshold": null,
      "generate_performance_map": false,
      "perf_eta_max_ambiguity": 0.99,
      "perf_eta_max_risk": 0.25,
      "perf_eta_step": 0.04,
      "perf_ambiguity_threshold": 0.6,
      "save_disparity_map": false,
      "loader": "pandora",
      "loader_conf": {
        "input": {
          "nodata_left": -9999,
          "nodata_right": -9999
        },
        "pipeline": {
          "right_disp_map": {
            "method": "accurate"
          },
          "matching_cost": {
            "matching_cost_method": "census",
            "window_size": 5,
            "subpix": 1
          },
          "optimization": {
            "optimization_method": "sgm",
            "overcounting": false,
            "penalty": {
              "P1": 8,
              "P2": 32,
              "p2_method": "constant",
              "penalty_method": "sgm_penalty"
            },
            "sgm_version": "c++",
            "min_cost_paths": false,
            "use_confidence": false
          },
          "cost_volume_confidence": {
            "confidence_method": "ambiguity",
            "eta_max": 0.7,
            "eta_step": 0.01,
            "indicator": ""
          },
          "disparity": {
            "disparity_method": "wta",
            "invalid_disparity": NaN
          },
          "refinement": {
            "refinement_method": "vfit"
          },
          "filter": {
            "filter_method": "median",
            "filter_size": 3
          },
          "validation": {
            "validation_method": "cross_checking",
            "cross_checking_threshold": 1.0
          }
        }
      }
    },
    "triangulation": {
      "method": "line_of_sight_intersection",
      "use_geoid_alt": false,
      "snap_to_img1": false,
      "add_msk_info": true,
      "geometry_loader": "OTBGeometry",
      "save_points_cloud": false
    },
    "point_cloud_fusion": {
      "method": "mapping_to_terrain_tiles",
      "save_points_cloud_as_laz": false,
      "save_points_cloud_as_csv": false
    },
    "point_cloud_outliers_removing.1": {
      "method": "small_components",
      "save_points_cloud_as_laz": false,
      "save_points_cloud_as_csv": false,
      "activated": false,
      "on_ground_margin": 11,
      "connection_distance": 3.0,
      "nb_points_threshold": 50,
      "clusters_distance_threshold": null
    },
    "point_cloud_outliers_removing.2": {
      "method": "statistical",
      "save_points_cloud_as_laz": false,
      "save_points_cloud_as_csv": false,
      "activated": false,
      "k": 50,
      "std_dev_factor": 5.0
    },
    "point_cloud_rasterization": {
      "method": "simple_gaussian",
      "resolution": 2.0,
      "dsm_radius": 1,
      "sigma": null,
      "grid_points_division_factor": null,
      "dsm_no_data": -32768,
      "color_no_data": 0,
      "color_dtype": "uint16",
      "msk_no_data": 65535,
      "save_color": true,
      "save_stats": false,
      "save_msk": false,
      "save_classif": false,
      "save_dsm": true,
      "save_confidence": false,
      "save_source_pc": false,
      "compute_all": false
    }
  }
}

dyoussef commented 1 year ago

As I suspected, the disparity range is very wide. I'll have to check whether this is due to the bug or whether there's a problem in the data: could you please share with us the contents of the img1.geom file, for example? Could you also run cars with "--loglevel INFO" and send us the logs?
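
While that is being checked, here is a hedged workaround sketch for limiting the search range. It assumes that min_elevation_offset / max_elevation_offset, which appear as null in the used_conf above, bound the elevation search and therefore the derived disparity range; the values below are purely illustrative and should reflect the expected relief of the scene:

  "applications": {
    "dense_matching": {
      "method": "census_sgm",
      "min_elevation_offset": -100,
      "max_elevation_offset": 100
    }
  }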

To answer your question, you can produce a DSM on an extract: https://cars.readthedocs.io/en/stable/user_guide/input_image_preparation.html#make-input-roi-images. The extract information is then contained in the image: no need to modify the RPCs.
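
For illustration, a minimal sketch of what the inputs block might look like once cropped images have been produced by following the link above (the file names are hypothetical, and the geom files are left untouched since, as noted, the RPCs do not need to be modified):

    "inputs": {
        "sensors": {
            "one": {
                "image": "img1_crop.tif",
                "geomodel": "img1.geom",
                "no_data": 0
            },
            "two": {
                "image": "img2_crop.tif",
                "geomodel": "img2.geom"
            }
        },
        "pairing": [["one", "two"]],
        "initial_elevation": "srtm_dir/n41_e129_1arc_v3.tif"
    }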

silverbean-j commented 1 year ago

I'm truly grateful for your guidance on this. Just to clarify, when you mention 'from file', you're referring to the geom file, right? Below are the contents of the geom file and of the --loglevel INFO log.

adjustment_0.adj_param_0.center:  0
adjustment_0.adj_param_0.description:  intrack_offset
adjustment_0.adj_param_0.lock_flag:  0
adjustment_0.adj_param_0.parameter:  0
adjustment_0.adj_param_0.sigma:  200
adjustment_0.adj_param_0.units:  pixel
adjustment_0.adj_param_1.center:  0
adjustment_0.adj_param_1.description:  crtrack_offset
adjustment_0.adj_param_1.lock_flag:  0
adjustment_0.adj_param_1.parameter:  0
adjustment_0.adj_param_1.sigma:  200
adjustment_0.adj_param_1.units:  pixel
adjustment_0.adj_param_2.center:  0
adjustment_0.adj_param_2.description:  intrack_scale
adjustment_0.adj_param_2.lock_flag:  0
adjustment_0.adj_param_2.parameter:  0
adjustment_0.adj_param_2.sigma:  200
adjustment_0.adj_param_2.units:  unknown
adjustment_0.adj_param_3.center:  0
adjustment_0.adj_param_3.description:  crtrack_scale
adjustment_0.adj_param_3.lock_flag:  0
adjustment_0.adj_param_3.parameter:  0
adjustment_0.adj_param_3.sigma:  200
adjustment_0.adj_param_3.units:  unknown
adjustment_0.adj_param_4.center:  0
adjustment_0.adj_param_4.description:  map_rotation
adjustment_0.adj_param_4.lock_flag:  0
adjustment_0.adj_param_4.parameter:  0
adjustment_0.adj_param_4.sigma:  0.1
adjustment_0.adj_param_4.units:  degrees
adjustment_0.description:  Initial adjustment
adjustment_0.dirty_flag:  0
adjustment_0.number_of_params:  5
bias_error:  0
ce90_absolute:  0
ce90_relative:  0
current_adjustment:  0
height_off:  194.20
height_scale:  239.69
image_id:  
lat_off:  41.78967736
lat_scale:  0.06526190
line_den_coeff_00:  +1.000000000000000e+00
line_den_coeff_01:  -1.885586056222385e-04
line_den_coeff_02:  +8.369411581526143e-04
line_den_coeff_03:  +4.432659897077446e-06
line_den_coeff_04:  -6.420996140170980e-07
line_den_coeff_05:  -8.361737416351323e-08
line_den_coeff_06:  +5.635094529618147e-08
line_den_coeff_07:  +1.714782985535211e-05
line_den_coeff_08:  -6.323872044128716e-07
line_den_coeff_09:  +9.228987630686163e-07
line_den_coeff_10:  -1.260478522369771e-07
line_den_coeff_11:  -6.836192176941573e-07
line_den_coeff_12:  -4.364713136375682e-06
line_den_coeff_13:  -6.209008363116076e-08
line_den_coeff_14:  +7.497349451835239e-07
line_den_coeff_15:  +4.672866832606874e-06
line_den_coeff_16:  -3.748047591921314e-09
line_den_coeff_17:  +3.656176973771694e-08
line_den_coeff_18:  +2.024897021121246e-07
line_den_coeff_19:  +2.814513018232602e-09
line_num_coeff_00:  +1.779856939945640e-03
line_num_coeff_01:  +2.587149428460416e-01
line_num_coeff_02:  -1.184142771701485e+00
line_num_coeff_03:  -1.584433504744561e-02
line_num_coeff_04:  +6.136124307758556e-05
line_num_coeff_05:  +4.002444936350438e-06
line_num_coeff_06:  -1.919943939942766e-05
line_num_coeff_07:  -7.687045010519261e-04
line_num_coeff_08:  -1.058689078428160e-03
line_num_coeff_09:  +5.671634307968447e-07
line_num_coeff_10:  -2.344571257315187e-07
line_num_coeff_11:  +8.019583474896513e-05
line_num_coeff_12:  +9.657358210072650e-06
line_num_coeff_13:  +1.128440741817402e-07
line_num_coeff_14:  +2.859774405428269e-05
line_num_coeff_15:  +8.660328229188953e-07
line_num_coeff_16:  -5.215826571104327e-07
line_num_coeff_17:  -9.455020132790697e-07
line_num_coeff_18:  -2.560170601544195e-08
line_num_coeff_19:  +2.647312863899553e-08
line_off:  10160.00
line_scale:  10159.50
ll_lat:  41.7253248
ll_lon:  129.6790315
long_off:  129.74115305
long_scale:  0.09526013
lr_lat:  41.7488488
lr_lon:  129.8361170
meters_per_pixel_x:  0.591
meters_per_pixel_y:  0.549
number_lines:  20320
number_of_adjustments:  1
number_samples:  24264
polynomial_format:  B
rand_error:  0
rect:  0 0 24263 20319
ref_point_hgt:  140
ref_point_lat:  41.7898107
ref_point_line:  6821.5
ref_point_lon:  129.7413882
ref_point_samp:  19999.5
samp_den_coeff_00:  +1.000000000000000e+00
samp_den_coeff_01:  +8.472727025097737e-04
samp_den_coeff_02:  +5.108742426940350e-04
samp_den_coeff_03:  -2.962748527351130e-04
samp_den_coeff_04:  +1.080944028152625e-04
samp_den_coeff_05:  -1.928670643678194e-06
samp_den_coeff_06:  +2.229311044386351e-07
samp_den_coeff_07:  +1.791392967140022e-04
samp_den_coeff_08:  -1.129372296488947e-05
samp_den_coeff_09:  -4.901486162390297e-05
samp_den_coeff_10:  +1.558272986199675e-07
samp_den_coeff_11:  +2.768333832129347e-06
samp_den_coeff_12:  +1.348866870173111e-07
samp_den_coeff_13:  +9.268735821478878e-08
samp_den_coeff_14:  +1.104585228645441e-06
samp_den_coeff_15:  -1.076811878100662e-07
samp_den_coeff_16:  -4.086712186175179e-08
samp_den_coeff_17:  +3.954403791138423e-07
samp_den_coeff_18:  +2.708663172154015e-08
samp_den_coeff_19:  +3.658835356629752e-08
samp_num_coeff_00:  -3.361120867645482e-03
samp_num_coeff_01:  +1.158653888548121e+00
samp_num_coeff_02:  +2.450567170335903e-01
samp_num_coeff_03:  -2.609199256558959e-03
samp_num_coeff_04:  +2.508804340402876e-04
samp_num_coeff_05:  +2.380113890229474e-04
samp_num_coeff_06:  +4.751287834858725e-05
samp_num_coeff_07:  +2.577251429614201e-03
samp_num_coeff_08:  +9.137478753241653e-05
samp_num_coeff_09:  -2.783659833332249e-07
samp_num_coeff_10:  +1.242826411904759e-06
samp_num_coeff_11:  -1.898385689613382e-04
samp_num_coeff_12:  -3.891109864054375e-05
samp_num_coeff_13:  -5.715155740694471e-05
samp_num_coeff_14:  -8.715254621409740e-05
samp_num_coeff_15:  -1.294215695073256e-05
samp_num_coeff_16:  -1.204905823649661e-05
samp_num_coeff_17:  +1.482306185374712e-06
samp_num_coeff_18:  +8.641776962879437e-08
samp_num_coeff_19:  +1.353784205407734e-07
samp_off:  12132.00
samp_scale:  12131.50
sensor:  
type:  ossimRpcModel
ul_lat:  41.8306749
ul_lon:  129.6463758
ur_lat:  41.8542247
ur_lon:  129.8037190
23-09-13 05:26:09 :: INFO :: The AbstractCluster local_dask will be used
23-09-13 05:26:09 :: INFO :: Save DASK global merged config for debug (1: $DASK_DIR if exists, 2: ~/.config/dask/, ... ) 
23-09-13 05:26:09 :: INFO :: Local cluster with 2 workers started
23-09-13 05:26:09 :: INFO :: Received 1 stereo pairs configurations
23-09-13 05:26:09 :: INFO :: The AbstractGeometry OTBGeometry loader will be used
23-09-13 05:26:09 :: INFO :: The AbstractGeometry OTBGeometry loader will be used
23-09-13 05:26:09 :: INFO :: Left satellite acquisition angles: Azimuth angle: 156.3°, Elevation angle: 67.8°
23-09-13 05:26:09 :: INFO :: Right satellite acquisition angles: Azimuth angle: 77.3°, Elevation angle: 54.6°
23-09-13 05:26:09 :: INFO :: Stereo satellite convergence angle from ground: 37.2°
23-09-13 05:26:09 :: INFO :: Generating epipolar rectification grid ...
23-09-13 05:26:09 :: INFO :: The AbstractGeometry OTBGeometry loader will be used
23-09-13 05:26:15 :: INFO :: Size of epipolar images: 31311x30017 pixels
23-09-13 05:26:15 :: INFO :: Disparity to altitude factor: 0.7427135589408688 m/pixel
23-09-13 05:26:15 :: INFO :: Margins added to right region for matching: [10 10 10 10]
23-09-13 05:26:15 :: INFO :: Size of epipolar image: [0, 0, 31311, 30017]
23-09-13 05:26:15 :: INFO :: Optimal tile size for epipolar regions: 500x500 pixels
23-09-13 05:26:15 :: INFO :: Epipolar image will be processed in 3843 splits
23-09-13 05:26:15 :: INFO :: Number of bands in color image: 1
23-09-13 05:26:15 :: INFO :: Number of tiles in epipolar resampling: row: 61 col: 63
23-09-13 05:26:15 :: INFO :: Number of left epipolar image tiles outside left sensor image and removed: 1741
23-09-13 05:26:15 :: INFO :: Number of right epipolar image tiles outside right sensor image and removed: 1325
23-09-13 05:26:17 :: INFO :: Compute bbox: number tiles: 3843
23-09-13 05:26:17 :: INFO :: Generate disparity: Number tiles: 3843
23-09-13 05:26:20 :: INFO :: Compute delayed ...
23-09-13 05:26:30 :: INFO :: Wait for futures results ...
23-09-13 05:26:30 :: PROGRESS :: Data list to process: [ epi_matches_left ] ...

dyoussef commented 1 year ago

"from" did mean "geom": big typing error... my mistake.

There will be important information after the line "23-09-13 05:26:30 :: PROGRESS :: Data list to process: [ epi_matches_left ] ...". I'd like you to send that part to me too.

The first logs look good, as do the geom files. I think that's the bug we're working on: I'll keep you posted.

In the meantime, you can follow the documentation to extract a smaller image and produce the digital surface model on a smaller area. The problem shouldn't arise in that case.

silverbean-j commented 1 year ago

I am currently in the process of cropping the image to a smaller size and extracting the DSM (Digital Surface Model). I am working on defining the appropriate size for the region. If I can find a suitable size without encountering any issues, I will certainly inform you.

Here is the complete log file for your reference. I apologize for the inconvenience of having to cut it and send it separately. Thank you very much for your response; I truly appreciate it!

2023-09-13 5:31:54 :: INFO :: The inputs consistency will not be checked. To enable the inputs checking, add check_inputs: TRUE to your input configuration                                  
2023-09-13 5:31:54 :: INFO :: Grid generation method not specified, default  epipolar is used                                           
2023-09-13 5:31:54 :: INFO :: The GridGeneration epipolar application will be used                                              
2023-09-13 5:31:54 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:54 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:54 :: INFO :: Resampling method not specified, default bicubic is used                                             
2023-09-13 5:31:54 :: INFO :: The Resampling bicubic application will be used                                              
2023-09-13 5:31:54 :: INFO :: Rasterisation method not specified, default cloud_to_bbox is used                                             
2023-09-13 5:31:54 :: INFO :: The HolesDetection cloud_to_bbox application will be used                                              
2023-09-13 5:31:54 :: INFO :: The DenseMatchingFilling plane application will be used
2023-09-13 5:31:54 :: INFO :: The DenseMatchingFilling zero_padding application will be used
2023-09-13 5:31:54 :: INFO :: Sparse Matching method not specified, default  sift is used                                           
2023-09-13 5:31:54 :: INFO :: The SparseMatching sift application will be used                                              
2023-09-13 5:31:54 :: INFO :: Dense Matching method not specified, default census_sgm is used                                            
2023-09-13 5:31:54 :: INFO :: The AbstractDenseMatching census_sgm application will be used                                              
2023-09-13 5:31:55 :: INFO :: Triangulation method not specified, default  line_of_sight_intersection is used                                            
2023-09-13 5:31:55 :: INFO :: The Triangulation line_of_sight_intersection application will be used                                              
2023-09-13 5:31:55 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:55 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:55 :: INFO :: Fusion method not specified, default mapping_to_terrain_tiles is used                                             
2023-09-13 5:31:55 :: INFO :: The PointCloudFusion mapping_to_terrain_tiles application will be used                                              
2023-09-13 5:31:55 :: INFO :: The PointCloudOutliersRemoving small_components application will be used                                              
2023-09-13 5:31:55 :: INFO :: The PointCloudOutliersRemoving statistical application will be used                                              
2023-09-13 5:31:55 :: INFO :: The PointCloudRasterization simple_gaussian application will be used                                              
2023-09-13 5:31:55 :: PROGRESS :: CARS pipeline is started.                                                 
2023-09-13 5:31:55 :: INFO :: The AbstractCluster local_dask will be used                                               
2023-09-13 5:31:55 :: INFO :: Save DASK global merged config for debug (1: $DASK_DIR if exists, 2: ~/.config/dask/, ... )
2023-09-13 5:31:55 :: INFO :: Local cluster with 2 workers started                                               
2023-09-13 5:31:56 :: INFO :: Received 1 stereo pairs configurations                                                
2023-09-13 5:31:56 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:56 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 5:31:56 :: INFO :: Left satellite acquisition angles: Azimuth angle: 156.3°, Elevation angle: 67.8°                                           
2023-09-13 5:31:56 :: INFO :: Right satellite acquisition angles: Azimuth angle: 77.3°, Elevation angle: 54.6°                                           
2023-09-13 5:31:56 :: INFO :: Stereo satellite convergence angle from ground: 37.2°                                              
2023-09-13 5:31:56 :: INFO :: Generating epipolar rectification grid ...                                                
2023-09-13 5:31:56 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
Computing epipolar grids ...: 100% [**************************************************] (8s)                                                   

2023-09-13 5:32:04 :: INFO :: Size of epipolar images: 31309x30011 pixels                                               
2023-09-13 5:32:04 :: INFO :: Disparity to altitude factor: 0.742552814 m/pixel                                               
2023-09-13 5:32:04 :: INFO :: Margins added to right region for matching: [10 10 10 10]                                          
2023-09-13 5:32:04 :: INFO :: Size of epipolar image: [0, 0, 31309, 30011]                                             
2023-09-13 5:32:04 :: INFO :: Optimal tile size for epipolar regions: 500x500 pixels                                             
2023-09-13 5:32:04 :: INFO :: Epipolar image will be processed in 3843 splits                                             
2023-09-13 5:32:04 :: INFO :: Number of bands in color image: 1                                              
2023-09-13 5:32:04 :: INFO :: Number of tiles in epipolar resampling: row: 61 col: 63                                           
2023-09-13 5:32:04 :: INFO :: Number of left epipolar image tiles outside left sensor image and removed: 1740                                        
2023-09-13 5:32:04 :: INFO :: Number of right epipolar image tiles outside right sensor image and removed: 1302                                        
2023-09-13 5:32:06 :: INFO :: Compute bbox: number tiles: 3843                                                
2023-09-13 5:32:06 :: INFO :: Generate disparity: Number tiles: 3843                                                
2023-09-13 5:32:09 :: INFO :: Compute delayed ...                                                  
2023-09-13 5:32:19 :: INFO :: Wait for futures results ...                                                
2023-09-13 5:32:19 :: PROGRESS :: Data list to process: [ epi_matches_left ] ...                                             
UserWarning: Sending large graph of size 11.87 MiB.                                                  
This may cause some slowdown.                                                     
Consider scattering data ahead of time and using futures.                                                 
  warnings.warn(                                                       
2023-09-13 06:05:10,217 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.47 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:05:15,526 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.53 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:05:20,403 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.68 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:05:25,752 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.76 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:05:30,453 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.91 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:05:35,795 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.02 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:02,579 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.07 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:09,870 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.18 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:13,063 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.29 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:20,170 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.42 GiB -- Worker memory limit: 25.06 GiB
2023-09-13 06:06:23,127 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.54 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:30,375 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.61 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:33,128 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.74 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:40,428 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.86 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:43,502 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 18.96 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:50,535 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.06 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:06:53,875 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.19 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:00,776 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.26 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:03,891 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.42 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:11,061 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.5 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:33,466 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.55 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:43,558 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.75 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:44,354 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.63 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:53,929 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.98 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:07:54,735 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.89 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:08:24,289 :: distributed.worker.memory - WARNING - gc.collect() took 29.582s. This is usually a sign that some tasks handle too many Python objects at the same time. Rechunking the work into smaller tasks might help.                       
2023-09-13 06:08:24,289 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.07 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:08:24,334 :: distributed.worker.memory - WARNING - Worker is at 79% memory usage. Resuming worker. Process memory: 20 GiB -- Worker memory limit: 25.06 GiB                                 
2023-09-13 06:08:24,357 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 20 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:08:24,392 :: distributed.worker.memory - WARNING - gc.collect() took 24.696s. This is usually a sign that some tasks handle too many Python objects at the same time. Rechunking the work into smaller tasks might help.                       
2023-09-13 06:08:24,392 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.05 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:08:24,445 :: distributed.worker.memory - WARNING - Worker is at 79% memory usage. Resuming worker. Process memory: 19.98 GiB -- Worker memory limit: 25.06 GiB                                 
2023-09-13 06:08:24,457 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.98 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:08:24,966 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.1 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:08:25,029 :: distributed.worker.memory - WARNING - Worker is at 79% memory usage. Resuming worker. Process memory: 20.02 GiB -- Worker memory limit: 25.06 GiB                                 
2023-09-13 06:08:25,225 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.07 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:08:25,272 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.1 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:08:25,435 :: distributed.worker.memory - WARNING - Worker is at 79% memory usage. Resuming worker. Process memory: 20.03 GiB -- Worker memory limit: 25.06 GiB                                 
2023-09-13 06:08:26,249 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.06 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:09:43,131 :: distributed.worker.memory - WARNING - Worker is at 23% memory usage. Resuming worker. Process memory: 5.97 GiB -- Worker memory limit: 25.06 GiB                                 
2023-09-13 06:09:44,169 :: distributed.worker.memory - WARNING - Worker is at 23% memory usage. Resuming worker. Process memory: 5.94 GiB -- Worker memory limit: 25.06 GiB                                 
 98%|█████████▊| 2562/2604 [59:43<01:07]
/cars/venv/lib/python3.8/site-packages/distributed/client.py:3108: UserWarning: Sending large graph of size 22.64 MiB.
This may cause some slowdown.                                                     
Consider scattering data ahead of time and using futures.                                                 
  warnings.warn(                                                       
2023-09-13 06:35:49,235 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.75 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:35:51,542 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.76 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:35:59,325 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.03 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:01,626 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.04 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:09,327 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.02 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:11,628 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.04 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:19,425 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.03 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:21,727 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.04 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:29,427 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.04 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:31,730 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 19.06 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:36,825 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.14 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:36:39,525 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.45 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:36:39,626 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.09 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:36:41,826 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 23.25 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:37:51,432 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.75 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:37:51,686 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.72 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:37:53,371 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.1 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:37:53,598 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.08 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:38:23,609 :: distributed.worker - ERROR - Scheduler was unaware of this worker 'tcp://127.0.0.1:37921'. Shutting down.                                          
2023-09-13 06:39:32,483 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.76 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:39:34,933 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.19 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:40:58,925 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.79 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:41:00,775 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.14 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:42:03,796 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.74 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:42:05,645 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.06 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:43:18,592 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.69 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:43:20,440 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.04 GiB -- Worker memory limit: 25.06 GiB                                
2023-09-13 06:44:35,866 :: distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 17.68 GiB -- Worker memory limit: 25.06 GiB              
2023-09-13 06:44:37,813 :: distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker.  Process memory: 20.18 GiB -- Worker memory limit: 25.06 GiB                                
100%|█████████▉| 2603/2604 [1:01:16<00:09,  9.93s/it]
23-09-13 6:33:36 :: INFO :: Close files ...
100%|██████████| 2604/2604 [1:01:16<00:00,  1.41s/it]
23-09-13 6:33:36 :: INFO :: Raw number of matches found: 4466 matches
2023-09-13 6:33:36 :: INFO :: 2526 matches discarded because their epipolar error is greater than epipolar_error_upper_bound = 10 pix
2023-09-13 6:33:36 :: INFO :: Number of matches kept for epipolar error correction: 1940 matches                                           
2023-09-13 6:33:36 :: INFO :: Epipolar error before correction: mean = 3.094 pix., standard deviation = 2.441 pix., max = 9.979 pix.                                    
2023-09-13 6:34:55 :: INFO :: Epipolar error after correction: mean = 0 pix., standard deviation = 1.945 pix., max = 14.268 pix.                                    
2023-09-13 6:34:55 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:34:55 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:34:56 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:34:56 :: INFO :: EPSG code: 32652                                                  
2023-09-13 6:34:56 :: INFO :: Disparity range with margin: [-1387.753 pix., 1374.81 pix.] (margin = 53.126 pix.)                                         
2023-09-13 6:34:56 :: INFO :: Equivalent range in meters: [-1030.480 m, 1020.869 m] (margin = 39.449 m)                                         
2023-09-13 6:34:56 :: INFO :: Disparity range for current pair: [-1387.753 pix., 1374.81 pix.] (or [-1030.480 m., 1020.869 m.])                                       
2023-09-13 6:34:56 :: INFO :: Size of epipolar image: [0, 0, 31309, 30011]                                             
2023-09-13 6:34:56 :: INFO :: Optimal tile size for epipolar regions: 300x300 pixels                                             
2023-09-13 6:34:56 :: INFO :: Epipolar image will be processed in 10605 splits                                             
2023-09-13 6:34:56 :: INFO :: Number of bands in color image: 1                                              
2023-09-13 6:34:56 :: INFO :: Number of tiles in epipolar resampling: row: 101 col: 105                                           
2023-09-13 6:34:57 :: INFO :: Number of left epipolar image tiles outside left sensor image and removed: 4902                                        
2023-09-13 6:34:57 :: INFO :: Number of right epipolar image tiles outside right sensor image and removed: 3661                                        
2023-09-13 6:35:00 :: INFO :: Compute disparity: number tiles: 10605                                                
2023-09-13 6:35:02 :: INFO :: Disparity holes filling was not activated                                               
2023-09-13 6:35:02 :: INFO :: Disparity holes filling was not activated                                               
2023-09-13 6:35:02 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:02 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:03 :: INFO :: EPSG code: 32652                                                  
2023-09-13 6:35:03 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:03 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:06 :: INFO :: Computing images envelopes and their intersection                                               
2023-09-13 6:35:06 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:06 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:07 :: INFO :: The AbstractGeometry OTBGeometry loader will be used                                              
2023-09-13 6:35:07 :: INFO :: Terrain area covered: 634215302.1 square meters (or square degrees)                                            
2023-09-13 6:35:07 :: INFO :: Terrain bounding box : [553854.0, 569492.0] x [4619544.0, 4633890.0]                                            
2023-09-13 6:35:07 :: INFO :: Total terrain bounding box : [553854.0, 569492.0] x [4619544.0, 4633890.0]                                           
2023-09-13 6:35:07 :: INFO :: Optimal terrain tile size: 123x123 pixels                                               
2023-09-13 6:35:07 :: INFO :: Number of tiles in cloud fusion :row : 64 col : 59                                         
2023-09-13 6:35:07 :: INFO :: Point clouds: Merged points number: 3776                                               
2023-09-13 6:35:11 :: INFO :: Submitting 3776 tasks to dask                                                
2023-09-13 6:35:11 :: INFO :: Number of epipolar tiles for each terrain tile (counter): [(14, 1), (16, 4), (18, 13), (21, 1), (24, 14), (27, 8), (28, 2), (30, 10), (32, 7), (35, 1), (36, 16), (40, 10), (42, 15), (45, 8), (48, 29), (49, 4), (50, 11), (54, 595), (56, 23), (60, 335), (63, 1542), (70, 794)]
2023-09-13 6:35:11 :: INFO :: Average number of epipolar tiles for each terrain tile: 62                                           
2023-09-13 6:35:11 :: INFO :: Max number of epipolar tiles for each terrain tile: 70                                           
2023-09-13 6:35:11 :: INFO :: DSM output image size: 7819x7173 pixels                                               
2023-09-13 6:35:11 :: INFO :: Number of tiles in cloud rasterization: row: 59 col: 64                                           
2023-09-13 6:35:12 :: INFO :: Compute delayed ...                                                  
2023-09-13 6:35:24 :: INFO :: Wait for futures results ...                                                
2023-09-13 6:35:24 :: PROGRESS :: Data list to process: [ color , dsm ] ...                                           
  0%|          | 0/3443 [00:00<?, ?it/s]23-09-13 6:36:39 :: WARNING :: Worker tcp://127.0.0.1:35911 (pid=31) exceeded 95% memory budget. Restarting...                              
2023-09-13 6:36:40 :: WARNING :: Restarting worker                                                   
2023-09-13 6:36:43 :: WARNING :: Worker tcp://127.0.0.1:36703 (pid=33) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:36:46 :: WARNING :: Restarting worker                                                   
2023-09-13 6:37:58 :: WARNING :: Worker tcp://127.0.0.1:42701 (pid=96) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:37:58 :: WARNING :: Worker tcp://127.0.0.1:36889 (pid=106) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:38:03 :: WARNING :: Restarting worker                                                   
2023-09-13 6:38:03 :: WARNING :: Restarting worker                                                   
2023-09-13 6:38:23 :: WARNING :: Received heartbeat from unregistered worker 'tcp://127.0.0.1:37921'.                                               
2023-09-13 6:38:23 :: WARNING :: Received heartbeat from unregistered worker 'tcp://127.0.0.1:37921'.                                               
2023-09-13 6:39:36 :: WARNING :: Worker tcp://127.0.0.1:41723 (pid=118) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:39:37 :: WARNING :: Restarting worker                                                   
2023-09-13 6:39:46 :: WARNING :: Restarting worker                                                   
2023-09-13 6:39:54 :: WARNING :: Restarting worker                                                   
2023-09-13 6:41:03 :: WARNING :: Worker tcp://127.0.0.1:45645 (pid=159) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:41:03 :: WARNING :: Restarting worker                                                   
2023-09-13 6:42:08 :: WARNING :: Worker tcp://127.0.0.1:42299 (pid=169) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:42:08 :: WARNING :: Restarting worker                                                   
2023-09-13 6:42:17 :: WARNING :: Restarting worker                                                   
2023-09-13 6:43:22 :: WARNING :: Worker tcp://127.0.0.1:46415 (pid=189) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:43:23 :: WARNING :: Restarting worker                                                   
2023-09-13 6:43:32 :: WARNING :: Restarting worker                                                   
2023-09-13 6:44:40 :: WARNING :: Worker tcp://127.0.0.1:43049 (pid=209) exceeded 95% memory budget. Restarting...                                             
2023-09-13 6:44:41 :: ERROR :: CARS terminated with following error                                                
Traceback (most recent call last):                                                     
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 334, in breakpoint
    self.compute_futures()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 288, in compute_futures
    for future_obj in self.cluster.future_iterator(future_objects):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/cluster/abstract_dask_cluster.py", line 221, in __next__
    fut, res = self.dask_a_c.__next__()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5391, in __next__
    return self._get_and_raise()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5380, in _get_and_raise
    raise exc.with_traceback(tb)
distributed.scheduler.KilledWorker: Attempted to run task wrapper_builder-40ab3cfa-6840-4523-a7a4-0483ef401f3d on 10 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:43049. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.          
The above exception was the direct cause of the following exception:                                               
Traceback (most recent call last):                                                     
  File "/cars/venv/lib/python3.8/site-packages/cars/cars.py", line 175, in main_cli
    used_pipeline.run()
  File "/cars/venv/lib/python3.8/site-packages/cars/pipelines/sensor_to_dense_dsm/sensor_to_dense_dsm_pipeline.py", line 878, in run
    _ = self.rasterization_application.run(
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 361, in __exit__
    self.breakpoint()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 338, in breakpoint
    raise RuntimeError(traceback.format_exc()) from exc
RuntimeError: Traceback (most recent call last):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 334, in breakpoint
    self.compute_futures()
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/orchestrator.py", line 288, in compute_futures
    for future_obj in self.cluster.future_iterator(future_objects):
  File "/cars/venv/lib/python3.8/site-packages/cars/orchestrator/cluster/abstract_dask_cluster.py", line 221, in __next__
    fut, res = self.dask_a_c.__next__()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5391, in __next__
    return self._get_and_raise()
  File "/cars/venv/lib/python3.8/site-packages/distributed/client.py", line 5380, in _get_and_raise
    raise exc.with_traceback(tb)
distributed.scheduler.KilledWorker: Attempted to run task wrapper_builder-40ab3cfa-6840-4523-a7a4-0483ef401f3d on 10 different workers, but all those workers died while running it. The last worker that attempt to run the task was tcp://127.0.0.1:43049. Inspecting worker logs is often a good next step to diagnose what went wrong. For more information see https://distributed.dask.org/en/stable/killed.html.          
2023-09-13 6:44:41 :: WARNING :: Restarting worker                                                   
  0%|          | 0/3443 [09:18<?, ?it/s]
dyoussef commented 1 year ago

Hello,

The number of matches is very low. Since few homologous points are found, you can lower the SIFT peak threshold (the default setting may not suit your data, depending on the acquisition conditions for example).

The imprecision of the geometric models can also introduce a large epipolar bias. Please check it with --loglevel INFO to see these logs:

XX-XX-XX XX:XX:XX :: INFO :: Epipolar error before correction: mean = XX.XXX pix., standard deviation = X.XXX pix., max = XX.XXX pix.

You can then adjust the corresponding parameter: "epipolar_error_maximum_bias".

For example, here are the lines that could be added to the configuration:

        "sparse_matching": {
            "method": "sift",
            "sift_peak_threshold": 1.0,
            "epipolar_error_maximum_bias": 30.0
       }
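
If it helps to see where these keys live, here is a minimal sketch that patches an existing configuration file (the file name "configfile.json" and the surrounding "applications" layout are assumptions based on the usual CARS configuration; keep your own "inputs" and "output" sections as they are):

import json

# Hedged sketch: add/override the sparse matching settings in an existing
# CARS configuration file. "configfile.json" and the "applications" layout
# are assumptions; adapt them to your own setup.
with open("configfile.json") as f:
    conf = json.load(f)

conf.setdefault("applications", {})["sparse_matching"] = {
    "method": "sift",
    "sift_peak_threshold": 1.0,
    "epipolar_error_maximum_bias": 30.0,
}

with open("configfile.json", "w") as f:
    json.dump(conf, f, indent=4)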

I'd also recommend setting an initial elevation, such as the 90 m SRTM, to reduce the disparity range (knowing the latitude and longitude of your area, this code may help you download the SRTM tile):

import numpy as np

def get_srtm_tif_name(lat, lon):
    """Return the URL of the 5x5 degree SRTM tile covering (lat, lon)"""
    # longitude: tile columns 1 to 72 cover [-180, +180] in 5 degree steps
    tlon = (1 + np.floor((lon + 180) / 5)) % 72
    tlon = 72 if tlon == 0 else tlon

    # latitude: tile rows 1 to 24 cover [60, -60] in 5 degree steps
    tlat = 1 + np.floor((60 - lat) / 5)
    tlat = 24 if tlat == 25 else tlat

    srtm = "https://srtm.csi.cgiar.org/wp-content/uploads/files/srtm_5x5/TIFF/srtm_%02d_%02d.tif" % (tlon, tlat)
    return srtm

if __name__ == "__main__":
    print("Get SRTM tile corresponding to latitude and longitude couple")
    while 1:
        print(">> Latitude? ", end="")
        lat = input()
        print(">> Longitude? ", end="")
        lon = input()
        print(">> SRTM filename:", get_srtm_tif_name(int(lat), int(lon)))
        input()
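
As a hedged follow-up to the script above (the coordinates and file names below are placeholders; the "initial_elevation" key of the "inputs" section is the documented way to pass a DEM to CARS), you could download the returned tile and reference it in your configuration, for example:

import urllib.request

# Continues the script above; coordinates and file names are placeholders.
url = get_srtm_tif_name(37.0, 127.0)       # replace with your area's lat/lon
local_dem = "srtm_tile.tif"
urllib.request.urlretrieve(url, local_dem)  # fetch the tile locally

# Then point CARS at it in configfile.json, e.g.:
#   "inputs": {
#       ...,
#       "initial_elevation": "srtm_tile.tif"
#   }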

Regards, David

dyoussef commented 1 year ago

I think we've found the solution to your problem, so I suggest you close this issue: feel free to open a new one or re-open this one if you have any further questions.

Regards, David