tudelft3d / SUMS-Semantic-Urban-Mesh-Segmentation-public

SUMS: Semantic Urban Mesh Segmentation.
GNU General Public License v3.0
56 stars 14 forks

Cannot get the results from the competing method #20

Closed — HanyuanYU closed this issue 1 month ago

HanyuanYU commented 2 months ago

Hi Dr. Gao,

I ran the competing method following your instructions in https://github.com/tudelft3d/SUMS-Semantic-Urban-Mesh-Segmentation-public/issues/2#issuecomment-1040175852. After generating the sampled point clouds and setting operating_mode = Train_and_Test_config, an error is reported (screenshot: 2024-09-13 113432).

My config file is as follows:

//** Operation mode **//
//--- all modes: Pipeline, Mesh_feature_extraction, Train_config, Test_config, Train_and_Test_config, Data_evaluation_for_all_tiles_config, Save_mesh_features_for_visualization, Class_statistics, Generate_semantic_sampled_points, Moha_Verdi_SOTA_pipeline, Evaluation_SOTA ---//
operating_mode = Train_and_Test_config

//** Common parameters **//
//--- path of data ---//
root_path = D:/program/SUMS-Semantic-Urban-Mesh-Segmentation-public-main/data/

//--- path of 'seg_aug.py' ---//
seg_aug_py_path = D:/program/SUMS-Semantic-Urban-Mesh-Segmentation-public-main/SUMS_Win10_1.3.1/

//--- 0: RF (SUM paper), 1: SOTA (competition methods) ---//
processing_mode = 1

//--- Select data to process, make sure your data is in the corresponding folder ---//
process_train = true
process_test = true
process_predict = true
process_validate = true

//--- Use binary or ascii output ---//
use_binary = false

//--- Label definition name ---//
label_definition = label

//--- Class names; in the *.ply file, make sure that unlabelled is (-1), unclassified is (0), and other classes start from (1) ---//
labels_name = terrain,high_vegetation,building,water,car,boat

//--- Class colors are normalized within [0, 1] ---//
labels_color = 170,85,0 ; 0,255,0 ; 255,255,0 ; 0,255,255 ; 255,0,255 ; 0,0,153

//--- Labels ignored in classification; by default no labels are ignored ---//
ignored_labels_name = default

//** Mesh_feature_extraction **//
//--- The mesh should have texture, otherwise the classification will not perform very well ---//
with_texture = true

//--- Select intermediate data to save ---//
save_sampled_pointclouds = true
save_oversegmentation_mesh = true
save_tex_cloud = false

//--- For computing relative elevation (meters) ---//
multi_scale_ele_radius = 10.0,20.0,40.0
long_range_radius_default = default
local_ground_segs = default

//--- For generating the HSV histogram ---//
hsv_bins = 15,5,5

//--- Interior medial axis transform parameters ---//
mat_delta_convergance = default
mat_initialized_radius = default
mat_denoising_seperation_angle = default
mat_iteration_limit_number = default

//--- For custom or existing segments, make sure you have the field face_segment_id ---//
use_existing_mesh_segments_on_training = false
use_existing_mesh_segments_on_testing = false
use_existing_mesh_segments_on_predicting = false
use_existing_mesh_segments_on_validation = false

//--- Settings for texture color processing ---//
// If set to true, it consumes a lot of memory; if false, the face average color is used instead, which saves memory but gives a less accurate color feature.
// Recommendation: adaptive triangles: true; dense triangles: false.
use_face_pixels_color_aggregation_on_training = true
use_face_pixels_color_aggregation_on_testing = true
use_face_pixels_color_aggregation_on_predicting = true
use_face_pixels_color_aggregation_on_validation = true

//--- Region growing for mesh over-segmentation ---//
mesh_distance_to_plane = 0.5
mesh_accepted_angle = 90.0
mesh_minimum_region_size = 0.0
partition_folder_path = segments/
partition_prefixs = _mesh_seg

//--- Feature selection; check the end of this file for the feature dictionary. The multiscale elevations (use_mulsc_eles) correspond to (multi_scale_ele_radius) ---//
use_feas = 0,1,2,3
use_mulsc_eles = 0,1,2
use_basic_features = 0,1,2,3,4
use_eigen_features = 3,4,6,11
use_color_features = 3,4,5,6,10,11,12,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38

//--- Use sampled point clouds instead of triangle faces (only for merged meshes that have topological interstices). To compute relative elevation faster, a sparse point cloud is sampled by default. These settings apply to all datasets (train, test, predict, validate) ---//
is_pointclouds_exist = false
sampling_point_density = 1.0
ele_sampling_point_density = default

//--- Strategy used to sample point clouds on the mesh ---//
// 0: sampled only; 1: face center only; 2: face center plus face vertices; 3: sampled and face center; 4: sampled and face center plus face vertices.
// 0: for relative elevation (automatic); 1,2: for mesh feature computation; 3,4: for merged mesh feature computation; 3: for deep learning sampled point clouds.
sampling_strategy_training = 3
sampling_strategy_testing = 3
sampling_strategy_predicting = 2
sampling_strategy_validation = 3

//** Train_and_Test parameters **//
//--- Random forest parameters ---//
rf_tree_numbers = 200
rf_tree_depth = 50

//--- SMOTE data augmentation; it calls a Python script, so make sure you have installed 'imbalanced-learn' and replaced 'filter.py' in 'Lib\site-packages\imblearn\over_sampling_smote' with ours ---//
augmented_data_exist = true
enable_augment = false
used_k_neighbors = 15

//--- Save intermediate data when running 'Test' ---//
save_feature_importance = true
save_error_map = true

//** Class_statistics **//
//--- 0: mesh, 1: sampled point cloud ---//
input_type_for_statistics = 1

//** SOTA: Moha_Verdi_SOTA_pipeline **//
//--- Mesh neighborhood radius, for initial planarity to generate region growing seeds ---//
short_range_radius_default = default

//--- Mesh neighborhood radius, for face color ---//
mr_facet_neg_sphericial_radius = default

//--- Mesh region growing parameters ---//
mr_limit_distance = default
mr_angle_thres = default
mr_L1_color_dis = default
mr_max_sp_area = default

//--- MRF formulation; the regularization parameter is (mrf_lambda_mh) ---//
mrf_lambda_mh = 0.5
mrf_energy_amplify = default

//** SOTA: all deep learning methods **//
//--- Add point color for the sampled point cloud; tune 'sampling_point_density' for sampling density ---//
add_point_color_for_dp_input = true

//--- Path for saving the data ---//
//--- "spg/" | "kpconv/" | "randlanet/" | "pointnet2/" | "pointnet/" ---//
//--- "_pcl_gcn_pred_up" | "_pcl_dense" | "_pcl_dense_pred" | "_pcl_dense" | "_pcl_gcn" ---//
sota_folder_path = randlanet/

//--- Label prefix of the output *.ply from the competing deep learning method ---//
//--- SPG: "pred", KPConv: "preds", RandLanet: "label", PointNet, PointNet2: "pred" ---//
label_string = label

//--- 0: no minus, if labels start from 0; 1: minus 1, if labels start from 1 ---//
//--- SPG: 1; KPConv: 0; RandLanet: 1; PointNet2: 0; PointNet: 0 ---//
label_minus = 1

//--- Equal to the original sampled point cloud or not ---//
//--- others: true, Randlanet: false ---//
equal_cloud = false

//** Batch processing parameters **//
//--- If 'batchnames**.txt' exists, set to true ---//
use_existing_splited_batch = false

//--- Merge small tiles into one mesh (batch_size = row, sub_batch_size = column) ---//
batch_size = 1
sub_batch_size = 1

//--- Check which data use batch processing and point cloud sampling ---//
use_batch_processing_on_training = false
use_batch_processing_on_testing = false
use_batch_processing_on_predicting = false
use_batch_processing_on_validation = false

//--- Use point cloud region growing or mesh region growing ---//
use_pointcloud_region_growing_on_training = false
use_pointcloud_region_growing_on_testing = false
use_pointcloud_region_growing_on_predicting = false
use_pointcloud_region_growing_on_validation = false

//--- Merge segments, usually for merged mesh region growing ---//
use_merged_segments_on_training = false
use_merged_segments_on_testing = false
use_merged_segments_on_predicting = false
use_merged_segments_on_validation = false
adjacent_radius = 0.5
adjacent_pt2plane_distance = 0.25
adjacent_seg_angle = 90

//--- Point cloud region growing parameters ---//
// Adaptive mesh: 0.5, 90; dense mesh: 0.3, 45
pcl_distance_to_plane = 0.5
pcl_accepted_angle = 45
pcl_minimum_region_size = 0
pcl_k_nn = 12

//--- Save textures in the output folder of each batch from the original input ---//
save_textures_in_predict = false

//** Feature dictionary specification (not parameters) **//
use_feas { {segment_basic_features, 0}, {segment_eigen_features, 1}, {segment_color_features, 2}, {elevation_features, 3} };

use_mulsc_eles { {"10.0", 0}, {"20.0", 1}, {"40.0", 2} };

basic_feature_base_names { {"avg_center_z", 0}, {"interior_mat_radius", 1}, {"sum_area", 2}, {"relative_elevation", 3}, {"triangle_density", 4} };

eigen_feature_base_names {
  {"eigen_1", 0}, {"eigen_2", 1}, {"eigen_3", 2}, {"verticality", 3},
  {"linearity", 4}, {"planarity", 5}, {"sphericity", 6}, {"anisotropy", 7},
  {"eigenentropy", 8}, {"omnivariance", 9}, {"sumeigenvals", 10}, {"curvature", 11},
  {"verticality_eig1", 12}, {"verticality_eig3", 13}, {"surface", 14}, {"volume", 15},
  {"absolute_eigvec_1_moment_1st", 16}, {"absolute_eigvec_2_moment_1st", 17}, {"absolute_eigvec_3_moment_1st", 18},
  {"absolute_eigvec_1_moment_2nd", 19}, {"absolute_eigvec_2_moment_2nd", 20}, {"absolute_eigvec_3_moment_2nd", 21},
  {"vertical_moment_1st", 22}, {"vertical_moment_2nd", 23}, {"uniformity", 24}
};

color_feature_base_names {
  {"red", 0}, {"green", 1}, {"blue", 2}, {"hue", 3}, {"sat", 4}, {"val", 5}, {"greenness", 6},
  {"red_var", 7}, {"green_var", 8}, {"blue_var", 9}, {"hue_var", 10}, {"sat_var", 11}, {"val_var", 12}, {"greenness_var", 13},
  {"hue_bin_0", 14}, {"hue_bin_1", 15}, {"hue_bin_2", 16}, {"hue_bin_3", 17}, {"hue_bin_4", 18},
  {"hue_bin_5", 19}, {"hue_bin_6", 20}, {"hue_bin_7", 21}, {"hue_bin_8", 22}, {"hue_bin_9", 23},
  {"hue_bin_10", 24}, {"hue_bin_11", 25}, {"hue_bin_12", 26}, {"hue_bin_13", 27}, {"hue_bin_14", 28},
  {"sat_bin_0", 29}, {"sat_bin_1", 30}, {"sat_bin_2", 31}, {"sat_bin_3", 32}, {"sat_bin_4", 33},
  {"val_bin_0", 34}, {"val_bin_1", 35}, {"val_bin_2", 36}, {"val_bin_3", 37}, {"val_bin_4", 38}
};
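As a side note on the label_string / label_minus pair in the config: they control how predictions from the external networks are read back. A minimal Python sketch of what the offset implies (a hypothetical illustration, not code from the SUMS repository):

```python
# label_minus = 1 means the network's class ids start at 1 (e.g. RandLanet
# per the comment in the config), so they are shifted down by one before
# comparison; label_minus = 0 (e.g. KPConv, PointNet) leaves them unchanged.
def remap_predictions(pred_ids, label_minus):
    """Shift predicted class ids down by label_minus (0 or 1)."""
    return [p - label_minus for p in pred_ids]
```

For example, `remap_predictions([1, 2, 6], 1)` yields `[0, 1, 5]`, while `label_minus = 0` returns the ids untouched.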

How can I fix it?

WeixiaoGao commented 2 months ago

Hi, thank you for your interest. The competition methods themselves are not embedded in the program. The program is used only to generate the input for the competition methods and to evaluate the output from them. Therefore, you need to run each competition method separately, as mentioned here. You can use the modified framework we provide in the repository here.
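The two SUMS-side passes described in this reply could be sketched as config fragments (the mode and key names are taken from the config posted above; the exact two-pass sequence is one reading of the reply, not an official recipe):

```
//--- Pass 1: generate the sampled point clouds used as input by the external method ---//
operating_mode = Generate_semantic_sampled_points
processing_mode = 1
sota_folder_path = randlanet/

//--- Pass 2: after running RandLA-Net separately, evaluate its *.ply predictions ---//
operating_mode = Evaluation_SOTA
label_string = label
label_minus = 1
equal_cloud = false
```

The key point is that Train_and_Test_config with processing_mode = 1 does not itself run the competing network; the network must be trained and run in its own framework in between the two passes.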

HanyuanYU commented 2 months ago

Thank you for your reply and Happy Mid-Autumn Festival!!