tudelft3d / SUMS-Semantic-Urban-Mesh-Segmentation-public

SUMS: Semantic Urban Mesh Segmentation.
GNU General Public License v3.0

How to evaluate SOTA correctly #15

Closed: titlezi closed this issue 1 year ago

titlezi commented 1 year ago

Hi Dr. Gao, I used the KPConv implementation in your code to generate the semantic point cloud. Then I moved the KPConv results into the ./sota/kpconv/semantic_pointcloud/test/ (or ./sota/kpconv/semantic_pointcloud/) folder and revised config.txt accordingly. However, I always get the following error. The code seems to start reading the first sampled point cloud, Tile_+1984+2688, and then crashes.

[screenshot of the error message attached: 微信图片_20230517160602]

WeixiaoGao commented 1 year ago

Hi, ./sota/kpconv/semantic_pointcloud/test/ is the correct path for the KPConv results. From the information you provided, it seems that you do not have the original sampled point cloud (sampled from the mesh) for KPConv in ./sota/kpconv/sampled_pointcloud/test/. If you do have it, can you share your config file?
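For reference, a minimal sketch of the folder layout these paths imply under the root_path from the config below; the tile name is only illustrative (taken from the error report), and the exact file suffixes depend on the method:

D:/Dataset/data_demo/
    sota/
        kpconv/
            sampled_pointcloud/
                test/              <- original point clouds sampled from the test meshes, e.g. the Tile_+1984+2688 tile
            semantic_pointcloud/
                test/              <- KPConv prediction results (*.ply carrying the "preds" label field)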

titlezi commented 1 year ago

Hi Dr. Gao, thanks for your reply. I have put the original sampled point cloud in ./sota/kpconv/sampled_pointcloud/test/, and my config file is below.


//****************** Operation mode ******************//
//--- all modes: Pipeline, Mesh_feature_extraction, Train_config, Test_config, Train_and_Test_config, Data_evaluation_for_all_tiles_config, Save_mesh_features_for_visualization, Class_statistics, Generate_semantic_sampled_points, Moha_Verdi_SOTA_pipeline, Evaluation_SOTA ---//

operating_mode = Evaluation_SOTA

//** Common parameters **//
//--- path of data ---//
root_path = D:/Dataset/data_demo/

//--- path of 'seg_aug.py' ---//
seg_aug_py_path = D:/Dataset/SUMS_Win10_1.1.2/

//--- 0: RF(SUM Paper), 1: SOTA (Competition methods) ---//
processing_mode = 1

//--- Select data to process, make sure your data is in the corresponding folder ---//
process_train = true
process_test = true
process_predict = true
process_validate = true

//--- Label definition name ---//
label_definition = label

//--- Class name, in *.ply file, make sure that unlabelled is (-1), unclassified is (0), other classes start from (1) ---//
labels_name = terrain,high_vegetation,building,water,car,boat

//--- Class color is normalized within [0, 1] ---//
labels_color = 170,85,0 ; 0,255,0 ; 255,255,0 ; 0,255,255 ; 255,0,255 ; 0,0,153

//--- labels are ignored in classification, default has no ignored labels ---//
ignored_labels_name = default

//** Mesh_feature_extraction **//
//--- The mesh should have texture, otherwise the classification will not perform very well ---//
with_texture = true

//--- Select intermediate data to save ---//
save_sampled_pointclouds = false
save_oversegmentation_mesh = true
save_tex_cloud = false
save_textures_in_predict = false

//--- For compute relative elevation (meters) ---//
multi_scale_ele_radius = 10.0,20.0,40.0
long_range_radius_default = default
local_ground_segs = default

//--- For generate HSV histogram ---//
hsv_bins = 15,5,5

//--- Interior medial axis transform parameters ---//
mat_delta_convergance = default
mat_initialized_radius = default
mat_denoising_seperation_angle = default
mat_iteration_limit_number = default

//--- For custom or existing segments, make sure you have the field face_segment_id ---//
use_existing_mesh_segments_on_training = false
use_existing_mesh_segments_on_testing = false
use_existing_mesh_segments_on_predicting = false
use_existing_mesh_segments_on_validation = false

//--- settings for texture color processing ---//
//If it is set to true then it consumes a lot of memory, if not then we use face average color which save memories but it has less accurate color feature
//recommendation: adaptive triangles: true, dense triangles: false;
use_face_pixels_color_aggregation_on_training = true
use_face_pixels_color_aggregation_on_testing = true
use_face_pixels_color_aggregation_on_predicting = true
use_face_pixels_color_aggregation_on_validation = true

//--- Region growing for mesh over-segmentation ---//
mesh_distance_to_plane = 0.5
mesh_accepted_angle = 90.0
mesh_minimum_region_size = 0.0
partition_folder_path = segments/
partition_prefixs = _mesh_seg

//--- Feature selection, check the end of this file for feature dictionary; the multiscale elevation (use_mulsc_eles) corresponding to (multi_scale_ele_radius) ---//
use_feas = 0,1,2,3
use_mulsc_eles = 0,1,2
use_basic_features = 0,1,2,3,4
use_eigen_features = 3,4,6,11
use_color_features = 3,4,5,6,10,11,12,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38

//--- Used sampled point clouds instead of triangle faces (only for merged mesh that has topological interstice); In order to compute relative elevation faster, a sparse point cloud will be sampled by default, these settings are for all datasets (train, test, predict, validate) ---//
is_pointclouds_exist = false
sampling_point_density = 1.0
ele_sampling_point_density = default

//---- Use which strategy to sample point clouds on mesh ---//
//0: sampled only; 1: face center only; 2: face center plus face vertices; 3: sampled and face center; 4: sampled and face center plus face vertices.
//0: for relative elevation (automatic); 1,2: for mesh feature computation; 3,4: for merged mesh feature computation; 3: for deep learning sampled point clouds.
sampling_strategy_training = 2
sampling_strategy_testing = 2
sampling_strategy_predicting = 2
sampling_strategy_validation = 2

//** Train_and_Test parameters **//
//--- Random forest parameters ---//
rf_tree_numbers = 200
rf_tree_depth = 50

//--- smote data augmentation, it will call python file, make sure you have installed 'imbalanced-learn' and replaced 'filter.py' in 'Lib\site-packages\imblearn\over_sampling_smote' with ours ---//
augmented_data_exist = false
enable_augment = true
used_k_neighbors = 15

//--- save intermediate data when run the 'Test' ---//
save_feature_importance = true
save_error_map = true

//** Class_statistics **//
//--- 0: mesh, 1: sampled point cloud ---//
input_type_for_statistics = 0

//** SOTA: Moha_Verdi_SOTA_pipeline **//
//--- Mesh neighborhood radius, for initial planarity to generate region growing seeds ---//
short_range_radius_default = default

//--- Mesh neighborhood radius, for face color ---//
mr_facet_neg_sphericial_radius = default

//--- Mesh region growing parameters ---//
mr_limit_distance = default
mr_angle_thres = default
mr_L1_color_dis = default
mr_max_sp_area = default

//--- MRF formulation, regularization parameter is (mrf_lambda_mh) ---//
mrf_lambda_mh = 0.5
mrf_energy_amplify = default

//** SOTA : all deep learning methods **//
//--- add point color for sampled point cloud, tune 'sampling_point_density' for sampling density ---//
add_point_color_for_dp_input = true

//--- path for save the data ---//
//--- "spg/" | "kpconv/" | "randlanet/" | "pointnet2/" | "pointnet/" ---//
//--- "_pcl_gcn_pred_up" | "_pcl_dense" | "_pcl_dense_pred" | "_pcl_dense" | "_pcl_gcn" ---//
sota_folder_path = kpconv/

//--- label prefix of output *.ply from competitive deep learning method ---//
//--- SPG:"pred", KPConv: "preds", RandLanet: "label", PointNet, PointNet2: "pred" ---//
label_string = preds

//--- 0: no minus, if label start from 0; 1: minus 1, if label start from 1 ---//
//---SPG: 1; KPConv: 0; RandLanet: 1; PointNet2: 0 ; PointNet: 0 ---//
label_minus = 0

//--- Equal to the original sampled point cloud or not ---//
//--- others: true, Randlanet: false ---//
equal_cloud = true

//** Batch processing parameters **//
//--- If the 'batchnames**.txt' exists then true ---//
use_existing_splited_batch = false

//--- Merge small tiles into one mesh (batch_size=row, sub_batch_size=column)---//
batch_size = 1
sub_batch_size = 1

//--- Check which data use batch processing and point cloud sampling ---//
use_batch_processing_on_training = false
use_batch_processing_on_testing = false
use_batch_processing_on_predicting = false
use_batch_processing_on_validation = false

//--- Use point cloud region growing or use mesh region growing ---//
use_pointcloud_region_growing_on_training = false
use_pointcloud_region_growing_on_testing = false
use_pointcloud_region_growing_on_predicting = false
use_pointcloud_region_growing_on_validation = false

//--- Merge segments, usually for merged mesh region growing ---//
use_merged_segments_on_training = false
use_merged_segments_on_testing = false
use_merged_segments_on_predicting = false
use_merged_segments_on_validation = false
adjacent_radius = 0.5
adjacent_pt2plane_distance = 0.25
adjacent_seg_angle = 90

//--- Point cloud region growing parameters ---//
//adpative mesh: 0.5, 90; dense mesh: 0.3, 45
pcl_distance_to_plane = 0.5
pcl_accepted_angle = 45
pcl_minimum_region_size = 0
pcl_k_nn = 12

Yangliangzhe commented 1 year ago

@WeixiaoGao @titlezi Hi Dr. Gao and titlezi, I have the same issue. The original sampled point cloud (from the mesh) was also produced by your release SUMS_Win10_binary_v1.1, and my config.txt is the same as titlezi's.

WeixiaoGao commented 1 year ago

Thank you for raising this issue. Indeed, there was a small bug in the code related to the sampling strategy setting, which I have fixed and uploaded in a new version. Also, in your config file, you need to set sampling_strategy_testing = 3 and enable only process_test = true for the SOTA evaluation.
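A minimal sketch of how the relevant lines of the config posted above would change, assuming "only process_test = true" means switching the other process_* flags off; all other keys stay as posted:

//--- Select data to process, make sure your data is in the corresponding folder ---//
process_train = false
process_test = true
process_predict = false
process_validate = false

//---- Use which strategy to sample point clouds on mesh ---//
sampling_strategy_testing = 3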