tudelft3d / SUMS-Semantic-Urban-Mesh-Segmentation-public

SUMS: Semantic Urban Mesh Segmentation.
GNU General Public License v3.0

About "Generate_semantic_sampled_points" #23

Closed: Xtian-hub closed this issue 5 days ago

Xtian-hub commented 1 month ago

I set operating_mode = Generate_semantic_sampled_points, but the point cloud that was supposed to be segmented and labeled was not generated. Could you please let me know if I made a mistake in the settings, or where I should look to find the generated point cloud? My configuration file is as follows:

//** Operation mode **//
//--- all modes: Pipeline, Mesh_feature_extraction, Train_config, Test_config, Train_and_Test_config, Data_evaluation_for_all_tiles_config, Save_mesh_features_for_visualization, Class_statistics, Generate_semantic_sampled_points, Moha_Verdi_SOTA_pipeline, Evaluation_SOTA ---//

operating_mode = Generate_semantic_sampled_points

//** Common parameters **//
//--- path of data ---//
root_path = H:\tree_pack\data_demo\

//--- path of 'seg_aug.py' ---// seg_aug_py_path = H:\tree_pack\SUMS\

//--- 0: RF(SUM Paper), 1: SOTA (Competition methods) ---// processing_mode = 0

//--- Select data to process, make sure your data is in the corresponding folder ---//
process_train = true
process_test = true
process_predict = true
process_validate = true

//--- Use binary or ascii output ---// use_binary = false

//--- Label definition name ---// label_definition = test_name_label

//--- Class name, in *.ply file, make sure that unlabelled is (-1), unclassified is (0), other classes start from (1) ---// labels_name = terrain,high_vegetation,building,water,car,boat

//--- Class color is normalized within [0, 1] ---// labels_color = 170,85,0 ; 0,255,0 ; 255,255,0 ; 0,255,255 ; 255,0,255 ; 0,0,153

//--- labels are ignored in classification, default has no ignored labels --- ignored_labels_name = default

//** Mesh_feature_extraction **//
//--- The mesh should have texture, otherwise the classification will not perform very well ---//
with_texture = true

//--- Select intermediate data to save ---//
save_sampled_pointclouds = false
save_oversegmentation_mesh = false
save_tex_cloud = false

//--- For compute relative elevation (meters) ---//
multi_scale_ele_radius = 10.0,20.0,40.0
long_range_radius_default = default
local_ground_segs = default

//--- For generate HSV histogram ---// hsv_bins = 15,5,5

//--- Interior medial axis transform parameters ---//
mat_delta_convergance = default
mat_initialized_radius = default
mat_denoising_seperation_angle = default
mat_iteration_limit_number = default

//--- For custom or existing segments, make sure you have the field face_segment_id ---//
use_existing_mesh_segments_on_training = false
use_existing_mesh_segments_on_testing = false
use_existing_mesh_segments_on_predicting = false
use_existing_mesh_segments_on_validation = false

//--- settings for texture color processing ---//
//If it is set to true then it consumes a lot of memory, if not then we use face average color which save memories but it has less accurate color feature
//recommendation: adaptive triangles: true, dense triangles: false;
use_face_pixels_color_aggregation_on_training = true
use_face_pixels_color_aggregation_on_testing = true
use_face_pixels_color_aggregation_on_predicting = true
use_face_pixels_color_aggregation_on_validation = true

//--- Region growing for mesh over-segmentation ---//
mesh_distance_to_plane = 0.5
mesh_accepted_angle = 90.0
mesh_minimum_region_size = 0.0
partition_folder_path = segments/
partition_prefixs = _mesh_seg

//--- Feature selection, check the end of this file for feature dictionary; the multiscale elevation (use_mulsc_eles) corresponding to (multi_scale_ele_radius) ---//
use_feas = 0,1,2,3
use_mulsc_eles = 0,1,2
use_basic_features = 0,1,2,3,4
use_eigen_features = 3,4,6,11
use_color_features = 3,4,5,6,10,11,12,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38

//--- Used sampled point clouds instead of triangle faces (only for merged mesh that has topological interstice); In order to compute relative elevation faster, a sparse point cloud will be sampled by default, these settings are for all datasets (train, test, predict, validate) ---//
is_pointclouds_exist = false
sampling_point_density = 1.0
ele_sampling_point_density = default

//---- Use which strategy to sample point clouds on mesh ---//
//0: sampled only; 1: face center only; 2: face center plus face vertices; 3: sampled and face center; 4: sampled and face center plus face vertices.
//0: for relative elevation (automatic); 1,2: for mesh feature computation; 3,4: for merged mesh feature computation; 3: for deep learning sampled point clouds.
sampling_strategy_training = 3
sampling_strategy_testing = 2
sampling_strategy_predicting = 2
sampling_strategy_validation = 2

//** Train_and_Test parameters **//
//--- Random forest parameters ---//
rf_tree_numbers = 200
rf_tree_depth = 50

//--- smote data augmentation, it will call python file, make sure you have installed 'imbalanced-learn' and replaced 'filter.py' in 'Lib\site-packages\imblearn\over_sampling_smote' with ours ---//
augmented_data_exist = false
enable_augment = true
used_k_neighbors = 15

//--- save intermediate data when run the 'Test' ---//
save_feature_importance = true
save_error_map = true

//** Class_statistics **// //--- 0: mesh, 1: sampled point cloud ---// input_type_for_statistics = 0

//** SOTA: Moha_Verdi_SOTA_pipeline **// //--- Mesh neighborhood radius, for initial planarity to generate region growing seeds ---// short_range_radius_default = default

//--- Mesh neighborhood radius, for face color ---// mr_facet_neg_sphericial_radius = default

//--- Mesh region growing parameters ---//
mr_limit_distance = default
mr_angle_thres = default
mr_L1_color_dis = default
mr_max_sp_area = default

//--- MRF formulation, regularization parameter is (mrf_lambda_mh) ---// mrf_lambda_mh = 0.5 mrf_energy_amplify = default

//** SOTA : all deep learning methods **// //--- add point color for sampled point cloud, tune 'sampling_point_density' for sampling density ---// add_point_color_for_dp_input = true

//--- path for save the data ---// //--- "spg/" | "kpconv/" | "randlanet/" | "pointnet2/" | "pointnet/" ---// //--- "_pcl_gcn_pred_up" | "_pcl_dense" | "_pcl_dense_pred" | "_pcl_dense" | "_pcl_gcn" ---// sota_folder_path = kpconv/

//--- label prefix of output *.ply from competitive deep learning method ---// //--- SPG:"pred", KPConv: "preds", RandLanet: "label", PointNet, PointNet2: "pred" ---// label_string = preds

//--- 0: no minus, if label start from 0; 1: minus 1, if label start from 1 ---// //---SPG: 1; KPConv: 0; RandLanet: 1; PointNet2: 0 ; PointNet: 0 ---// label_minus = 0

//--- Equal to the original sampled point cloud or not ---// //--- others: true, Randlanet: false ---// equal_cloud = true

//** Batch processing parameters **// //--- If the 'batchnames**.txt' exists then true ---// use_existing_splited_batch = false

//--- Merge small tiles into one mesh (batch_size=row, sub_batch_size=column)---// batch_size = 1 sub_batch_size = 1

//--- Check which data use batch processing and point cloud sampling ---//
use_batch_processing_on_training = false
use_batch_processing_on_testing = false
use_batch_processing_on_predicting = false
use_batch_processing_on_validation = false

//--- Use point cloud region growing or use mesh region growing ---//
use_pointcloud_region_growing_on_training = false
use_pointcloud_region_growing_on_testing = false
use_pointcloud_region_growing_on_predicting = false
use_pointcloud_region_growing_on_validation = false

//--- Merge segments, usually for merged mesh region growing ---//
use_merged_segments_on_training = false
use_merged_segments_on_testing = false
use_merged_segments_on_predicting = false
use_merged_segments_on_validation = false
adjacent_radius = 0.5
adjacent_pt2plane_distance = 0.25
adjacent_seg_angle = 90

//--- Point cloud region growing parameters ---//
//adpative mesh: 0.5, 90; dense mesh: 0.3, 45
pcl_distance_to_plane = 0.5
pcl_accepted_angle = 45
pcl_minimum_region_size = 0
pcl_k_nn = 12

//---save texture in the output folder of each batch from original input ---// save_textures_in_predict = true

//** Feature dictionary specification (not parameters) **// use_feas { {segment_basic_features, 0}, {segment_eigen_features, 1}, {segment_color_features, 2}, {elevation_features, 3} };

use_mulsc_eles { {"10.0", 0}, {"20.0", 1}, {"40.0", 2} };

basic_feature_base_names { {"avg_center_z", 0}, {"interior_mat_radius", 1}, {"sum_area", 2}, {"relative_elevation", 3}, {"triangle_density", 4} };

eigen_feature_base_names { {"eigen_1", 0}, {"eigen_2", 1}, {"eigen_3", 2}, {"verticality", 3}, {"linearity", 4}, {"planarity", 5}, {"sphericity", 6}, {"anisotropy", 7}, {"eigenentropy", 8}, {"omnivariance", 9}, {"sumeigenvals", 10}, {"curvature", 11}, {"verticality_eig1", 12}, {"verticality_eig3", 13}, {"surface", 14}, {"volume", 15}, {"absolute_eigvec_1_moment_1st", 16}, {"absolute_eigvec_2_moment_1st", 17}, {"absolute_eigvec_3_moment_1st", 18}, {"absolute_eigvec_1_moment_2nd", 19}, {"absolute_eigvec_2_moment_2nd", 20}, {"absolute_eigvec_3_moment_2nd", 21}, {"vertical_moment_1st", 22}, {"vertical_moment_2nd", 23}, {"uniformity", 24} };

color_feature_base_names { {"red", 0}, {"green", 1}, {"blue", 2}, {"hue", 3}, {"sat", 4}, {"val", 5}, {"greenness", 6}, {"red_var", 7}, {"green_var", 8}, {"blue_var", 9}, {"hue_var", 10}, {"sat_var", 11}, {"val_var", 12}, {"greenness_var", 13}, {"hue_bin_0", 14}, {"hue_bin_1", 15}, {"hue_bin_2", 16}, {"hue_bin_3", 17}, {"hue_bin_4", 18}, {"hue_bin_5", 19}, {"hue_bin_6", 20}, {"hue_bin_7", 21}, {"hue_bin_8", 22}, {"hue_bin_9", 23}, {"hue_bin_10", 24}, {"hue_bin_11", 25}, {"hue_bin_12", 26}, {"hue_bin_13", 27}, {"hue_bin_14", 28}, {"sat_bin_0", 29}, {"sat_bin_1", 30}, {"sat_bin_2", 31}, {"sat_bin_3", 32}, {"sat_bin_4", 33}, {"val_bin_0", 34}, {"val_bin_1", 35}, {"val_bin_2", 36}, {"val_bin_3", 37}, {"val_bin_4", 38} };

(sum) PS H:\tree_pack\SUMS> ./semantic_urban_mesh_segmentation.exe H:\tree_pack\data_demo\

Reading configuration file: Done in (s): 0.0002223

--------------------- on training data ---------------------
Get all training data. Done in (s): 0.0001449

Generate sampled semantic point cloud.
loading existing mesh segments: Tile_+003_+005_L20
mesh read successful, loading time: 0.652855s
loading mesh: Tile_+003_+005_L20
mesh read successful, loading time: 0.432532s
The total number of input triangle facets: 220539
The total number of input edges: 331899
The total number of input vertices: 111138
The total number of input texture image: 1
    - Pre-processing of input mesh: Sample the point clouds from meshes + Using face centers as sampled point clouds, done in (s): 41.3793
    Start to writing sampled point cloud ...
TODO: use binary format
pointcloud saved Done in (s): 2.83253

--------------------- on test data ---------------------
Get all test data. Done in (s): 0.0003805

Generate sampled semantic point cloud.
loading existing mesh segments: Tile_+003_+005_L20
mesh read successful, loading time: 0.657692s
loading mesh: Tile_+003_+005_L20
mesh read successful, loading time: 0.451476s
The total number of input triangle facets: 220539
The total number of input edges: 331899
The total number of input vertices: 111138
The total number of input texture image: 1
    - Pre-processing of input mesh: Using face centers and vertices as sampled point clouds, done in (s): 31.348
    Start to writing sampled point cloud ...
TODO: use binary format
pointcloud saved Done in (s): 1.53692

--------------------- on predict data ---------------------
Get all predict data. Done in (s): 0.0004135

Generate sampled semantic point cloud.
loading existing mesh segments: Tile_+003_+005_L20
mesh read successful, loading time: 0.680514s
loading mesh: Tile+003_+005

All the files I have generated are shown in the attached screenshots, including "Tile_+003_+005_L20_pcl_sampled.ply".

Xtian-hub commented 1 month ago

@WeixiaoGao

WeixiaoGao commented 4 weeks ago

Hi, could you please check the properties of the output file using an editor like Notepad++? Does it include only xyz and rgb values?
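For reference, a minimal sketch in Python that lists the elements and properties of a PLY file with the plyfile package; the path below is only an assumption about where your output tile sits, so point it at your own file:

from plyfile import PlyData

# Print every element (vertex, face, ...) with its count and property names for a quick header check.
ply = PlyData.read(r"H:\tree_pack\data_demo\output\train\Tile_+003_+005_L20_pcl_sampled.ply")
for element in ply.elements:
    print(element.name, element.count)
    for prop in element.properties:
        print("   ", prop.name)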

Xtian-hub commented 3 weeks ago

Dear Dr. Gao,

Thank you for your response. Do you mean "data_demo\output\validate\batch_0\mesh_classification.ply"? Its content is as follows:

ply
format ascii 1.0
comment TextureFile Tile_+003_+005_L20_0.jpg
comment label 0 unclassified
comment label 1 terrain
comment label 2 high_vegetation
comment label 3 building
comment label 4 water
comment label 5 car
comment label 6 boat
element vertex 111138
property float x
property float y
property float z
element face 220539
property list uint32 int vertex_indices
property list uint32 float texcoord
property float r
property float g
property float b
property float nx
property float ny
property float nz
property float label_probabilities
property int face_segment_id
property int face_tile_index
property int label
property int face_predict
property int texnumber
end_header
3 5096 2076 1852 6 0.328977 0.593512 0.329252 0.59229 0.33008 0.592906 0.666667 0.333333 0 0.217659 0.234001 0.947559 0.29125 656 0 -1 1 0

In addition to this, I have some further questions. I would like to adjust the algorithm so that the model can better segment trees in urban environments. Specifically, I want to modify the segmentation algorithm and add new features. I reviewed the pcl_feas file under the \feature directory, and it outputs the following features. Could you provide annotations or explanations of these feature attributes? I would like to know which parameters correspond to the features mentioned in the paper. Based on this, I plan to add or modify features for my experiments.

Here is the feature output, pcl_feas.ply:

ply
format ascii 1.0
comment saved by gaoweixiaocuhk@gmail.com
element vertex 25441
property list uint32 int mesh_faces_id
property list uint32 float segment_plane_params
property list uint32 float segment_basic_features
property list uint32 float segment_eigen_features
property list uint32 float segment_color_features
property list uint32 float multiscale_elevation_features
property float x
property float y
property float z
property float segment_relative_elevation
property int label
property int points_tile_index
property int segment_id
end_header

In addition to the above questions, I would also like to experiment with segmentation algorithms other than random forests. I noticed in a previous closed issue that you mentioned this file was generated using the CGAL package, which can be found at https://doc.cgal.org/4.14.3/Classification/index.html. From my understanding, the interface inputs of the models in this package correspond directly to the "segment***_features" properties, is that correct? Could you provide more information on which parameters are input into the classification models?

Xtian-hub commented 3 weeks ago

Hi Dr. Gao,

I loaded pcl_feas.ply in Open3D, using x, y, z as the coordinates, and got the output shown in the attached screenshots. I expected it to still resemble an urban point cloud, but the result was different. Similar to my previous question, could you provide explanations or annotations for the properties in the pcl_feas.ply file?

WeixiaoGao commented 3 weeks ago

In your first question, you mentioned setting the 'operating_mode' to 'Generate_semantic_sampled_points', but the point cloud that was supposed to be segmented and labeled was not generated. After reviewing the output files in Notepad++, it appears that the 'label' and 'face_segment_id' are indeed written in the file. Does this resolve your confusion?

WeixiaoGao commented 3 weeks ago

For your second question, you can check the Feature Dictionary Specification in the demo file, which you also mentioned in your first question.
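For example, a small Python sketch (using the dictionaries copied from the demo config above) that resolves the indices listed in use_eigen_features to feature names; the same pattern applies to use_basic_features and use_color_features:

# Maps the indices in 'use_eigen_features = 3,4,6,11' to names, using the first
# entries of 'eigen_feature_base_names' from the feature dictionary in the config.
eigen_feature_base_names = {
    0: "eigen_1", 1: "eigen_2", 2: "eigen_3", 3: "verticality",
    4: "linearity", 5: "planarity", 6: "sphericity", 7: "anisotropy",
    8: "eigenentropy", 9: "omnivariance", 10: "sumeigenvals", 11: "curvature",
}
use_eigen_features = [3, 4, 6, 11]  # value taken from the config above
print([eigen_feature_base_names[i] for i in use_eigen_features])
# -> ['verticality', 'linearity', 'sphericity', 'curvature']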

WeixiaoGao commented 3 weeks ago

For your third question, the output 'xxx_pcl_feas' is in feature space, not in Euclidean space.
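As an illustration, a hedged sketch of how the per-segment feature vectors in a *_pcl_feas.ply file could be inspected with plyfile (the filename is an example; the x/y/z values live in feature space, so they will not look like the urban scene):

import numpy as np
from plyfile import PlyData

# Read one record of the feature point cloud and print its segment id, label,
# and the list-valued feature vectors described in the header shown above.
feas = PlyData.read("pcl_feas.ply")
seg = feas["vertex"][0]
print("segment_id:", seg["segment_id"], "label:", seg["label"])
print("basic features:", np.asarray(seg["segment_basic_features"]))
print("eigen features:", np.asarray(seg["segment_eigen_features"]))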

Xtian-hub commented 3 weeks ago

Thank you, Dr. Gao. Yes, I also noticed that both label and face_segment_id were written, and the output above is from a mesh file. However, in the pcl (point cloud) file, the label values are all showing as -1, as displayed below. During my investigation, I reinstalled the related release on different drives, and it is now running normally. However, after comparing, I still haven't found the reason for the issue.

ply
format ascii 1.0
comment saved by gaoweixiaocuhk@gmail.com
element vertex 331677
property list uint32 int points_face_belong_ids
property float x
property float y
property float z
property float r
property float g
property float b
property float nx
property float ny
property float nz
property int points_face_belong_id
property int label
property int points_tile_index
end_header
7 8987 53 0 8260 8201 1903 1902 -4485.18 -3411.15 8.9218 0.564706 0.588235 0.572549 -0.157294 0.210138 0.964936 1902 -1 -1
5 8947 3332 3294 0 53 -4484.91 -3412.17 9.13372 0.247059 0.25098 0.258824 -0.425761 0.0723455 0.901939 53 -1 -1
6 9254 3309 8260 0 3294 3307 -4484.35 -3409.83 9.99051 0.533333 0.54902 0.545098 -0.84975 0.118433 0.513711 8260 -1 -1
1 0 -4484.81 -3411.05 9.34868 0.568627 0.588235 0.603922 -0.747545 -0.0654974 0.660974 0 -1 -1
6 8674 1 2111 2104 8092 8255 -4497.95 -3373.73 8.02935 0.65098 0.745098 0.760784 -0.62235 0.220084 0.751162 8092 -1 -1
4 8674 5370 8673 1 -4499 -3373.33 7.88315 0.0392157 0.160784 0.231373 -0.0878932 0.130561 0.987537 8674 -1 -1
7 8717 1723 1671 2111 1 8673 8695 -4499.11 -3374.42 8.33891 0.65098 0.890196 1 -0.103148 -0.103553 0.989261 1671 -1 -1
1 1 -4498.69 -3373.83 8.0838 0.545098 0.647059 0.65098 0.0163167 0.38294 0.923629 1 -1 -1
8 9649 9514 9515 9466 529 2 9390 4736 -4497.24 -3472.79 10.1696 0.505882 0.517647 0.207843 0.897203 -0.441509 -0.0098194 9390 -1 -1
5 8309 2 529 3890 3888 -4497.23 -3474.01 11.775 0.631373 0.615686 0.329412 0.818876 -0.141203 -0.55633 3890 -1 -1
5 9769 8070 9390 2 8309 -4498.03 -3474.41 10.5739 0.607843 0.6 0.298039 0.662639 -0.668443 -0.337777 9769 -1 -1
1 2 -4497.5 -3473.74 10.8395 0.576471 0.564706 0.262745 0.793817 -0.481367 -0.371674 2 -1 -1
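For a quick check of this symptom, a minimal Python sketch (assuming the plyfile and numpy packages; the filename is an example) that counts the values of the 'label' property in a sampled point cloud:

import numpy as np
from plyfile import PlyData

# Histogram of the 'label' values; if every point really is unlabelled this prints {-1: N}.
cloud = PlyData.read("Tile_+003_+005_L20_pcl_sampled.ply")
labels = np.asarray(cloud["vertex"]["label"])
values, counts = np.unique(labels, return_counts=True)
print(dict(zip(values.tolist(), counts.tolist())))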

Xtian-hub commented 3 weeks ago

I have another question: does Generate_semantic_sampled_points sample the mesh models from the output/test files into point clouds? Additionally, if I want to reassign the point cloud colors back to the mesh, should I place the modified point cloud into train/test... and set "Train_config"... "Save_mesh_features_for_visualization"?

WeixiaoGao commented 3 weeks ago

I noticed you have changed the default parameter 'label_definition = test_name_label'. Is 'test_name_label' the name of your label in your annotated mesh? If not, please revert it to the default. This might solve your issue. 'Generate_semantic_sampled_points' primarily samples the input mesh into point clouds, not output. This can be applied to training, validation, and testing datasets. You can use 'Evaluation_SOTA' to restore the color. However, if you are using your own method, you may need to update the parameters in 'SOTA: all deep learning methods' to adapt to your approach.
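For reference, a hedged sketch of the configuration entries that the 'Evaluation_SOTA' route relies on, taken from the demo config above; the values shown (kpconv/, preds, and so on) are examples and need to be adapted to your own predicted point clouds:

operating_mode = Evaluation_SOTA
processing_mode = 1                 // 1: SOTA (competition methods), per the demo config
sota_folder_path = kpconv/          // folder that holds the predicted point clouds
label_string = preds                // name of the predicted-label property in the *.ply
label_minus = 0                     // 0 if labels start from 0, 1 if they start from 1
equal_cloud = true                  // true if the cloud matches the original sampled cloud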

Xtian-hub commented 3 weeks ago

Thank you very much, Dr. Gao.

The reason I made the changes was that I was searching for the attribute that allows the point cloud to replicate labels. I once managed to succeed accidentally, but I don't know how it happened. In the Input folder, it generated a file named Tile_+003_+005_L20_mesh_classification.ply, with the following attributes:

ply
format ascii 1.0
comment TextureFile Tile_+003_+005_L20_0.jpg
comment label 0 unclassified
comment label 1 terrain
comment label 2 high_vegetation
comment label 3 building
comment label 4 water
comment label 5 car
comment label 6 boat
element vertex 111138
property float x
property float y
property float z
element face 220539
property list uint32 int vertex_indices
property list uint32 float texcoord
property float r
property float g
property float b
property float nx
property float ny
property float nz
property float label_probabilities
property int face_segment_id
property int face_tile_index
property int label
property int texnumber
end_header
3 1280 8286 8285 6 0.868823 0.578676 0.868957 0.578042 0.869573 0.579027 0 1 0 0.0299309 0.509758 0.859797 0.52875 2299 0 2 0
3 189 1671 1813 6 0.459178 0.67424 0.459201 0.673406 0.45988 0.673524 0 1 0 0.744361 -0.534491 0.400308 0.617917 14092 0 2 0

I believe this is the correct classification file format, but I haven't been able to generate this file again after rerunning the process. All other files have the attribute format shown below, and I believe that face_predict should correspond to the original label attribute, though I'm not sure why it is named face_predict instead of label. Furthermore, the label_definition string I set appears as the property name right before it. This is an issue I noticed while troubleshooting, especially since you mentioned that ply files in the Input folder would be used for point cloud sampling.

ply
format ascii 1.0
comment TextureFile Tile_+003_+005_L20_0.jpg
comment label 0 unclassified
comment label 1 terrain
comment label 2 high_vegetation
comment label 3 building
comment label 4 water
comment label 5 car
comment label 6 boat
element vertex 111138
property float x
property float y
property float z
element face 220539
property list uint32 int vertex_indices
property list uint32 float texcoord
property float r
property float g
property float b
property float nx
property float ny
property float nz
property float label_probabilities
property int face_segment_id
property int face_tile_index
property int labeladwadsd
property int face_predict
property int texnumber
end_header

So, I set various label_definition values to check where the issue might be.

//--- Label definition name ---// label_definition = label

Is my judgment correct? Also, why does Test_config generate the face_predict property, while a PLY mesh file with a proper label attribute is not generated in the Input folder?

I ran the steps in the following order: Mesh_feature_extraction, Test_config, and Generate_semantic_sampled_points. The files generated in this order are as follows:


Is there anything incorrect in my settings? Below, I have reverted label_definition to its default configuration.

//** Operation mode **//
//--- all modes: Pipeline, Mesh_feature_extraction, Train_config, Test_config, Train_and_Test_config, Data_evaluation_for_all_tiles_config, Save_mesh_features_for_visualization, Class_statistics, Generate_semantic_sampled_points, Moha_Verdi_SOTA_pipeline, Evaluation_SOTA ---//

operating_mode = Generate_semantic_sampled_points

//** Common parameters **//
//--- path of data ---//
root_path = M:/umat/data_demo/

//--- path of 'seg_aug.py' ---// seg_aug_py_path = M:/umat/SUMS_Win10_1.3.1/

//--- 0: RF(SUM Paper), 1: SOTA (Competition methods) ---// processing_mode = 0

//--- Select data to process, make sure your data is in the corresponding folder ---//
process_train = false
process_test = false
process_predict = false
process_validate = true

//--- Use binary or ascii output ---// use_binary = false

//--- Label definition name ---// label_definition = label

//--- Class name, in *.ply file, make sure that unlabelled is (-1), unclassified is (0), other classes start from (1) ---// labels_name = terrain,high_vegetation,building,water,car,boat

//--- Class color is normalized within [0, 1] ---// labels_color = 170,85,0 ; 0,255,0 ; 255,255,0 ; 0,255,255 ; 255,0,255 ; 0,0,153

//--- labels are ignored in classification, default has no ignored labels --- ignored_labels_name = default

//** Mesh_feature_extraction **//
//--- The mesh should have texture, otherwise the classification will not perform very well ---//
with_texture = true

//--- Select intermediate data to save ---//
save_sampled_pointclouds = false
save_oversegmentation_mesh = true
save_tex_cloud = false

//--- For compute relative elevation (meters) ---//
multi_scale_ele_radius = 10.0,20.0,40.0
long_range_radius_default = default
local_ground_segs = default

//--- For generate HSV histogram ---// hsv_bins = 15,5,5

//--- Interior medial axis transform parameters ---//
mat_delta_convergance = default
mat_initialized_radius = default
mat_denoising_seperation_angle = default
mat_iteration_limit_number = default

//--- For custom or existing segments, make sure you have the field face_segment_id ---//
use_existing_mesh_segments_on_training = false
use_existing_mesh_segments_on_testing = false
use_existing_mesh_segments_on_predicting = false
use_existing_mesh_segments_on_validation = false

//--- settings for texture color processing ---//
//If it is set to true then it consumes a lot of memory, if not then we use face average color which save memories but it has less accurate color feature
//recommendation: adaptive triangles: true, dense triangles: false;
use_face_pixels_color_aggregation_on_training = true
use_face_pixels_color_aggregation_on_testing = true
use_face_pixels_color_aggregation_on_predicting = true
use_face_pixels_color_aggregation_on_validation = true

//--- Region growing for mesh over-segmentation ---//
mesh_distance_to_plane = 0.5
mesh_accepted_angle = 90.0
mesh_minimum_region_size = 0.0
partition_folder_path = segments/
partition_prefixs = _mesh_seg

//--- Feature selection, check the end of this file for feature dictionary; the multiscale elevation (use_mulsc_eles) corresponding to (multi_scale_ele_radius) ---//
use_feas = 0,1,2,3
use_mulsc_eles = 0,1,2
use_basic_features = 0,1,2,3,4
use_eigen_features = 3,4,6,11
use_color_features = 3,4,5,6,10,11,12,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38

//--- Used sampled point clouds instead of triangle faces (only for merged mesh that has topological interstice); In order to compute relative elevation faster, a sparse point cloud will be sampled by default, these settings are for all datasets (train, test, predict, validate) ---//
is_pointclouds_exist = false
sampling_point_density = 1.0
ele_sampling_point_density = default

//---- Use which strategy to sample point clouds on mesh ---//
//0: sampled only; 1: face center only; 2: face center plus face vertices; 3: sampled and face center; 4: sampled and face center plus face vertices.
//0: for relative elevation (automatic); 1,2: for mesh feature computation; 3,4: for merged mesh feature computation; 3: for deep learning sampled point clouds.
sampling_strategy_training = 3
sampling_strategy_testing = 2
sampling_strategy_predicting = 2
sampling_strategy_validation = 2

//** Train_and_Test parameters **//
//--- Random forest parameters ---//
rf_tree_numbers = 200
rf_tree_depth = 50

//--- smote data augmentation, it will call python file, make sure you have installed 'imbalanced-learn' and replaced 'filter.py' in 'Lib\site-packages\imblearn\over_sampling_smote' with ours ---//
augmented_data_exist = false
enable_augment = true
used_k_neighbors = 15

//--- save intermediate data when run the 'Test' ---//
save_feature_importance = true
save_error_map = true

//** Class_statistics **// //--- 0: mesh, 1: sampled point cloud ---// input_type_for_statistics = 0

//** SOTA: Moha_Verdi_SOTA_pipeline **// //--- Mesh neighborhood radius, for initial planarity to generate region growing seeds ---// short_range_radius_default = default

//--- Mesh neighborhood radius, for face color ---// mr_facet_neg_sphericial_radius = default

//--- Mesh region growing parameters ---//
mr_limit_distance = default
mr_angle_thres = default
mr_L1_color_dis = default
mr_max_sp_area = default

//--- MRF formulation, regularization parameter is (mrf_lambda_mh) ---// mrf_lambda_mh = 0.5 mrf_energy_amplify = default

//** SOTA : all deep learning methods **// //--- add point color for sampled point cloud, tune 'sampling_point_density' for sampling density ---// add_point_color_for_dp_input = true

//--- path for save the data ---// //--- "spg/" | "kpconv/" | "randlanet/" | "pointnet2/" | "pointnet/" ---// //--- "_pcl_gcn_pred_up" | "_pcl_dense" | "_pcl_dense_pred" | "_pcl_dense" | "_pcl_gcn" ---// sota_folder_path = kpconv/

//--- label prefix of output *.ply from competitive deep learning method ---// //--- SPG:"pred", KPConv: "preds", RandLanet: "label", PointNet, PointNet2: "pred" ---// label_string = preds

//--- 0: no minus, if label start from 0; 1: minus 1, if label start from 1 ---// //---SPG: 1; KPConv: 0; RandLanet: 1; PointNet2: 0 ; PointNet: 0 ---// label_minus = 0

//--- Equal to the original sampled point cloud or not ---// //--- others: true, Randlanet: false ---// equal_cloud = true

//** Batch processing parameters **// //--- If the 'batchnames**.txt' exists then true ---// use_existing_splited_batch = false

//--- Merge small tiles into one mesh (batch_size=row, sub_batch_size=column)---// batch_size = 1 sub_batch_size = 1

//--- Check which data use batch processing and point cloud sampling ---//
use_batch_processing_on_training = false
use_batch_processing_on_testing = false
use_batch_processing_on_predicting = false
use_batch_processing_on_validation = false

//--- Use point cloud region growing or use mesh region growing ---//
use_pointcloud_region_growing_on_training = false
use_pointcloud_region_growing_on_testing = false
use_pointcloud_region_growing_on_predicting = false
use_pointcloud_region_growing_on_validation = false

//--- Merge segments, usually for merged mesh region growing ---//
use_merged_segments_on_training = false
use_merged_segments_on_testing = false
use_merged_segments_on_predicting = false
use_merged_segments_on_validation = false
adjacent_radius = 0.5
adjacent_pt2plane_distance = 0.25
adjacent_seg_angle = 90

//--- Point cloud region growing parameters ---//
//adpative mesh: 0.5, 90; dense mesh: 0.3, 45
pcl_distance_to_plane = 0.5
pcl_accepted_angle = 45
pcl_minimum_region_size = 0
pcl_k_nn = 12

//---save texture in the output folder of each batch from original input ---// save_textures_in_predict = false

//** Feature dictionary specification (not parameters) **// use_feas { {segment_basic_features, 0}, {segment_eigen_features, 1}, {segment_color_features, 2}, {elevation_features, 3} };

use_mulsc_eles { {"10.0", 0}, {"20.0", 1}, {"40.0", 2} };

basic_feature_base_names { {"avg_center_z", 0}, {"interior_mat_radius", 1}, {"sum_area", 2}, {"relative_elevation", 3}, {"triangle_density", 4} };

eigen_feature_base_names { {"eigen_1", 0}, {"eigen_2", 1}, {"eigen_3", 2}, {"verticality", 3}, {"linearity", 4}, {"planarity", 5}, {"sphericity", 6}, {"anisotropy", 7}, {"eigenentropy", 8}, {"omnivariance", 9}, {"sumeigenvals", 10}, {"curvature", 11}, {"verticality_eig1", 12}, {"verticality_eig3", 13}, {"surface", 14}, {"volume", 15}, {"absolute_eigvec_1_moment_1st", 16}, {"absolute_eigvec_2_moment_1st", 17}, {"absolute_eigvec_3_moment_1st", 18}, {"absolute_eigvec_1_moment_2nd", 19}, {"absolute_eigvec_2_moment_2nd", 20}, {"absolute_eigvec_3_moment_2nd", 21}, {"vertical_moment_1st", 22}, {"vertical_moment_2nd", 23}, {"uniformity", 24} };

color_feature_base_names { {"red", 0}, {"green", 1}, {"blue", 2}, {"hue", 3}, {"sat", 4}, {"val", 5}, {"greenness", 6}, {"red_var", 7}, {"green_var", 8}, {"blue_var", 9}, {"hue_var", 10}, {"sat_var", 11}, {"val_var", 12}, {"greenness_var", 13}, {"hue_bin_0", 14}, {"hue_bin_1", 15}, {"hue_bin_2", 16}, {"hue_bin_3", 17}, {"hue_bin_4", 18}, {"hue_bin_5", 19}, {"hue_bin_6", 20}, {"hue_bin_7", 21}, {"hue_bin_8", 22}, {"hue_bin_9", 23}, {"hue_bin_10", 24}, {"hue_bin_11", 25}, {"hue_bin_12", 26}, {"hue_bin_13", 27}, {"hue_bin_14", 28}, {"sat_bin_0", 29}, {"sat_bin_1", 30}, {"sat_bin_2", 31}, {"sat_bin_3", 32}, {"sat_bin_4", 33}, {"val_bin_0", 34}, {"val_bin_1", 35}, {"val_bin_2", 36}, {"val_bin_3", 37}, {"val_bin_4", 38} };

However, the generated point cloud file still has label set to -1.

I’d like to propose a scenario: if I update the labels to include more categories or different ones from the current model, such as “window,” “roof,” etc., and I want to sample these labels as point clouds, should I place the labeled mesh models into the input/test../train folders and then skip other parameter configurations, directly running Generate_semantic_sampled_points? Conversely, if I have point cloud data with labels, should I place them in the pointcloud folder and run Evaluation_SOTA directly to transfer the point cloud labels back to the mesh model?

Xtian-hub commented 3 weeks ago

Hi, I swapped 'face_predict' and 'label' using the following code, and added the suffix "_mesh_classification" to the file name, to make the point cloud sampling results appear normal. However, I don't understand why the label would be misaligned with the face_predict attribute.

from plyfile import PlyData, PlyElement

input_file = "M:/umat/data_demo/output/validate/dawd_mesh_classification.ply"
ply_data = PlyData.read(input_file)

# Copy the face element and swap the 'face_predict' and 'label' columns.
face_data = ply_data['face'].data
new_face_data = face_data.copy()
new_face_data['face_predict'] = face_data['label']
new_face_data['label'] = face_data['face_predict']

# Rebuild the PLY with the original vertices and the modified faces, keeping the same ascii/binary format.
new_face_element = PlyElement.describe(new_face_data, 'face')
new_ply_data = PlyData([ply_data['vertex'], new_face_element], text=ply_data.text)

output_file = r"M:\umat\data_demo\input\validate\dawd_clas.ply"
new_ply_data.write(output_file)

WeixiaoGao commented 3 weeks ago

Hi, could you please separate your questions next time? A very long comment full of questions makes it difficult for me to maintain focus. If you have a question unrelated to the current issue, please create a new issue.

Back to your question:

Xtian-hub commented 3 weeks ago

Thank you for your response, and I apologize for my mistake. Currently, I have a question regarding the execution result of Test_config, which generated the ***_mesh_classification.ply file. In this file, the face_predict attribute holds the values that should belong to the label attribute, while the values in the label attribute are all -1. Here is a snippet of the data:

ply
format ascii 1.0
comment TextureFile Tile_+003_+005_L20_0.jpg
comment label 0 unclassified
comment label 1 terrain
comment label 2 high_vegetation
comment label 3 building
comment label 4 water
comment label 5 car
comment label 6 boat
element vertex 111138
property float x
property float y
property float z
element face 220539
property list uint32 int vertex_indices
property list uint32 float texcoord
property float r
property float g
property float b
property float nx
property float ny
property float nz
property float label_probabilities
property int face_segment_id
property int face_tile_index
property int label
property int face_predict
property int texnumber
end_header
3 5096 2076 1852 6 0.328977 0.593512 0.329252 0.59229 0.33008 0.592906 0.666667 0.333333 0 0.217659 0.234001 0.947559 0.29125 656 0 -1 1 0

WeixiaoGao commented 3 weeks ago

If you do not have ground truth labels for your test file, then, of course, the 'label' is set to a default value. 'face_predict' only saves the label that is predicted by our program.

Xtian-hub commented 3 weeks ago

So, what are the steps to use "trained_model.gz" to perform label prediction on my unlabeled data and correctly generate a point cloud with labels? The reason I'm asking is that I placed my own PLY file with its jpg texture into Input/train... (as shown in the attached image), set 'label_definition = label', and ran "Mesh_feature_extraction - Test_config - Generate_semantic_sampled_points". The generated point cloud file and the _mesh_segment.ply have a label attribute value of -1, but face_predict does not.

WeixiaoGao commented 2 weeks ago

In Test_config, you can use our pretrained model “trained_model.gz” as explained here. It can be used for prediction on unlabeled data.

Xtian-hub commented 2 weeks ago

Thank you for your patient response, but it seems that I didn't express myself clearly enough. I used your pre-trained "trained_model.gz" to predict the unlabeled data and obtained prediction results as well. However, the difference is that the prediction results appeared in the "face_predict" attribute instead of the "label" attribute. This issue is causing the subsequent step "Generate_semantic_sampled_points" to not correctly obtain the labels, resulting in all of them showing as -1. I believe my steps and the file structure are correct, but I am unsure why this issue has occurred. I placed the pre-trained "trained_model.gz" in the "model" directory, and my execution order was "Mesh_feature_extraction - Test_config - Generate_semantic_sampled_points". Additionally, I placed the necessary obj and jpg files in the "Input" folder. I have resolved my issue by using a script to replace the values of the "face_predict" and "label" attributes. I suspect there might be a bug in SUMS_Win10_binary_v1.3.1 that caused this situation. If not, please forgive my assumption, as it could have been due to an error in one of my steps that caused the program to run incorrectly. Lastly, thank you, Dr. Gao.