tudelft3d / PSSNet

PSSNet: Planarity-sensible Semantic Segmentation of Large-scale Urban Meshes
GNU General Public License v3.0

What is the true dataset of step2? #4

Closed JurinJC closed 10 months ago

JurinJC commented 1 year ago

Hi author: In step 2, must I train the network? Are all the inputs of step 2 generated from step 1 (train, test, and predictions)?

Yours! Best regards! Jurin

WeixiaoGao commented 1 year ago

Please refer to here for how to perform step 2. You can download the pre-trained model from here.

SpikeXCZ commented 1 year ago

I have a similar question: what folders should I create for step 2, and where should I put the over-segmentation results? I'm confused by the README command `--dataset custom_dataset --ROOT_PATH $CUSTOM_SET_DIR`; I didn't get any `custom*`-named files from step 1...

WeixiaoGao commented 1 year ago

You need to create two folders to store the results of step 1: ../datasets/custom_set/data/ and ../datasets/custom_set/pssnet_graphs/. The remaining folders will be created automatically. Please refer to the above reply for more information on where to place the over-segmentation results. You can ignore "custom_dataset" as we don't really use it at this stage to differentiate between other datasets.
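For reference, a minimal sketch of that layout, assuming the `../datasets/custom_set/` paths from the reply above (the automatically created sub-folder names in the comment are taken from later messages in this thread and may differ):

```python
# Minimal sketch of the step-2 folder layout described above.
from pathlib import Path

root = Path("../datasets/custom_set")  # i.e. $CUSTOM_SET_DIR

# The two folders you create yourself, holding the step-1 results:
(root / "data").mkdir(parents=True, exist_ok=True)
(root / "pssnet_graphs").mkdir(parents=True, exist_ok=True)

# Folders such as features/, parsed/ and results/ (mentioned later in this
# thread) are created automatically by the step-2 scripts.
```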

JurinJC commented 1 year ago

> Please refer to here for how to perform step 2. You can download the pre-trained model from here.

But I can't find the training data for step 2.

WeixiaoGao commented 1 year ago

The training data for step 2 can be generated in step 1, and the size of the data can be very large.

JurinJC commented 1 year ago

I have run step 1 with the mode (PSSNet pipeline for GCN). The test & validate results (in pssnet_graphs/pcl) are generated, but I still can't get the training data for step 2, and the trained model generated by step 1 is very small. (screenshots attached)

WeixiaoGao commented 1 year ago

Hi, thanks for pointing out this issue. I have fixed this small bug in the "dev" branch and created a PR. You can update the project and use "dev" to get the training data for step 2.

JurinJC commented 1 year ago

If I only want to run my prediction data with your step-2 model, can I just run `python partition/pssnet_visualize.py --dataset custom_dataset --ROOT_PATH $CUSTOM_SET_DIR --res_file "../datasets/custom_set/results/predictions_test" --output_type r`? Also, what is the res_file (predictions_test.h5)? I don't have this file.

WeixiaoGao commented 1 year ago

pssnet_visualize.py is only used to write the visualization results. Running your prediction data is similar to running the test data, as shown here. For the trained model and other related files, see here.

JurinJC commented 1 year ago

When I run the test data as in the example, I get an error like this (screenshot attached). I can't fix it. Is this error related to the torch version? What is your torch version?

WeixiaoGao commented 1 year ago

Hi, I am using PyTorch 1.12 if I remember correctly, but I don't think this is related to the PyTorch version. The parameters of the trained model don't seem to match the current code. I'm not sure what is causing this; maybe it's because of the new data. Can you try to retrain the model or test it on the SUM dataset?
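One way to narrow down a mismatch like this is to inspect what the checkpoint actually contains. This is a generic PyTorch sketch, not part of the PSSNet code; the filename and the `state_dict` key are assumptions, so adapt them to how pssnet_main.py actually saves its models:

```python
import torch

# Load the checkpoint on CPU; filename and the 'state_dict' key are assumptions.
ckpt = torch.load("model.pth.tar", map_location="cpu")
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt.state_dict()

# Print every parameter name and shape stored in the checkpoint, then compare
# them with model.state_dict() from the current code: a missing or renamed key,
# or a shape mismatch, points to a code/checkpoint version difference rather
# than a PyTorch version problem.
for name, tensor in state.items():
    print(name, tuple(getattr(tensor, "shape", ())))
```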

JurinJC commented 1 year ago

Hi, actually I used the SUM dataset you provided (screenshot attached); these files were generated in step 1.

JurinJC commented 1 year ago

When I retrained the model using the SUM dataset, I got an error again (screenshots attached). When I load the .h5 files, I can't find where the parameter "entry" comes from, or what `G.vs[s]['v']` actually is.

JurinJC commented 1 year ago

In this code:

`cloud, add_feas = load_superpoint(args, db_path + '/features/' + fname + '.h5', G.vs[s]['v'], train, test_seed_offset, s_label)`

yours is "/parsed/", I dont have this folder, so I change it to '/features/'.

WeixiaoGao commented 1 year ago

Please do not change any of the folder names, as this will cause unexpected errors. You don't have "/parsed/" because you did not run the preprocessing step that generates it. You need to run pssnet_custom_dataset.py to generate the "/parsed/" folder before you run pssnet_main.py, and don't forget to run pssnet_partition.py first to generate '/features/'. So the pipeline is pssnet_partition.py -> pssnet_custom_dataset.py -> pssnet_main.py -> pssnet_visualize.py.
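For anyone following along, here is a rough sketch of that order expressed as subprocess calls. Only the `--dataset`/`--ROOT_PATH` flags are taken from this thread; the sub-directory of each script (apart from partition/pssnet_visualize.py) and any additional arguments are assumptions you should check against the README:

```python
import subprocess

ROOT = "../datasets/custom_set"  # i.e. $CUSTOM_SET_DIR
common = ["--dataset", "custom_dataset", "--ROOT_PATH", ROOT]

# Order matters: partition -> custom_dataset -> main -> visualize.
# Prefix each script with its sub-directory in the repo (only
# partition/pssnet_visualize.py is shown verbatim in this thread), and add
# the remaining arguments each script expects per the README.
for script in [
    "pssnet_partition.py",       # writes /features/
    "pssnet_custom_dataset.py",  # writes /parsed/
    "pssnet_main.py",            # trains / tests the GCN
    "pssnet_visualize.py",       # writes the visualization results
]:
    subprocess.run(["python", script, *common], check=True)
```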

JurinJC commented 1 year ago

When I run only the prediction data with operating_mode = PSSNet_pipeline_for_GCN, I only get the results shown in the attached screenshots: the classification and the mesh_seg in segments. I can't run PSSNet_oversegmentation and the subsequent operations, so I can't get the prediction dataset for step 2.

WeixiaoGao commented 1 year ago

Have you updated your code and switched to the dev branch? The current main branch contains bugs; dev will be merged into main after the updated code has been reviewed.

JurinJC commented 1 year ago

I have switched to the dev branch. However, when I load the trained model, I always get an error.

WeixiaoGao commented 1 year ago

Could you provide more information about the error?

JurinJC commented 12 months ago

Thanks for your help. I have run the whole code, but my result is very bad; can you give me some help? (I used my own data.) This is the input mesh (screenshot attached).

This is the result of step 1 (screenshot attached).

This is the segmentation result (screenshot attached).

WeixiaoGao commented 11 months ago

Hi, it seems your mesh contains topological errors such as duplicated or non-manifold vertices, which could lead to bad results. These errors can break the continuity of the mesh surface and can be difficult to fix, especially if the vertices are related to texture coordinates.
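If it helps, here is a quick way to check for those issues before running step 1. This is a sketch using the trimesh library, which is not part of PSSNet; the input filename is a placeholder:

```python
import numpy as np
import trimesh

# Load without trimesh's automatic processing so duplicates are preserved;
# assumes a single-mesh file (the filename is a placeholder).
mesh = trimesh.load("input_mesh.ply", process=False)

# Exact duplicate vertices (texture seams often duplicate vertices on purpose,
# so a non-zero count is not automatically fatal, but large counts are suspicious).
n_unique = len(np.unique(mesh.vertices, axis=0))
print("duplicated vertices:", len(mesh.vertices) - n_unique)

# Non-manifold edges: any edge referenced by more than two faces.
edges, counts = np.unique(mesh.edges_sorted, axis=0, return_counts=True)
print("non-manifold edges:", int((counts > 2).sum()))

print("watertight:", mesh.is_watertight)
```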

JurinJC commented 11 months ago

I have fixed the input, but the result is still bad. Can you give me some advice? (screenshots attached)

WeixiaoGao commented 11 months ago

For me, this result is better than your previous one. You can further fine-tune the "balance parameter" to get better results on your dataset. However, from your second image, the classification of planar and non-planar is not as good as expected. I guess you are not using colour information?

JurinJC commented 11 months ago

Thanks for your answer. Actually I have used the texture, so I think I also used the colour information.

WeixiaoGao commented 11 months ago

If you use our pre-trained model on your data, the results for planar and non-planar classification are acceptable, considering the significant domain gap between your data and our benchmark dataset.