For ScanNet, you can use the following script to test (it is the last script of this file). It uses the flags TRAINER.run_scannet_test and TRAINER.scannet_testset_output_result_path ${YOUR_PATH_TO_SCANNET_TESTSET_OUTPUT_RESULT}:
```bash
CUDA_VISIBLE_DEVICES=0 python launch.py ddp_train.py --config config/default.yaml \
    GENERAL.exp_name evaluate_scannet_testset \
    TRAINER.name TwoStreamTrainer \
    DATA.dataset ScannetVoxelization2cmDataset \
    SCHEDULER.name PolyLR \
    TRAINER.epochs 180 \
    DATA.batch_size 2 \
    DATA.scannet_path ${YOUR_PATH_TO}/scannet_fully_supervised_preprocessed \
    DATA.sparse_label False \
    DATA.two_stream True \
    MODEL.two_stream_model_apply True \
    TRAINER.two_stream_seg_both True \
    TRAINER.run_scannet_test True \
    TRAINER.scannet_testset_output_result_path ${YOUR_PATH_TO_SCANNET_TESTSET_OUTPUT_RESULT} \
    MODEL.resume True \
    MODEL.resume_path ${YOUR_PATH_TO_RESUME_MODEL}
```
Since we did not provide the preprocessed ScanNet test set, you can generate it by following the instructions here.
Thank you for taking the time to respond. Is the operation the same for the S3DIS dataset?
Thank you again.
For S3DIS, we do not directly provide test scripts. You can use the `pred` generated by the model here. The ScanNet test code might also be helpful for generating S3DIS test results.
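A minimal sketch of that step (hedged: `model`, `sparse_input`, and `labels` are assumed to follow the trainer's forward interface; this is not an official test script):

```python
import torch

# Assumption: `model` is the trained network and `sparse_input` / `labels`
# are one preprocessed S3DIS batch, matching the snippets in this thread.
model.eval()
with torch.no_grad():
    ret = model(sparse_input, labels)             # forward pass on one batch
    semantic_scores = ret['semantic_scores']      # (num_points, num_classes) logits
    pred = torch.argmax(semantic_scores, dim=1)   # per-point predicted labels
```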
@xiaoxunlong Hi, thank you for your nice work! Regarding obtaining the test performance on the S3DIS dataset, you mentioned using `pred = torch.argmax(semantic_scores, dim=1)`. As I'm a beginner, I'm unsure how to obtain `semantic_scores`. Could you kindly explain how to do this, or share the testing scripts for the S3DIS dataset? I would greatly appreciate your help! Thank you!
The `semantic_scores` come from the model, as the code shows here.
The overall data-processing pipeline is: [S3DIS original data -> S3DIS preprocessed data -> S3DIS dataset -> point cloud model -> `semantic_scores` -> `pred`].
If you want to test on the S3DIS dataset, the trainer code might be helpful.
For a beginner, here are my suggestions. First, download the S3DIS preprocessed dataset that we already provide. Second, use the training scripts (you can try the first one, the 0.1% baseline) to train a point cloud model. Third, use the evaluation metric to evaluate the model you just trained (see the metric sketch below). Additionally, the visualization function that we provide here might help you inspect the semantic results.
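For the evaluation step, here is a generic mIoU computation sketch (hedged: this is the standard confusion-matrix formulation, not the repo's exact metric code, and the `ignore_label` value is an assumption):

```python
import numpy as np

def mean_iou(pred, gt, num_classes, ignore_label=255):
    """Per-class IoU and mIoU from flattened integer label arrays."""
    mask = gt != ignore_label
    pred, gt = pred[mask], gt[mask]
    # Confusion matrix: rows = ground truth, columns = prediction.
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    tp = np.diag(cm)
    union = cm.sum(axis=0) + cm.sum(axis=1) - tp
    iou = tp / np.maximum(union, 1)
    return iou, iou[union > 0].mean()

# Example usage (S3DIS has 13 semantic classes):
# iou, miou = mean_iou(pred.cpu().numpy(), labels.cpu().numpy(), num_classes=13)
```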
Thank you so much for your help!
@xiaoxunlong Hello, I really appreciate your helpful and patient guidance for beginners! I successfully trained a point cloud model and generated the corresponding weight file, model_best.pth. However, I encountered some issues when evaluating the model's performance on the test set.
I loaded the model with the following lines:

```python
args = get_arguments()
config_path = args.config
config = CfgNode(CfgNode.load_yaml_with_base(config_path))
model = build_model(config)
model.load_state_dict(torch.load('xxx/model_best.pth'), strict=False)
model.eval()
```

Next, I used the Area_5 scenes from stanford_fully_supervised_preprocessed as the test set, and made predictions as follows:

```python
ret = model(sparse_input, labels)
semantic_scores = ret['semantic_scores']
pred = torch.argmax(semantic_scores, dim=1)
```

However, I noticed that even when sparse_input remains the same, the pred results vary each time. It seems that some key elements may not have been successfully loaded during the model loading process, leading to these unstable predictions.
Could you please let me know if this is the correct way to load the model weights? Why do the results differ with the same input, and how can I ensure the model weights are loaded correctly? Thank you very much for taking the time to help me!
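One way to check whether every weight actually loaded is to inspect the return value of `load_state_dict` (a hedged diagnostic sketch, not repo code; the `'state_dict'` wrapping key is an assumption about how the trainer saves checkpoints):

```python
import torch

# With strict=False, any mismatched keys are silently skipped and stay
# randomly initialized, which could explain predictions that change
# between runs even with the same input.
state = torch.load('xxx/model_best.pth', map_location='cpu')
# Assumption: the checkpoint may be wrapped, e.g. {'state_dict': ...};
# adjust the key to however the trainer actually saves it.
if isinstance(state, dict) and 'state_dict' in state:
    state = state['state_dict']
missing, unexpected = model.load_state_dict(state, strict=False)
print('missing keys:', missing)        # parameters left at random init
print('unexpected keys:', unexpected)  # checkpoint entries that did not match
```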
I wanted to let you know that I have received it, but I am currently tied up with some urgent projects. I will make sure to get back to you with a detailed response in a month. Thank you for your understanding.
Thank you for the update! I understand you're busy and appreciate your time. I'll look forward to your detailed response next month.
Thank you for such nice work! I have a small question: after training, how do I test? Which parameters should be changed?