dvlab-research / Mask-Attention-Free-Transformer

Official Implementation for "Mask-Attention-Free Transformer for 3D Instance Segmentation"

The normal information in training #3

Closed RongkunYang closed 1 year ago

RongkunYang commented 1 year ago

Hi XinLai, thank you for your great work.

  1. I found that the code supports the SSTNet pretrained weights. If I want to train with normal information, could the pretrained backbone be released? Or is the backbone trained the same way as SSTNet, with only the input channels changed to 9?
  2. I'd like to know more about the S3DIS training too. Thank you!
X-Lai commented 1 year ago

Thanks for your interest in our work.

  1. Yes, you're correct (see the sketch after this list).
  2. We use another codebase (i.e., Mask3D) for S3DIS training, and we will release it soon. Please stay tuned.
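For reference, here is a minimal sketch of what "changing the input channels to 9" means when normals are added. The function name `build_features` is hypothetical and not from this repo (the actual feature assembly may live in the dataset/loader code); the sketch only illustrates the channel count.

```python
# Hedged sketch (not the repo's actual code): how the per-point input features
# grow from 6 to 9 channels when surface normals are concatenated.
import torch

def build_features(xyz, rgb, normal=None):
    """Concatenate per-point attributes into the backbone input features.

    xyz:    (N, 3) point coordinates
    rgb:    (N, 3) colors
    normal: (N, 3) surface normals (optional)
    """
    feats = [xyz, rgb]
    if normal is not None:
        feats.append(normal)
    return torch.cat(feats, dim=1)  # (N, 6) without normals, (N, 9) with them

# Example with 1000 random points
xyz, rgb, normal = torch.rand(1000, 3), torch.rand(1000, 3), torch.rand(1000, 3)
feats = build_features(xyz, rgb, normal)
print(feats.shape)  # torch.Size([1000, 9]) -> set the backbone's input channels to 9
```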
RongkunYang commented 1 year ago

OK, thank you for your fast response. I also ran into another problem when I trained the code with the following settings:

  1. SSTNet pretrained weights.
  2. PyTorch 1.12.1, Python 3.7, NVIDIA 3090 GPU.
  3. The default config file, with point cloud input features including xyz and rgb. The convergence curve is shown below (screenshot attached).

The best result achieved was AP = 57.1, lower than the reported 58.4. May I ask if there are other details I need to pay attention to?

triton99 commented 1 year ago

I also retrained the model. The best result was 57.6 AP, lower than the 58.4 AP reported in the paper.

(screenshot of the training results attached)
RongkunYang commented 1 year ago

@triton99 May I ask what your experiment settings are? I'm also wondering why my validation results during training seem unstable. Thank you.

triton99 commented 1 year ago

I used the default config, the SSTNet pretrained checkpoint from SPFormer, and a V100 GPU.

X-Lai commented 1 year ago

Thanks for your interest in our work. The validation results do fluctuate quite a bit (they depend on many factors, including the training environment and randomness in some operations). Typically, a variance within one point of mAP is acceptable.
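As an aside, if you want to narrow down how much of the fluctuation comes from randomness, a generic PyTorch seeding sketch (not specific to this repo) is below. Note that some CUDA and sparse-convolution ops remain nondeterministic, so this reduces run-to-run variance but does not eliminate it.

```python
# Generic seeding sketch (not from this repo): reduces, but does not remove,
# run-to-run variance; some CUDA/sparse ops stay nondeterministic.
import random
import numpy as np
import torch

def set_seed(seed: int = 42):
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Trade speed for determinism in cuDNN-backed ops
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```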