VinAIResearch / ISBNet

ISBNet: a 3D Point Cloud Instance Segmentation Network with Instance-aware Sampling and Box-aware Dynamic Convolution (CVPR 2023)
Apache License 2.0

Training my own dataset, metrics like AP are too low at train all. #32

Closed huanghuang113 closed 1 year ago

huanghuang113 commented 1 year ago

Hi, first of all thank you very much for your great contribution to point cloud instance segmentation. I prepared my own data following the STPLS3D pipeline, but my files are smaller (they are plant point clouds), so for both the train and val_250m splits I saved them directly as .pth files with `torch.save((coords, colors, sem_labels, instance_labels), pth_file)`, without chunking. When training the backbone I ran only 12 epochs; here are the backbone results: (image)
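For reference, a minimal sketch of the per-point arrays packed into each .pth file by that `torch.save` call. The shapes, dtypes, and the choice of class 0 as the "no instance" class are illustrative assumptions, not taken from the actual dataset:

```python
import numpy as np

# Hypothetical sketch of one scene's arrays before
# torch.save((coords, colors, sem_labels, instance_labels), pth_file).
# N and all values are illustrative; dtypes follow common STPLS3D-style
# preprocessing but may differ in the real pipeline.
rng = np.random.default_rng(0)
N = 1000  # number of points in one plant scan

coords = rng.random((N, 3)).astype(np.float32)              # xyz positions
colors = rng.random((N, 3)).astype(np.float32)              # rgb features
sem_labels = rng.integers(0, 2, N).astype(np.int64)         # semantic_classes: 2
instance_labels = rng.integers(0, 5, N).astype(np.int64)    # per-point instance ids

# Points that belong to no instance should carry the ignore label from the
# config (ignore_label: -100) so the loss skips them. Treating class 0 as
# the no-instance class here is an assumption for illustration only.
instance_labels[sem_labels == 0] = -100

# With PyTorch installed, the tuple is then serialized, e.g.:
# torch.save((coords, colors, sem_labels, instance_labels), pth_file)
```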

But when I run TRAIN ALL, the results are not good. The evaluation at the end of the first epoch is very bad, as shown below: (image)

And after 12 epochs, the results are still not good, as shown below: (image)

Here are my config file settings:

```yaml
model:
  channels: 16
  num_blocks: 7
  semantic_classes: 2
  instance_classes: 2
  sem2ins_classes: []
  semantic_only: False
  semantic_weight: [1.0, 1.0]
  with_coords: False
  ignore_label: -100
  voxel_scale: 50
  use_spp_pool: False
  filter_bg_thresh: 0.1
  iterative_sampling: False
  mask_dim_out: 16
  instance_head_cfg:
    dec_dim: 64
    n_sample_pa1: 2048
    n_queries: 256
    radius_scale: 10
    radius: 0.04
    neighbor: 16
  test_cfg:
    x4_split: False
    logit_thresh: 0.0
    score_thresh: 0.2
    npoint_thresh: 10
    type_nms: 'matrix'
    topk: 100

fixed_modules: ['input_conv', 'unet', 'output_layer', 'semantic_linear',
                'offset_linear', 'offset_vertices_linear', 'box_conf_linear']

data:
  train:
    type: 'plant'
    data_root: 'dataset/plant'
    prefix: 'train1'
    suffix: '.pth'
    training: True
    repeat: 3
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000
  test:
    type: 'plant'
    data_root: 'dataset/plant'
    prefix: 'val_250m1'
    suffix: '.pth'
    training: False
    voxel_cfg:
      scale: 50
      spatial_shape: [128, 512]
      max_npoint: 250000
      min_npoint: 5000

dataloader:
  train:
    batch_size: 8
    num_workers: 8
  test:
    batch_size: 1
    num_workers: 1

optimizer:
  type: 'AdamW'
  lr: 0.001
  weight_decay: 0.0001

save_cfg:
  semantic: False
  offset: False
  instance: True
  offset_vertices: False
  nmc_clusters: False
  object_conditions: False

fp16: True
epochs: 12
step_epoch: 4
save_freq: 4
pretrain: 'work_dirs/plant/isbnet_backbone_plant/exp3/latest.pth'
work_dir: ''
```

Can you give me some advice please, thank you very much!

Endvour commented 11 months ago

have you solved the problem?

Lizhinwafu commented 3 months ago

I also have this question.

huanghuang113 commented 3 months ago

> I also have this question.

Train for a few more epochs and it should be fine; the accuracy gets higher at around epoch 20-30.

Lizhinwafu commented 3 months ago

> Train for a few more epochs and it should be fine; the accuracy gets higher at around epoch 20-30.

(image) I trained the SoftGroup model for 100 epochs, and the AP for one class is still 0.000.

huanghuang113 commented 3 months ago

Maybe you didn't set up the `get_instances` method correctly.
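A zero AP for one class often comes from a mismatch in how per-point ground-truth instance ids are encoded for the evaluator. A minimal sketch, assuming the ScanNet/SoftGroup-style `sem * 1000 + inst` packing; the helper name `encode_gt_ids` is mine, not ISBNet's, and the exact convention in `isbnet/model/isbnet.py` (`get_instance`) should be checked against this:

```python
import numpy as np

def encode_gt_ids(sem_labels, instance_labels, ignore_label=-100):
    """Pack per-point semantic + instance labels into single gt ids.

    Assumed ScanNet-style convention: id = sem * 1000 + inst + 1.
    Points carrying the ignore label get id 0 so the evaluator skips them.
    """
    gt = np.zeros_like(instance_labels)
    valid = (instance_labels != ignore_label) & (sem_labels != ignore_label)
    gt[valid] = sem_labels[valid] * 1000 + instance_labels[valid] + 1
    return gt

# Illustrative labels: one point of class 0, two of class 1, one ignored.
sem = np.array([0, 1, 1, -100])
inst = np.array([0, 2, -100, -100])
gt_ids = encode_gt_ids(sem, inst)
```

If the class that scores 0.000 never produces valid ids at this step (for example because its points all carry the ignore label after preprocessing), the evaluator sees no ground-truth instances for it.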

Lizhinwafu commented 3 months ago

I didn't change the original code. Do you know which lines need to change?

huanghuang113 commented 3 months ago

I was using ISBNet and modified it here: `isbnet/model/isbnet.py` (`get_instance`).

Lizhinwafu commented 3 months ago

Thanks
