SamsungLabs / tr3d

[ICIP2023] TR3D: Towards Real-Time Indoor 3D Object Detection

I'm interested in using your model to detect all classes in the SUN RGB-D dataset. #9

Closed LemonWade closed 1 year ago

LemonWade commented 1 year ago

Thank you for your excellent work. I have successfully replicated your results on the SUN RGB-D dataset, and I am currently in the process of replicating your work on ScanNet v2. I am conducting experiments on 3D object detection with a few-shot learning approach, and I'm interested in using your model to detect all classes in the SUN RGB-D dataset. Can I test the performance of your model on all classes just by modifying the following code?

tr3d/tools/data_converter/sunrgbd_data_utils.py

class SUNRGBDData(object):
    """SUNRGBD data.

    Generate scannet infos for sunrgbd_converter.

    Args:
        root_path (str): Root path of the raw data.
        split (str, optional): Set split type of the data. Default: 'train'.
        use_v1 (bool, optional): Whether to use v1. Default: False.
    """

    def __init__(self, root_path, split='train', use_v1=False):
        self.root_dir = root_path
        self.split = split
        self.split_dir = osp.join(root_path, 'sunrgbd_trainval')
        # self.classes = [
        #     'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
        #     'night_stand', 'bookshelf', 'bathtub'
        # ]
        self.classes = [
            'wall', 'floor', 'cabinet', 'bed', 'chair', 'sofa', 'table', 'door',
            'window', 'bookshelf', 'picture', 'counter', 'blinds', 'desk',
            'shelves', 'curtain', 'dresser', 'pillow', 'mirror', 'floor_mat',
            'clothes', 'ceiling', 'books', 'fridge', 'tv', 'paper', 'towel',
            'shower_curtain', 'box', 'whiteboard', 'person', 'night_stand',
            'toilet', 'sink', 'lamp', 'bathtub', 'bag'
        ]

Thank you in advance!

filaPro commented 1 year ago

Hi @LemonWade, I think something like this should work. You will also need to modify the list of classes in SUNRGBDDataset and the num_classes parameter in the model config.

LemonWade commented 1 year ago

Thank you for your response and suggestions. I will make the modifications as soon as possible. Thanks again.

LemonWade commented 1 year ago

Thank you for your previous advice. I've made the recommended modifications, but I'm encountering a CUDA error. I consulted GPT-4, and its feedback was that the problem seems to be with 'label2level=[1, 1, 1, 0, 0, 1, 0, 0, 1, 0]'. I apologize, as I'm not very skilled in this area. Could you guide me on how to correct this issue?

    head=dict(
        type='TR3DHead',
        in_channels=256,
        n_reg_outs=8,
        n_classes=10,
        voxel_size=voxel_size,
        assigner=dict(
            type='TR3DAssigner',
            top_pts_threshold=6,
            label2level=[1, 1, 1, 0, 0, 1, 0, 0, 1, 0]),
        bbox_loss=dict(type='RotatedIoU3DLoss', mode='diou', reduction='none')),

Here are the modifications I've made to these three files.

tr3d/configs/tr3d/tr3d_sunrgbd-3d-10class.py

class_names = (
            'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
            'night_stand', 'bookshelf', 'bathtub', 'cabinet', 'door', 'window',
            'picture', 'counter', 'curtain', 'sink', 'garbagebin'
        )

tr3d/mmdet3d/datasets/sunrgbd_dataset.py

    CLASSES = (
            'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
            'night_stand', 'bookshelf', 'bathtub', 'cabinet', 'door', 'window',
            'picture', 'counter', 'curtain', 'sink', 'garbagebin'
    )

tr3d/tools/data_converter/sunrgbd_data_utils.py

        self.classes = [
            'bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
            'night_stand', 'bookshelf', 'bathtub', 'cabinet', 'door', 'window',
            'picture', 'counter', 'curtain', 'sink', 'garbagebin'
        ]

I've gone through the labelv1 folder in the SUN RGB-D dataset and found a total of 1140 classes; I've initially selected 18 classes for experimentation. Is it correct to directly set label2level to the same value as in tr3d/configs/tr3d/tr3d_scannet-3d-18class.py (label2level=[0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0])? If I choose to include more classes, what rules should I follow when modifying label2level? Thank you in advance.

filaPro commented 1 year ago

Ahaha, chatgpt is really impressive. Yes, sorry, this label2level thing also needs to be modified. If you have 18 classes you need 18 zeros or ones in this array. I don't have the correct answer here, but the intuition is as follows: 1 is for the ~9 largest of your classes and 0 is for the other (smallest) ~9 classes. To analyze the size of a class you can run a loop through the annotation file and average the sizes of all objects of that class. You can also think about it from general knowledge, e.g. fridge and bed are large while picture and tv are small.
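The size-analysis loop could look roughly like the sketch below, assuming the mmdet3d-style info format where each entry of sunrgbd_infos_train.pkl carries an annos dict with name and dimensions arrays (the exact keys may differ between versions):

import pickle
from collections import defaultdict

import numpy as np

with open('data/sunrgbd/sunrgbd_infos_train.pkl', 'rb') as f:
    infos = pickle.load(f)

# collect a simple size proxy (box volume) for every annotated object
volumes = defaultdict(list)
for info in infos:
    annos = info.get('annos', {})
    for name, dim in zip(annos.get('name', []), annos.get('dimensions', [])):
        volumes[name].append(float(np.prod(dim)))

# average per class and print from largest to smallest
mean_volume = {name: float(np.mean(v)) for name, v in volumes.items()}
for name, vol in sorted(mean_volume.items(), key=lambda x: -x[1]):
    print(f'{name}: {vol:.3f}')

Classes in the upper half of this ranking would then get 1 in label2level and the lower half 0.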

LemonWade commented 1 year ago

Thank you for your quick response. The program is now running, and I will share the results with you as soon as they are available.

guoqingyin commented 1 year ago

Actually, we have the object 3D size distribution: https://rgbd.cs.princeton.edu/supp.pdf, see page 5.

filaPro commented 1 year ago

Great, didn't know it)

LemonWade commented 1 year ago

Thank you for your response. Here is a portion of the log when the code runs to the third epoch.

bbox_loss: 0.3723, cls_loss: 0.2349, loss: 0.6073, grad_norm: 1.7113
bbox_loss: 0.3869, cls_loss: 0.2698, loss: 0.6567, grad_norm: 2.2727
mmdet - INFO - Saving checkpoint at 3 epochs
mmdet - INFO - 
+-------------+---------+---------+---------+---------+
| classes     | AP_0.25 | AR_0.25 | AP_0.50 | AR_0.50 |
+-------------+---------+---------+---------+---------+
| bed         | 0.8183  | 0.8563  | 0.6350  | 0.6951  |
| dresser     | 0.3230  | 0.4633  | 0.2318  | 0.3716  |
| night_stand | 0.5179  | 0.7255  | 0.3977  | 0.6196  |
| bookshelf   | 0.2810  | 0.5390  | 0.0836  | 0.2411  |
| door        | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| picture     | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| sofa        | 0.6026  | 0.6699  | 0.5176  | 0.5901  |
| desk        | 0.2882  | 0.5032  | 0.1283  | 0.3071  |
| table       | 0.4643  | 0.6214  | 0.2927  | 0.4446  |
| chair       | 0.7300  | 0.7743  | 0.6165  | 0.6871  |
| counter     | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| sink        | 0.1614  | 0.2943  | 0.0155  | 0.0755  |
| toilet      | 0.8717  | 0.9103  | 0.6953  | 0.7931  |
| window      | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
| curtain     | 0.0000  | 0.0000  | 0.0000  | 0.0000  |
+-------------+---------+---------+---------+---------+
| Overall     | 0.3372  | 0.4238  | 0.2409  | 0.3217  |
+-------------+---------+---------+---------+---------+

I've noticed that some classes consistently score zero on the validation set, and the number of evaluated classes doesn't match the 18 that I specified. This is puzzling to me.

The file you provided made me wonder whether certain categories like 'window' and 'curtain' may simply not exist in this dataset. Perhaps my method of going through the labelv1 folder of SUN RGB-D to collect class names was incorrect. I plan to modify the 'classes' in these four files, regenerate the data, and retrain the model.

Thank you once again for your prompt response. I will complete the experiment as soon as possible.

guoqingyin commented 1 year ago

I'm doing the same experiment. This is because your .pkl file, sunrgbd_infos_train.pkl, which contains all the annotations, does not contain the new classes you added. You need to re-run create_data.py. @LemonWade

filaPro commented 1 year ago

Yes, it looks like the non-zero scores are only for the old 10 classes. Please check the presence of the other 8 in the validation and training .pkl files. Also, please attach your config and log files.
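For reference, a quick way to check which classes actually ended up in the regenerated .pkl files might be something like this sketch (again assuming the mmdet3d-style infos with an annos dict holding a name array; adapt the keys if your version differs):

import pickle
from collections import Counter

def count_classes(pkl_path):
    # count how many annotated boxes of each class an infos file contains
    with open(pkl_path, 'rb') as f:
        infos = pickle.load(f)
    counter = Counter()
    for info in infos:
        counter.update(info.get('annos', {}).get('name', []))
    return counter

for split in ('train', 'val'):
    print(split, dict(count_classes(f'data/sunrgbd/sunrgbd_infos_{split}.pkl')))

If any of the 8 new classes are missing from either split, the detector cannot learn or be evaluated on them.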

LemonWade commented 1 year ago

Following your advice and suggestions, I've made the changes, regenerated the data, and checked the .pkl files. I've confirmed that indices for all 29 classes are present. Below are a portion of the log and my config file.

2023-06-15 03:24:13,649 - mmdet - INFO - Epoch [5][50/1652] lr: 1.000e-03, eta: 2:04:45, time: 0.641, data_time: 0.096, memory: 3411, bbox_loss: 0.4651, cls_loss: 0.3123, loss: 0.7774, grad_norm: 2.9510
2023-06-15 03:24:42,055 - mmdet - INFO - Epoch [5][100/1652]    lr: 1.000e-03, eta: 2:04:17, time: 0.568, data_time: 0.024, memory: 3411, bbox_loss: 0.4629, cls_loss: 0.3133, loss: 0.7761, grad_norm: 3.0145
2023-06-15 03:25:10,516 - mmdet - INFO - Epoch [5][150/1652]    lr: 1.000e-03, eta: 2:03:48, time: 0.569, data_time: 0.024, memory: 3411, bbox_loss: 0.4572, cls_loss: 0.3107, loss: 0.7679, grad_norm: 2.6357
2023-06-15 03:25:39,276 - mmdet - INFO - Epoch [5][200/1652]    lr: 1.000e-03, eta: 2:03:21, time: 0.575, data_time: 0.023, memory: 3411, bbox_loss: 0.4619, cls_loss: 0.3125, loss: 0.7744, grad_norm: 2.5914
2023-06-15 03:26:07,970 - mmdet - INFO - Epoch [5][250/1652]    lr: 1.000e-03, eta: 2:02:53, time: 0.574, data_time: 0.020, memory: 3411, bbox_loss: 0.4694, cls_loss: 0.3222, loss: 0.7916, grad_norm: 2.4225
2023-06-15 03:26:36,237 - mmdet - INFO - Epoch [5][300/1652]    lr: 1.000e-03, eta: 2:02:24, time: 0.565, data_time: 0.024, memory: 3544, bbox_loss: 0.4609, cls_loss: 0.3078, loss: 0.7687, grad_norm: 2.6713
2023-06-15 03:27:04,274 - mmdet - INFO - Epoch [5][350/1652]    lr: 1.000e-03, eta: 2:01:55, time: 0.561, data_time: 0.024, memory: 3544, bbox_loss: 0.4629, cls_loss: 0.3103, loss: 0.7733, grad_norm: 2.6457
2023-06-15 03:27:32,682 - mmdet - INFO - Epoch [5][400/1652]    lr: 1.000e-03, eta: 2:01:26, time: 0.568, data_time: 0.024, memory: 3544, bbox_loss: 0.4605, cls_loss: 0.3085, loss: 0.7690, grad_norm: 2.4223
2023-06-15 03:28:00,920 - mmdet - INFO - Epoch [5][450/1652]    lr: 1.000e-03, eta: 2:00:58, time: 0.565, data_time: 0.024, memory: 3544, bbox_loss: 0.4625, cls_loss: 0.3165, loss: 0.7790, grad_norm: 2.4785
2023-06-15 03:28:29,413 - mmdet - INFO - Epoch [5][500/1652]    lr: 1.000e-03, eta: 2:00:29, time: 0.570, data_time: 0.024, memory: 3544, bbox_loss: 0.4659, cls_loss: 0.3175, loss: 0.7834, grad_norm: 2.6832
2023-06-15 03:28:58,073 - mmdet - INFO - Epoch [5][550/1652]    lr: 1.000e-03, eta: 2:00:01, time: 0.573, data_time: 0.023, memory: 3544, bbox_loss: 0.4598, cls_loss: 0.3137, loss: 0.7735, grad_norm: 2.4641
2023-06-15 03:29:26,570 - mmdet - INFO - Epoch [5][600/1652]    lr: 1.000e-03, eta: 1:59:33, time: 0.570, data_time: 0.022, memory: 3544, bbox_loss: 0.4596, cls_loss: 0.3102, loss: 0.7698, grad_norm: 2.4267
2023-06-15 03:29:55,292 - mmdet - INFO - Epoch [5][650/1652]    lr: 1.000e-03, eta: 1:59:05, time: 0.574, data_time: 0.022, memory: 3544, bbox_loss: 0.4610, cls_loss: 0.3190, loss: 0.7800, grad_norm: 2.7117
2023-06-15 03:30:24,030 - mmdet - INFO - Epoch [5][700/1652]    lr: 1.000e-03, eta: 1:58:37, time: 0.575, data_time: 0.024, memory: 3544, bbox_loss: 0.4549, cls_loss: 0.3039, loss: 0.7587, grad_norm: 2.7952
2023-06-15 03:30:52,470 - mmdet - INFO - Epoch [5][750/1652]    lr: 1.000e-03, eta: 1:58:09, time: 0.569, data_time: 0.024, memory: 3544, bbox_loss: 0.4571, cls_loss: 0.3115, loss: 0.7686, grad_norm: 2.4918
2023-06-15 03:31:21,037 - mmdet - INFO - Epoch [5][800/1652]    lr: 1.000e-03, eta: 1:57:40, time: 0.571, data_time: 0.024, memory: 3544, bbox_loss: 0.4633, cls_loss: 0.3190, loss: 0.7823, grad_norm: 2.3813
2023-06-15 03:31:49,140 - mmdet - INFO - Epoch [5][850/1652]    lr: 1.000e-03, eta: 1:57:11, time: 0.562, data_time: 0.021, memory: 3544, bbox_loss: 0.4511, cls_loss: 0.3006, loss: 0.7517, grad_norm: 2.6739
2023-06-15 03:32:17,751 - mmdet - INFO - Epoch [5][900/1652]    lr: 1.000e-03, eta: 1:56:43, time: 0.572, data_time: 0.023, memory: 3544, bbox_loss: 0.4501, cls_loss: 0.3089, loss: 0.7590, grad_norm: 2.6701
2023-06-15 03:32:46,445 - mmdet - INFO - Epoch [5][950/1652]    lr: 1.000e-03, eta: 1:56:15, time: 0.574, data_time: 0.021, memory: 3544, bbox_loss: 0.4568, cls_loss: 0.3006, loss: 0.7574, grad_norm: 2.5393
2023-06-15 03:33:14,562 - mmdet - INFO - Epoch [5][1000/1652]   lr: 1.000e-03, eta: 1:55:46, time: 0.562, data_time: 0.023, memory: 3544, bbox_loss: 0.4547, cls_loss: 0.3087, loss: 0.7634, grad_norm: 2.4555
2023-06-15 03:33:42,625 - mmdet - INFO - Epoch [5][1050/1652]   lr: 1.000e-03, eta: 1:55:17, time: 0.561, data_time: 0.024, memory: 3544, bbox_loss: 0.4549, cls_loss: 0.3041, loss: 0.7590, grad_norm: 2.5187
2023-06-15 03:34:11,470 - mmdet - INFO - Epoch [5][1100/1652]   lr: 1.000e-03, eta: 1:54:50, time: 0.577, data_time: 0.023, memory: 3544, bbox_loss: 0.4630, cls_loss: 0.3064, loss: 0.7694, grad_norm: 2.6876
2023-06-15 03:34:39,914 - mmdet - INFO - Epoch [5][1150/1652]   lr: 1.000e-03, eta: 1:54:21, time: 0.569, data_time: 0.024, memory: 3544, bbox_loss: 0.4661, cls_loss: 0.3070, loss: 0.7730, grad_norm: 2.7863
2023-06-15 03:35:07,857 - mmdet - INFO - Epoch [5][1200/1652]   lr: 1.000e-03, eta: 1:53:52, time: 0.559, data_time: 0.023, memory: 3544, bbox_loss: 0.4562, cls_loss: 0.3087, loss: 0.7649, grad_norm: 2.6824
2023-06-15 03:35:35,967 - mmdet - INFO - Epoch [5][1250/1652]   lr: 1.000e-03, eta: 1:53:23, time: 0.562, data_time: 0.023, memory: 3544, bbox_loss: 0.4474, cls_loss: 0.3044, loss: 0.7518, grad_norm: 2.6275
2023-06-15 03:36:04,814 - mmdet - INFO - Epoch [5][1300/1652]   lr: 1.000e-03, eta: 1:52:55, time: 0.577, data_time: 0.022, memory: 3544, bbox_loss: 0.4610, cls_loss: 0.3142, loss: 0.7752, grad_norm: 2.5686
2023-06-15 03:36:33,243 - mmdet - INFO - Epoch [5][1350/1652]   lr: 1.000e-03, eta: 1:52:27, time: 0.569, data_time: 0.022, memory: 3544, bbox_loss: 0.4592, cls_loss: 0.2996, loss: 0.7588, grad_norm: 2.6789
2023-06-15 03:37:01,956 - mmdet - INFO - Epoch [5][1400/1652]   lr: 1.000e-03, eta: 1:51:59, time: 0.574, data_time: 0.024, memory: 3544, bbox_loss: 0.4499, cls_loss: 0.3031, loss: 0.7530, grad_norm: 2.7653
2023-06-15 03:37:30,791 - mmdet - INFO - Epoch [5][1450/1652]   lr: 1.000e-03, eta: 1:51:31, time: 0.577, data_time: 0.024, memory: 3544, bbox_loss: 0.4508, cls_loss: 0.3067, loss: 0.7575, grad_norm: 2.4384
2023-06-15 03:37:59,356 - mmdet - INFO - Epoch [5][1500/1652]   lr: 1.000e-03, eta: 1:51:03, time: 0.571, data_time: 0.024, memory: 3544, bbox_loss: 0.4592, cls_loss: 0.3018, loss: 0.7610, grad_norm: 2.6049
2023-06-15 03:38:27,737 - mmdet - INFO - Epoch [5][1550/1652]   lr: 1.000e-03, eta: 1:50:34, time: 0.568, data_time: 0.022, memory: 3544, bbox_loss: 0.4477, cls_loss: 0.2970, loss: 0.7447, grad_norm: 2.5310
2023-06-15 03:38:56,555 - mmdet - INFO - Epoch [5][1600/1652]   lr: 1.000e-03, eta: 1:50:06, time: 0.576, data_time: 0.024, memory: 3544, bbox_loss: 0.4519, cls_loss: 0.3044, loss: 0.7563, grad_norm: 2.2959
2023-06-15 03:39:25,467 - mmdet - INFO - Epoch [5][1650/1652]   lr: 1.000e-03, eta: 1:49:38, time: 0.578, data_time: 0.024, memory: 3544, bbox_loss: 0.4521, cls_loss: 0.2950, loss: 0.7471, grad_norm: 2.4587
2023-06-15 03:39:27,102 - mmdet - INFO - Saving checkpoint at 5 epochs
2023-06-15 03:52:19,503 - mmdet - INFO - 
+-------------+---------+---------+---------+---------+
| classes     | AP_0.25 | AR_0.25 | AP_0.50 | AR_0.50 |
+-------------+---------+---------+---------+---------+
| bed         | 0.8492  | 0.9942  | 0.6032  | 0.7709  |
| table       | 0.5534  | 0.9740  | 0.3379  | 0.6934  |
| sofa        | 0.6919  | 0.9841  | 0.5711  | 0.8070  |
| chair       | 0.8208  | 0.9731  | 0.6736  | 0.8390  |
| toilet      | 0.9018  | 1.0000  | 0.6541  | 0.7862  |
| desk        | 0.3759  | 0.9410  | 0.1201  | 0.5223  |
| dresser     | 0.4020  | 0.9312  | 0.3203  | 0.7798  |
| night_stand | 0.7291  | 0.9961  | 0.5886  | 0.8627  |
| bookshelf   | 0.3277  | 0.9113  | 0.0748  | 0.3723  |
| bathtub     | 0.7036  | 0.9796  | 0.4413  | 0.6327  |
| box         | 0.0386  | 0.6371  | 0.0095  | 0.1933  |
| counter     | 0.0574  | 0.7500  | 0.0206  | 0.2938  |
| door        | 0.0062  | 0.4059  | 0.0000  | 0.0294  |
| garbage_bin | 0.4471  | 0.9377  | 0.2795  | 0.6117  |
| lamp        | 0.4294  | 0.9252  | 0.2296  | 0.5180  |
| sink        | 0.5374  | 0.9208  | 0.1015  | 0.3321  |
| cabinet     | 0.0726  | 0.8964  | 0.0251  | 0.4301  |
| window      | 0.0003  | 0.6667  | 0.0000  | 0.0667  |
| picture     | 0.0031  | 0.4436  | 0.0000  | 0.0226  |
| blinds      | 0.0472  | 0.3182  | 0.0000  | 0.0000  |
| curtain     | 0.0235  | 0.6032  | 0.0001  | 0.0476  |
| pillow      | 0.2627  | 0.8598  | 0.0671  | 0.4124  |
| mirror      | 0.0200  | 0.4493  | 0.0000  | 0.0290  |
| books       | 0.0051  | 0.6049  | 0.0002  | 0.1605  |
| fridge      | 0.2122  | 0.9426  | 0.1322  | 0.5738  |
| tv          | 0.2617  | 0.8448  | 0.0458  | 0.3448  |
| paper       | 0.0003  | 0.0721  | 0.0000  | 0.0090  |
| towel       | 0.1284  | 0.6667  | 0.0255  | 0.1167  |
| bag         | 0.0172  | 0.7865  | 0.0059  | 0.3258  |
+-------------+---------+---------+---------+---------+
| Overall     | 0.3078  | 0.7730  | 0.1837  | 0.3994  |
+-------------+---------+---------+---------+---------+

voxel_size = .01
n_points = 100000

model = dict(
    type='MinkSingleStage3DDetector',
    voxel_size=voxel_size,
    backbone=dict(type='MinkResNet', in_channels=3, depth=34, max_channels=128, norm='batch'),
    neck=dict(
        type='TR3DNeck',
        in_channels=(64, 128, 128, 128),
        out_channels=128),
    head=dict(
        type='TR3DHead',
        in_channels=128,
        n_reg_outs=8,
        n_classes=29,
        voxel_size=voxel_size,
        assigner=dict(
            type='TR3DAssigner',
            top_pts_threshold=6,
            label2level=[1, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0]),
        bbox_loss=dict(type='RotatedIoU3DLoss', mode='diou', reduction='none')),
    train_cfg=dict(),
    test_cfg=dict(nms_pre=1000, iou_thr=.5, score_thr=.01))

optimizer = dict(type='AdamW', lr=.001, weight_decay=.0001)
optimizer_config = dict(grad_clip=dict(max_norm=10, norm_type=2))
lr_config = dict(policy='step', warmup=None, step=[8, 11])
runner = dict(type='EpochBasedRunner', max_epochs=12)
custom_hooks = [dict(type='EmptyCacheHook', after_iter=True)]

checkpoint_config = dict(interval=1, max_keep_ckpts=1)
log_config = dict(
    interval=50,
    hooks=[
        dict(type='TextLoggerHook'),
        # dict(type='TensorboardLoggerHook')
])
dist_params = dict(backend='nccl')
log_level = 'INFO'
work_dir = None
load_from = "/data/zzy/tr3d/work_dirs/tr3d_sunrgbd-3d-10class-paper/epoch_6.pth"
# load_from = None
resume_from = None
workflow = [('train', 1)]

dataset_type = 'SUNRGBDDataset'
data_root = 'data/sunrgbd/'
# class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
#                'night_stand', 'bookshelf', 'bathtub')
class_names = ('bed', 'table', 'sofa', 'chair', 'toilet', 'desk', 'dresser',
            'night_stand', 'bookshelf', 'bathtub', 'box', 'counter', 'door',
            'garbage_bin', 'lamp', 'sink',  'cabinet', 'window',
            'picture', 'blinds', 'curtain', 'pillow', 'mirror', 
            'books', 'fridge', 'tv', 'paper', 'towel', 'bag')
train_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='DEPTH',
        shift_height=False,
        use_color=True,
        load_dim=6,
        use_dim=[0, 1, 2, 3, 4, 5]),
    dict(type='LoadAnnotations3D'),
    dict(type='PointSample', num_points=n_points),
    dict(
        type='RandomFlip3D',
        sync_2d=False,
        flip_ratio_bev_horizontal=.5,
        flip_ratio_bev_vertical=.0),
    dict(
        type='GlobalRotScaleTrans',
        rot_range=[-.523599, .523599],
        scale_ratio_range=[.85, 1.15],
        translation_std=[.1, .1, .1],
        shift_height=False),
    # dict(type='NormalizePointsColor', color_mean=None),
    dict(type='DefaultFormatBundle3D', class_names=class_names),
    dict(type='Collect3D', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]
test_pipeline = [
    dict(
        type='LoadPointsFromFile',
        coord_type='DEPTH',
        shift_height=False,
        use_color=True,
        load_dim=6,
        use_dim=[0, 1, 2, 3, 4, 5]),
    dict(
        type='MultiScaleFlipAug3D',
        img_scale=(1333, 800),
        pts_scale_ratio=1,
        flip=False,
        transforms=[
            dict(type='PointSample', num_points=n_points),
            # dict(type='NormalizePointsColor', color_mean=None),
            dict(
                type='DefaultFormatBundle3D',
                class_names=class_names,
                with_label=False),
            dict(type='Collect3D', keys=['points'])
        ])
]
data = dict(
    samples_per_gpu=16,
    workers_per_gpu=4,
    train=dict(
        type='RepeatDataset',
        times=5,
        dataset=dict(
            type=dataset_type,
            modality=dict(use_camera=False, use_lidar=True),
            data_root=data_root,
            ann_file=data_root + 'sunrgbd_infos_train.pkl',
            pipeline=train_pipeline,
            filter_empty_gt=False,
            classes=class_names,
            box_type_3d='Depth')),
    val=dict(
        type=dataset_type,
        modality=dict(use_camera=False, use_lidar=True),
        data_root=data_root,
        ann_file=data_root + 'sunrgbd_infos_val.pkl',
        pipeline=test_pipeline,
        classes=class_names,
        test_mode=True,
        box_type_3d='Depth'),
    test=dict(
        type=dataset_type,
        modality=dict(use_camera=False, use_lidar=True),
        data_root=data_root,
        ann_file=data_root + 'sunrgbd_infos_val.pkl',
        pipeline=test_pipeline,
        classes=class_names,
        test_mode=True,
        box_type_3d='Depth'))

My computer unexpectedly restarted, but I am now resuming from the checkpoint, and I think it should succeed this time. Thank you very much for your enthusiastic responses. I will continue my experiments and then aim to check all the classes in ScanNet. I hope you'll be willing to answer my questions again in the near future. Thanks once again. @filaPro @guoqingyin

jiachen0212 commented 5 months ago

What's amazing is that my detection task has 24 categories. Based on the scannet 18-class config, I randomly added 3 zeros and 3 ones, and training succeeded. I want to know: does the order of these 0s and 1s matter, or can it really be random?

assigner=dict(
    type='TR3DAssigner',
    top_pts_threshold=6,
    # I just blindly added 0s and 1s -- is there really no rule about the order???
    label2level=[0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0]),

filaPro commented 5 months ago

It should work with all zeros, all ones, or all random values. But I believe the best results come if the half of your classes with the largest objects get 1 and the half with the smallest objects get 0.
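In other words, label2level[i] describes class_names[i], so the order follows the class list rather than being arbitrary. A minimal sketch of deriving it from per-class mean sizes, where mean_volume is a hypothetical dict mapping class name to average box volume (e.g. from the size-analysis loop earlier in this thread):

def make_label2level(class_names, mean_volume):
    # the half of the classes with the largest average size gets level 1,
    # the smaller half gets level 0; the order follows class_names
    ranked = sorted(class_names, key=lambda c: mean_volume[c], reverse=True)
    large = set(ranked[:len(ranked) // 2])
    return [1 if c in large else 0 for c in class_names]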