facebookresearch / 3detr

Code & Models for 3DETR - an End-to-end transformer model for 3D object detection
Apache License 2.0

Problems with training on my own datasets #28

Open liudaxia96 opened 2 years ago

liudaxia96 commented 2 years ago

Dear author: At present, I have a dataset for 3D point cloud detection called "center", which does not need classification; it only needs to identify object locations. I generated training and testing files, including xxbbox.npy, xxpc.npz, and votes.npz, following the format of the SUN RGB-D data. The modifications in the dataset configuration file are as follows: image

The parameters set during training are as follows:

```shell
--dataset_name center \
--max_epoch 90 \
--nqueries 128 \
--base_lr 7e-4 \
--matcher_giou_cost 3 \
--matcher_cls_cost 1 \
--matcher_center_cost 5 \
--matcher_objectness_cost 5 \
--loss_giou_weight 0 \
--loss_no_object_weight 0.1 \
--save_separate_checkpoint_every_epoch -1 \
--checkpoint_dir outputs/certer_90
```

The following problem occurred during training: image

Do you know the possible causes of the problem? I look forward to your answer.

liudaxia96 commented 2 years ago

Parameter setting supplement: max_epoch: 50, dataset_num_workers: 1, batchsize_per_gpu: 1

imisra commented 2 years ago

num_semcls needs to be the number of classes you have (1). It is set to 10 in your code snippet.
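For a single-class setup, the dataset config change might look like the following sketch. This is modeled loosely on the structure of the SUN RGB-D config; the class and field names here are illustrative and may differ from your actual config file:

```python
class CenterDatasetConfig:
    """Hypothetical dataset config for a one-class 'center' dataset."""

    def __init__(self):
        # One foreground class, so num_semcls is 1 (not the SUN RGB-D value of 10).
        self.num_semcls = 1
        # Only relevant if your boxes are rotated; see the angle discussion below.
        self.num_angle_bin = 12
        # Class name <-> index mappings for the single class.
        self.type2class = {"center": 0}
        self.class2type = {0: "center"}
```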

liudaxia96 commented 2 years ago

Yes! I also noticed this problem. What does the num_angle_bin parameter represent?

liudaxia96 commented 2 years ago

I have preliminarily trained for 90 epochs with a small amount of data, including 32 training sets and 8 test sets. The result is very poor: the loss is about 25. The accuracy and recall when the threshold is 0.25 and 0.5 are as follows: image

If I want to improve the accuracy and recall, what parameters need to be adjusted, apart from increasing the amount of data in the training and validation sets? Is there anything else I need to pay attention to when training on my own dataset?

imisra commented 2 years ago

num_angle_bin is used for datasets like SUN RGB-D when the boxes are rotated. For ScanNet, the boxes are axis-aligned, so the angle part is not used. So, if your boxes are not rotated, I would suggest using the ScanNet settings.
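The idea behind angle bins can be sketched as follows: a continuous heading angle is split into a classification target (which bin it falls into) plus a small regression residual. This is an illustrative sketch of the technique, not 3DETR's exact code:

```python
import math

def angle2class(angle, num_angle_bin=12):
    """Map a continuous heading angle (radians) to a bin index and a residual."""
    two_pi = 2 * math.pi
    angle = angle % two_pi
    bin_size = two_pi / num_angle_bin
    bin_id = int(angle / bin_size)                         # which angular sector
    residual = angle - (bin_id * bin_size + bin_size / 2)  # offset from bin center
    return bin_id, residual

def class2angle(bin_id, residual, num_angle_bin=12):
    """Inverse mapping: recover the continuous angle from (bin, residual)."""
    bin_size = 2 * math.pi / num_angle_bin
    return bin_id * bin_size + bin_size / 2 + residual
```

With axis-aligned boxes (ScanNet-style), every box has heading angle 0, so the bins carry no information and the angle losses can be skipped.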

When you say 32/8 sets do you mean samples? So your training set has 32 point clouds each with box annotations? In this case, you should be able to overfit quite well on the training set at least. The mAP and Recall on the training set should be pretty high. In the above screenshot, are you showing the test set AP?

To improve performance, I would first debug which loss is highest: is the problem in predicting the class or the location of the bounding box? If it is the location, you can debug whether it's the center position, the box dimensions, or (if applicable) the box rotation. The other parameter that might be worth changing, since you only have 1 object class, is https://github.com/facebookresearch/3detr/blob/main/main.py#L96, which balances the background/foreground loss weight. Increasing or decreasing it might help, depending on how many boxes you have per point cloud.
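The "find the highest loss" step can be as simple as sorting the per-term loss values that the training loop prints. The term names and values below are illustrative, not necessarily 3DETR's exact log keys:

```python
# Hypothetical per-term losses copied from a training log; names are illustrative.
losses = {
    "loss_sem_cls": 1.2,
    "loss_center": 9.7,
    "loss_size": 3.4,
    "loss_angle_cls": 0.0,  # zero when boxes are axis-aligned
}

# Sort descending to see which component dominates the total loss.
for name, value in sorted(losses.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.2f}")
```

If one term dwarfs the others (here the hypothetical center loss), that points to where to focus: data normalization, label units, or loss weights for that component.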

liudaxia96 commented 2 years ago

Thank you very much for your patience. I'll try to revise the code according to your suggestions and look forward to a good result.

liudaxia96 commented 2 years ago

Hello, author. The following problem always occurs when I test the model. Do you know the reason?

image

imisra commented 2 years ago

Hi @liudaxia96

I haven't seen this error. It seems to be raised in tensorboardX. Could you try running the code without tensorboard logging?

madinwei commented 1 year ago

Hello @imisra @liudaxia96. First, thanks to the author for this awesome work. If you don't mind, could you guide me on how to use my own data, both with one class and with multiple classes? I have prepared my data in the SUN RGB-D format and have used it with VoteNet. I want to use the 3DETR model on my data and then build an inference function for real-time use, to test whether 3DETR performs better in real time.

thank you in advance.

LUJUNYIhhh commented 1 year ago

> I have preliminarily trained 90 epochs with a small amount of data, including 32 training sets and 8 test sets. The result is very poor: loss is about 25. The accuracy and recall rate when the threshold is 0.25 and 0.5 are as follows: image
>
> If I want to improve the accuracy and recall rate, in addition to increasing the amount of data in the training and verification set, what parameters need to be adjusted? Or is there anything I need to pay attention to in training my own dataset?

Hi, I have also come across the same problem. Have you solved it? I am very worried about it. Thanks in advance.