Vegeta2020 / SE-SSD

SE-SSD: Self-Ensembling Single-Stage Object Detector From Point Cloud, CVPR 2021.
Apache License 2.0

Validation loss #74

Closed · grablerm closed this 2 years ago

grablerm commented 2 years ago

Hi, thanks for publishing your code. I have a question regarding the validation loss: is it calculated during the validation stage of the training process? I tried setting the parameter `return_loss=True` in https://github.com/Vegeta2020/SE-SSD/blob/master/tools/test.py#L79, but I got the following error:

Traceback (most recent call last):
  File "tools/train.py", line 118, in <module>
    main()
  File "tools/train.py", line 115, in main
    train_detector(model, datasets, cfg, distributed=distributed, validate=args.validate, logger=logger,)
  File "/src/det3d/torchie/apis/train.py", line 348, in train_detector
    trainer.run(data_loaders, cfg.workflow, cfg.total_epochs, local_rank=cfg.local_rank)
  File "/src/det3d/torchie/trainer/trainer.py", line 418, in run
    epoch_runner(data_loaders[1], **kwargs)
  File "/src/det3d/torchie/trainer/trainer.py", line 326, in val
    outputs = self.batch_processor_inline(self.model, data_batch, train_mode=False, **kwargs)
  File "/src/det3d/torchie/trainer/trainer.py", line 255, in batch_processor_inline
    return model(example, return_loss=True)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/src/det3d/models/detectors/voxelnet_pillar.py", line 39, in forward
    return self.bbox_head.loss(example, preds)
  File "/src/det3d/models/bbox_heads/mg_head_ciassd.py", line 579, in loss
    labels = example["labels"][task_id]  # cls_labels: [batch_size, 70400], elem in [-1, 0, 1].
KeyError: 'labels'

Is there a way to get the validation loss?

Vegeta2020 commented 2 years ago

Sure, but you may have to read labels for your val samples (as your traceback indicates), which means revising the val dataset configuration in the config file; you may also need to make some modifications to the formatting & preprocessing in the dataset/pipeline.
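For reference, a minimal sketch of what that revision could look like. SE-SSD builds on Det3D-style configs, so the step names (`LoadPointCloudAnnotations`, `AssignTarget`, `Reformat`) and variables (`dataset_type`, `data_root`, `class_names`, `voxel_generator`, `val_preprocessor`, `train_cfg`) are assumptions based on typical Det3D config files and may need adjusting for your checkout. The key point is that `example["labels"]` is produced by the target-assignment step, which the default test/val pipeline skips, hence the `KeyError` above.

```python
# Sketch of a config revision to compute validation loss during training.
# All names below are assumed from Det3D-style configs; verify against
# your own config file before using.

# 1) Run a "val" epoch after each training epoch, so that trainer.run()
#    dispatches data_loaders[1] to the val epoch runner.
workflow = [("train", 1), ("val", 1)]

# 2) Make the val pipeline build the same targets as the train pipeline.
#    AssignTarget is the step that creates example["labels"].
val_pipeline = [
    dict(type="LoadPointCloudFromFile", dataset=dataset_type),
    dict(type="LoadPointCloudAnnotations", with_bbox=True),  # read GT boxes/labels
    dict(type="Preprocess", cfg=val_preprocessor),
    dict(type="Voxelization", cfg=voxel_generator),
    dict(type="AssignTarget", cfg=train_cfg["assigner"]),    # builds "labels"
    dict(type="Reformat"),
]

# 3) Point the val dataset at this pipeline and keep it out of test mode,
#    so annotations are actually loaded.
data = dict(
    val=dict(
        type=dataset_type,
        root_path=data_root,
        info_path=data_root + "/kitti_infos_val.pkl",
        class_names=class_names,
        pipeline=val_pipeline,
        test_mode=False,
    ),
)
```

With the val dataset producing targets, the `("val", 1)` workflow entry makes the trainer's val loop call the model with `return_loss=True` (the `batch_processor_inline` path visible in your traceback), so the validation loss gets computed and logged.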