RuntimeError: cannot reshape tensor of 0 elements into shape [0, 16, -1] because the unspecified dimension size -1 can be any value and is ambiguous #387
```
Traceback (most recent call last):
  File "tools/train.py", line 182, in <module>
    main()
  File "tools/train.py", line 178, in main
    meta=meta)
  File "/home/featurize/work/mmtracking/mmtrack/apis/train.py", line 175, in train_model
    runner.run(data_loaders, cfg.workflow, cfg.total_epochs)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 127, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 50, in train
    self.run_iter(data_batch, train_mode=True, **kwargs)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 30, in run_iter
    **kwargs)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/parallel/data_parallel.py", line 75, in train_step
    return self.module.train_step(*inputs[0], **kwargs[0])
  File "/home/featurize/work/mmtracking/mmtrack/models/vid/base.py", line 265, in train_step
    losses = self(**data)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/featurize/work/mmtracking/mmtrack/models/vid/base.py", line 194, in forward
    **kwargs)
  File "/home/featurize/work/mmtracking/mmtrack/models/vid/selsa.py", line 166, in forward_train
    gt_labels, gt_bboxes_ignore, gt_masks, **kwargs)
  File "/home/featurize/work/mmtracking/mmtrack/models/roi_heads/selsa_roi_head.py", line 66, in forward_train
    gt_bboxes, gt_labels)
  File "/home/featurize/work/mmtracking/mmtrack/models/roi_heads/selsa_roi_head.py", line 104, in _bbox_forward_train
    bbox_results = self._bbox_forward(x, ref_x, rois, ref_rois)
  File "/home/featurize/work/mmtracking/mmtrack/models/roi_heads/selsa_roi_head.py", line 93, in _bbox_forward
    cls_score, bbox_pred = self.bbox_head(bbox_feats, ref_bbox_feats)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/featurize/work/mmtracking/mmtrack/models/roi_heads/bbox_heads/selsa_bbox_head.py", line 57, in forward
    x = x + self.aggregator[i](x, ref_x)
  File "/environment/miniconda3/envs/open-mmlab/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/featurize/work/mmtracking/mmtrack/models/aggregators/selsa_aggregator.py", line 62, in forward
    -1).permute(1, 2, 0)
RuntimeError: cannot reshape tensor of 0 elements into shape [0, 16, -1] because the unspecified dimension size -1 can be any value and is ambiguous
```
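For context on the error message itself: `view`/`reshape` infers a `-1` dimension as the tensor's element count divided by the product of the known dimensions. Here the tensor reaching the SELSA aggregator has 0 elements and the known dimensions include a 0, so the division is 0/0 and any value for `-1` would satisfy it, which is exactly what the message calls ambiguous. A simplified, pure-Python sketch of this inference rule (not PyTorch's actual implementation, and assuming at most one `-1`):

```python
from math import prod

def infer_neg_one(numel, shape):
    """Return `shape` with a single -1 replaced by its inferred size,
    mimicking (in simplified form) how reshape resolves -1."""
    known = prod(d for d in shape if d != -1)
    if -1 not in shape:
        if known != numel:
            raise ValueError("shape does not match number of elements")
        return list(shape)
    if known == 0:
        # 0 * anything == 0, so every candidate size for -1 "works":
        # this is the ambiguous case that raises the RuntimeError above.
        raise ValueError(
            f"cannot reshape tensor of {numel} elements into shape "
            f"{list(shape)}: the -1 dimension is ambiguous")
    if numel % known:
        raise ValueError("known dimensions do not divide element count")
    return [numel // known if d == -1 else d for d in shape]
```

For example, `infer_neg_one(64, (2, 16, -1))` resolves to `[2, 16, 2]`, while `infer_neg_one(0, (0, 16, -1))` raises, mirroring the failure in the traceback.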
I am training a video object detection model on ILSVRC2017, and running the training command throws the error above.
The environment dependencies were installed exactly as described in the official documentation,
and the final verification step ran successfully.
For the dataset, I downloaded all of the ILSVRC2017 packages,
plus the four txt files from the Lists folder referenced in the documentation,
and converted everything to CocoVID format with the conversion script provided there.
I am not sure whether this is a dataset problem. Which files exactly need to be downloaded to run the VID task?
If it is not a dataset problem, please try to reproduce and fix this error.
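One plausible cause worth checking (an assumption on my part, not confirmed from the traceback alone): a sampled frame may contribute zero RoIs because it has no ground-truth boxes after conversion, leaving the aggregator with an empty feature tensor. A minimal sketch of a pre-check that drops unannotated images from a COCO/CocoVID-style annotation dict; the key names (`images`, `annotations`, `image_id`) follow the COCO convention and may need adapting to the converted ILSVRC2017 file:

```python
def filter_empty_images(coco):
    """Keep only images that have at least one annotation, so every
    training sample can yield at least one RoI."""
    annotated = {ann["image_id"] for ann in coco["annotations"]}
    kept = [img for img in coco["images"] if img["id"] in annotated]
    return {**coco, "images": kept}

# Toy example: image 2 has no annotations and is dropped.
data = {
    "images": [{"id": 1}, {"id": 2}],
    "annotations": [{"id": 10, "image_id": 1, "bbox": [0, 0, 5, 5]}],
}
filtered = filter_empty_images(data)
```

If the dataset class in your mmtracking version supports an option like `filter_empty_gt=True` (as in MMDetection's datasets), enabling it in the config would serve the same purpose without touching the annotation file.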
Finally, my conda environment is attached as a yml file, exported with

```
conda env export > my-environment.yml
```

You can recreate it with

```
conda env create -f my-environment.yml
```

Thanks.