Hello! I used `train_ssd.py` before with a one-class dataset and everything worked fine. Now I have a two-class dataset and modified `train_ssd.py` this way (I also modified `get_dataset` and the `if __name__ == '__main__':` block accordingly):

1) Added the classes:
```python
net_name = '_'.join(('ssd', str(args.data_shape), args.network, 'custom'))
args.save_prefix += net_name
if args.syncbn and len(ctx) > 1:
    net = get_model(net_name, pretrained_base=True,
                    norm_layer=gluon.contrib.nn.SyncBatchNorm,
                    norm_kwargs={'num_devices': len(ctx)},
                    classes=['ambr_box', 'gorch_box'])
    async_net = get_model(net_name, pretrained_base=False)  # used by cpu worker
else:
    net = get_model(net_name, pretrained_base=True, norm_layer=gluon.nn.BatchNorm,
                    classes=['ambr_box', 'gorch_box'])
    async_net = net
```
but I still get the following error. Please help.
```
Traceback (most recent call last):
  File "train_ssd.py", line 425, in <module>
    train(net, train_data, val_data, eval_metric, ctx, args)
  File "train_ssd.py", line 310, in train
    for i, batch in enumerate(train_data):
  File "/usr/local/lib/python3.8/dist-packages/mxnet/gluon/data/dataloader.py", line 689, in __iter__
    for item in t:
  File "/usr/local/lib/python3.8/dist-packages/mxnet/gluon/data/dataloader.py", line 699, in same_process_iter
    ret = self._batchify_fn([self._dataset[idx] for idx in batch])
  File "/usr/local/lib/python3.8/dist-packages/mxnet/gluon/data/dataloader.py", line 699, in <listcomp>
    ret = self._batchify_fn([self._dataset[idx] for idx in batch])
  File "/usr/local/lib/python3.8/dist-packages/mxnet/gluon/data/dataset.py", line 219, in __getitem__
    return self._fn(*item)
  File "/usr/local/lib/python3.8/dist-packages/gluoncv/data/transforms/presets/ssd.py", line 167, in __call__
    bbox = tbbox.translate(label, x_offset=expand[0], y_offset=expand[1])
  File "/usr/local/lib/python3.8/dist-packages/gluoncv/data/transforms/bbox.py", line 160, in translate
    bbox[:, :2] += (x_offset, y_offset)
IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
```
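For anyone hitting the same trace: the failing line `bbox[:, :2] += (x_offset, y_offset)` assumes the label is a 2-D array of shape `(N, 4+)` — one row per bounding box. The error means at least one image in the dataset yields a 1-D label array (for example, a single annotation row that was flattened, or a malformed record entry). A minimal sketch in plain NumPy reproduces it; the `translate` here is a simplified stand-in for `gluoncv.data.transforms.bbox.translate`, not the real implementation:

```python
import numpy as np

def translate(bbox, x_offset=0, y_offset=0):
    """Simplified stand-in for gluoncv.data.transforms.bbox.translate:
    shifts xmin/ymin and xmax/ymax of every box by the given offsets."""
    bbox = bbox.copy()
    bbox[:, :2] += (x_offset, y_offset)   # the line that raises in the traceback
    bbox[:, 2:4] += (x_offset, y_offset)
    return bbox

# Well-formed label: 2-D, one row per box (xmin, ymin, xmax, ymax, class_id).
good_label = np.array([[10., 20., 50., 60., 0.]])
print(translate(good_label, 5, 5))        # boxes shifted by (5, 5)

# Malformed label: the same row but flattened to 1-D.
bad_label = np.array([10., 20., 50., 60., 0.])
try:
    translate(bad_label, 5, 5)
except IndexError as e:
    print(e)  # too many indices for array
```

So it is worth checking that every label your dataset returns is 2-D with shape `(num_objects, 5)` or wider, even for images that contain only a single box.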
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.