Hello, thanks for your excellent work! I ran into a problem while following Step 2 of your instructions, "Run training": `python train_dop.py -m C3D` (Train and Test). The full log is below, thank you.
root@interactive21748:/opt/data/private/ramp_data_sample/Radar-multiple-perspective-object-detection-main# python train_dop.py -m C3D
No data augmentation
Number of sequences to train: 1
Training files length: 111
Window size: 16
Number of epoches: 100
Batch size: 3
Number of iterations in each epoch: 37
Cyclic learning rate
epoch 1, iter 1: loss: 11009.76367188 | load time: 1.1002 | backward time: 5.7529
epoch 1, iter 2: loss: 10623.83984375 | load time: 1.8162 | backward time: 1.4582
epoch 1, iter 3: loss: 10959.95410156 | load time: 2.3343 | backward time: 1.4593
epoch 1, iter 4: loss: 10492.48828125 | load time: 0.8416 | backward time: 1.4686
epoch 1, iter 5: loss: 10667.32421875 | load time: 1.1322 | backward time: 1.5663
/opt/conda/lib/python3.8/site-packages/numpy/core/_asarray.py:83: VisibleDeprecationWarning: Creating an ndarray from ragged nested sequences (which is a list-or-tuple of lists-or-tuples-or ndarrays with different lengths or shapes) is deprecated. If you meant to do this, you must specify 'dtype=object' when creating the ndarray
  return array(a, dtype, copy=False, order=order)
Traceback (most recent call last):
  File "train_dop.py", line 204, in <module>
    for iter, loaded_data in enumerate(dataloader):
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 652, in __next__
    data = self._next_data()
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 692, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
    return self.collate_fn(data)
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 175, in default_collate
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 175, in <listcomp>
    return [default_collate(samples) for samples in transposed]  # Backwards compatibility.
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 178, in default_collate
    return elem_type([default_collate(samples) for samples in transposed])
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 178, in <listcomp>
    return elem_type([default_collate(samples) for samples in transposed])
  File "/opt/conda/lib/python3.8/site-packages/torch/utils/data/_utils/collate.py", line 171, in default_collate
    raise RuntimeError('each element in list of batch should be of equal size')
RuntimeError: each element in list of batch should be of equal size
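In case it helps to pinpoint: this RuntimeError is what `default_collate` raises when some list-valued field of the dataset samples has a different length in different samples of the batch, so they cannot be stacked. A minimal, hypothetical reproduction (toy data, not the actual dataset) that triggers the same error:

```python
# Minimal repro of the collate error (hypothetical data): default_collate
# rejects a batch whose samples are sequences of unequal length.
from torch.utils.data._utils.collate import default_collate

sample_a = [1, 2]  # e.g. a frame with two annotated objects
sample_b = [3]     # e.g. a frame with only one

try:
    default_collate([sample_a, sample_b])
except RuntimeError as e:
    print(e)  # each element in list of batch should be of equal size
```

So I suspect one of the per-frame annotation lists in my data sample has a varying number of elements across frames.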