Open xhpk opened 4 years ago
I also have this problem, can I ask you how to deal with it? Thanks.
Sometimes I also get this error:
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data-output/SSD-master/ssd/data/datasets/voc.py", line 47, in __getitem__
    boxes, labels = self.target_transform(boxes, labels)
  File "/data-output/SSD-master/ssd/data/transforms/target_transform.py", line 21, in __call__
    self.corner_form_priors, self.iou_threshold)
  File "/data-output/SSD-master/ssd/utils/box_utils.py", line 89, in assign_priors
    best_target_per_prior, best_target_per_prior_index = ious.max(1)
RuntimeError: cannot perform reduction function max on tensor with no elements because the operation does not have an identity
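For context, this RuntimeError fires whenever a `.max()` reduction runs over an empty dimension: in `assign_priors`, an image with no ground-truth boxes produces an `ious` tensor with a zero-sized target dimension. A minimal sketch of a guard (the `safe_assign` helper is a hypothetical name, not part of the SSD repo; the usual fix is to drop un-annotated samples in the Dataset itself):

```python
import torch

def safe_assign(ious):
    """Guarded version of the `ious.max(1)` matching step.

    `ious` has shape (num_priors, num_targets). When an image has no
    annotations, num_targets == 0 and reducing over dim 1 raises the
    RuntimeError above (exact message varies by PyTorch version).
    """
    if ious.size(1) == 0:
        # No ground-truth boxes: skip matching for this sample.
        return None
    return ious.max(1)

# An un-annotated image yields an empty IoU matrix:
print(safe_assign(torch.zeros((8732, 0))))  # None -> skip / re-sample
```

This only papers over the symptom at the matching step; checking for empty annotations when loading each sample avoids feeding such images to the loss at all.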
I used this method to solve the problem, and my project now trains successfully.
Hello, when I train on my dataset, it always raises this error:
Traceback (most recent call last):
  File "train.py", line 114, in <module>
    main()
  File "train.py", line 105, in main
    model = train(cfg, args)
  File "train.py", line 44, in train
    model = do_train(cfg, model, train_loader, optimizer, scheduler, checkpointer, device, arguments, args)
  File "/data-output/SSD-master/ssd/engine/trainer.py", line 76, in do_train
    for iteration, (images, targets, _) in enumerate(data_loader, start_iter):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 363, in __next__
    data = self._next_data()
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 971, in _next_data
    return self._process_data(data)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1014, in _process_data
    data.reraise()
  File "/opt/conda/lib/python3.7/site-packages/torch/_utils.py", line 395, in reraise
    raise self.exc_type(msg)
TypeError: Caught TypeError in DataLoader worker process 2.
Original Traceback (most recent call last):
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 185, in _worker_loop
    data = fetcher.fetch(index)
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/opt/conda/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/data-output/SSD-master/ssd/data/datasets/voc.py", line 46, in __getitem__
    image, boxes, labels = self.transform(image, boxes, labels)
  File "/data-output/SSD-master/ssd/data/transforms/transforms.py", line 75, in __call__
    img, boxes, labels = t(img, boxes, labels)
TypeError: cannot unpack non-iterable NoneType object
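For anyone hitting the same TypeError: the unpack at transforms.py line 75 fails because one transform in the pipeline returned None instead of the (img, boxes, labels) triple, most often a custom transform whose `__call__` is missing its return statement. A sketch of how this arises and how to surface the culprit (`BrokenTransform` and `apply_transforms` are hypothetical names for illustration, not from the repo):

```python
class BrokenTransform:
    """A custom augmentation whose __call__ forgets to return its result."""
    def __call__(self, img, boxes, labels):
        img = img  # ...some processing here...
        # BUG: missing `return img, boxes, labels` -> caller receives None

def apply_transforms(transforms, img, boxes, labels):
    """Run a transform pipeline, failing fast when a step returns None."""
    for t in transforms:
        out = t(img, boxes, labels)
        if out is None:
            # Name the offending transform instead of failing on the unpack.
            raise TypeError(f"{type(t).__name__}.__call__ returned None")
        img, boxes, labels = out
    return img, boxes, labels
```

With a plain `img, boxes, labels = t(img, boxes, labels)`, the broken transform above reproduces "cannot unpack non-iterable NoneType object"; the explicit check at least reports which transform is at fault.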