I am using CondInst for Instance Segmentation on a custom dataset.
I am also using detectron2 from the commit id mentioned in the README.
This is my error log.
Traceback (most recent call last):
File "train_net_condinst.py", line 220, in <module>
args=(args,),
File "/home/sharan/code/repos/detectron2/detectron2/engine/launch.py", line 59, in launch
daemon=False,
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 200, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes
while not context.join():
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 119, in join
raise Exception(msg)
Exception:
-- Process 2 terminated with the following error:
Traceback (most recent call last):
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 20, in _wrap
fn(i, *args)
File "/home/sharan/code/repos/detectron2/detectron2/engine/launch.py", line 94, in _distributed_worker
main_func(*args)
File "/home/sharan/code/indiscapesv2-detectron2/train_net_condinst.py", line 208, in main
return trainer.train()
File "/home/sharan/code/indiscapesv2-detectron2/train_net_condinst.py", line 90, in train
self.train_loop(self.start_iter, self.max_iter)
File "/home/sharan/code/indiscapesv2-detectron2/train_net_condinst.py", line 81, in train_loop
self.after_step()
File "/home/sharan/code/repos/detectron2/detectron2/engine/train_loop.py", line 152, in after_step
h.after_step()
File "/home/sharan/code/repos/detectron2/detectron2/engine/hooks.py", line 349, in after_step
self._do_eval()
File "/home/sharan/code/repos/detectron2/detectron2/engine/hooks.py", line 323, in _do_eval
results = self._func()
File "/home/sharan/code/repos/detectron2/detectron2/engine/defaults.py", line 351, in test_and_save_results
self._last_eval_results = self.test(self.cfg, self.model)
File "/home/sharan/code/repos/detectron2/detectron2/engine/defaults.py", line 515, in test
results_i = inference_on_dataset(model, data_loader, evaluator)
File "/home/sharan/code/repos/detectron2/detectron2/evaluation/evaluator.py", line 141, in inference_on_dataset
outputs = model(inputs)
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 511, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/sharan/code/repos/AdelaiDet/adet/modeling/condinst/condinst.py", line 96, in forward
padded_im_h, padded_im_w
File "/home/sharan/code/repos/AdelaiDet/adet/modeling/condinst/condinst.py", line 208, in postprocess
results.pred_global_masks, factor
File "/home/sharan/code/repos/AdelaiDet/adet/utils/comm.py", line 26, in aligned_bilinear
tensor = F.pad(tensor, pad=(0, 1, 0, 1), mode="replicate")
File "/home/sharan/miniconda3/envs/torch/lib/python3.7/site-packages/torch/nn/functional.py", line 3571, in _pad
return torch._C._nn.replication_pad2d(input, pad)
RuntimeError: non-empty 3D or 4D (batch mode) tensor expected for input, but got: [ torch.cuda.FloatTensor{0,1,144,336} ]
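From the last frame, the failure seems to happen when `aligned_bilinear` in `adet/utils/comm.py` receives a prediction tensor with zero instances (shape `[0, 1, 144, 336]` in the error message), which `F.pad` with `mode="replicate"` rejects. Below is a minimal sketch of what I believe triggers it; the shape is copied from the error above, and whether newer PyTorch versions still raise here is an assumption on my part.

```python
import torch
import torch.nn.functional as F

# Shape taken from the RuntimeError above: an image with zero predicted
# instances apparently produces an empty (batch size 0) mask tensor.
empty_masks = torch.zeros(0, 1, 144, 336)

# aligned_bilinear pads the tensor before upsampling. On the PyTorch
# version in my environment this raises:
#   RuntimeError: non-empty 3D or 4D (batch mode) tensor expected for input ...
F.pad(empty_masks, pad=(0, 1, 0, 1), mode="replicate")
```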
This looks like a possible duplicate of #284. How do I fix this?
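For what it's worth, the only workaround I have thought of so far is to guard the empty case before `aligned_bilinear` runs. A rough sketch is below; the wrapper name is mine, the output shape is what I expect `aligned_bilinear` to return for a non-empty input, and where exactly to hook it into CondInst's postprocess is an assumption.

```python
import torch
from adet.utils.comm import aligned_bilinear


def aligned_bilinear_or_empty(tensor: torch.Tensor, factor: int) -> torch.Tensor:
    # Hypothetical guard (my naming): if no instances were predicted, the
    # mask tensor has batch size 0 and the replication padding inside
    # aligned_bilinear fails, so return an empty upsampled tensor instead.
    if tensor.dim() == 4 and tensor.size(0) == 0:
        n, c, h, w = tensor.shape
        # Match the (factor * h, factor * w) output size I expect from
        # aligned_bilinear for a non-empty input.
        return tensor.new_empty(n, c, h * factor, w * factor)
    return aligned_bilinear(tensor, factor)
```

Is something like this reasonable, or is there a proper fix on the AdelaiDet side?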