When I was running "rel_danfeiX_FPN50_nm.yaml" on the VG dataset, an error was raised at `assert l_batch == 1`. The full output is as follows:
"
INFO:maskrcnn_benchmark.inference:Start evaluation on visualgenome/test_danfeiX_relation.yaml dataset(26446 images).
0%| | 0/6612 [00:00<?, ?it/s]/home/jinyuda/.local/lib/python3.7/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, kwargs) # type: ignore[attr-defined]
0%| | 0/6612 [00:02<?, ?it/s]
Traceback (most recent call last):
File "/home/jinyuda/scene_graph_benchmark/maskrcnn_benchmark/engine/inference.py", line 38, in compute_on_dataset
output = model(images.to(device), targets)
File "/home/jinyuda/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, *kwargs)
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/scene_parser.py", line 319, in forward
x_pairs, prediction_pairs, relation_losses = self.relation_head(features, predictions, targets)
File "/home/jinyuda/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(input, kwargs)
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/relation_head/relation_head.py", line 211, in forward
= self.rel_predictor(features, proposals, proposal_pairs)
File "/home/jinyuda/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, kwargs)
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/relation_head/neural_motif/neuralmotif.py", line 151, in forward
boxes_all
File "/home/jinyuda/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, *kwargs)
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/relation_head/neural_motif/context_encoder.py", line 280, in forward
boxes_per_cls,
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/relation_head/neural_motif/context_encoder.py", line 230, in obj_context
boxes_for_nms=boxes_per_cls[perm] if boxes_per_cls is not None else None,
File "/home/jinyuda/.local/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(input, kwargs)
File "/home/jinyuda/scene_graph_benchmark/scene_graph_benchmark/relation_head/neural_motif/decoder_rnn.py", line 302, in forward
assert l_batch == 1
AssertionError
python-BaseException
"
I have no idea what is going on. I have tried the Neural Motif model on both VG and OI, but both runs hit an error inside the `inference` function. Any suggestions would be really appreciated.
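For context on what I think the assertion is checking (this is my guess, not confirmed from the repo): in RNN decoders built on `torch.nn.utils.rnn.PackedSequence`, the per-timestep batch count shrinks as shorter sequences finish, and an `l_batch == 1` style check usually expects exactly one sequence to survive to the last timestep. A minimal sketch of how that invariant can break when two sequences share the maximum length:

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Three padded sequences, feature dim 4, batch_first layout.
seqs = torch.zeros(3, 3, 4)

# Lengths 3, 2, 1: batch size per timestep shrinks to 1 at the end.
packed = pack_padded_sequence(seqs, torch.tensor([3, 2, 1]), batch_first=True)
print(packed.batch_sizes)  # tensor([3, 2, 1]) -> last step has exactly 1 sequence

# Lengths 3, 3, 1: two sequences tie for the max length, so the
# final timestep still contains 2 sequences and the check would fail.
packed2 = pack_padded_sequence(seqs, torch.tensor([3, 3, 1]), batch_first=True)
print(packed2.batch_sizes)  # tensor([3, 2, 2]) -> batch_sizes[-1] == 2, not 1
```

If that reading is right, the assertion failing during evaluation could mean the decoder received a batch where the longest object sequence is not unique (or where the inputs were not sorted/packed the way training expects), which is why it only shows up at inference time for me.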