I want to use the network in C++, so I tried to convert it with ONNX and TorchScript (JIT), but the conversion does not work. To narrow things down I ran the export from Python instead of C++, with PyTorch 1.4.0, and I still hit many problems, as follows:
python demo_hjimi.py --dataset hjimi
Constructed model.
Loaded checkpoint /home/hll/code/votenet_0713/votenet/log_hjimi/checkpoint.tar (epoch: 4)
Loaded point cloud data: /home/hll/code/votenet_0713/votenet/demo_files/pc_luoshuang80cm700lux100000634_20k.ply
/home/hll/code/votenet_0713/votenet/models/pointnet_util_votenet.py:101: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
npoint = torch.tensor(npoint)
/home/hll/code/votenet_0713/votenet/models/pointnet_util_votenet.py:142: TracerWarning: There are 2 live references to the data region being modified when tracing in-place operator index_put_. This might cause the trace to be incorrect, because all other views that also reference this data will not reflect this change in the trace! On the other hand, if all other views use the same memory chunk, but are disjoint (e.g. are outputs of torch.split), this might still be safe.
group_idx[mask] = group_first[mask]
/home/hll/code/votenet_0713/votenet/models/pointnet_util_votenet.py:311: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if int(S) == 1:
/home/hll/code/votenet_0713/votenet/models/proposal_module.py:63: TracerWarning: torch.from_numpy results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
mean_size_arr_torch = torch.from_numpy(mean_size_arr.astype(np.float32))
/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/symbolic_opset9.py:1951: UserWarning: Exporting aten::index operator of advanced indexing in opset 9 is achieved by combination of multiple ONNX operators, including Reshape, Transpose, Concat, and Gather. If indices include negative values, the exported graph will produce incorrect results.
"If indices include negative values, the exported graph will produce incorrect results.")
/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py:655: UserWarning: ONNX export failed on ATen operator getitem_ because torch.onnx.symbolic_opset9.getitem_ does not exist
.format(op_name, opset_version, op_name))
Traceback (most recent call last):
File "demo_hjimi.py", line 126, in
torch.onnx.export(net, example, "modelhjimi.onnx", verbose=True, keep_initializers_as_inputs=True)# operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/init.py", line 148, in export
strip_doc_string, dynamic_axes, keep_initializers_as_inputs)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py", line 66, in export
dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py", line 416, in _export
fixed_batch_size=fixed_batch_size)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py", line 296, in _model_to_graph
fixed_batch_size=fixed_batch_size, params_dict=params_dict)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py", line 135, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/init.py", line 179, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/utils.py", line 656, in _run_symbolic_function
op_fn = sym_registry.get_registered_op(op_name, '', opset_version)
File "/home/hll/miniconda3/envs/pytorch14/lib/python3.7/site-packages/torch/onnx/symbolic_registry.py", line 91, in get_registered_op
return _registry[(domain, version)][opname]
KeyError: '_getitem'
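
If I read the warnings above correctly, the export fails on the advanced indexing / masked in-place assignment in pointnet_util_votenet.py (group_idx[mask] = group_first[mask]), which the tracer records as index_put_ and which opset 9 cannot handle. Below is a minimal sketch of the out-of-place rewrite I am considering; the function name, argument names, and shapes are my own placeholders, not the original code:

import torch

def fill_empty_neighbors(group_idx, group_first, N):
    # group_idx:   (B, S, nsample) neighbor indices; entries equal to N mean "no point found"
    # group_first: (B, S, nsample) the first valid neighbor, repeated along the last dim
    # The original in-place form, group_idx[mask] = group_first[mask], traces to
    # aten::index_put_ / advanced indexing, which opset 9 cannot fully export.
    mask = group_idx == N
    # torch.where is out-of-place and exports to the ONNX Where op (available in opset 9).
    return torch.where(mask, group_first, group_idx)

# quick check against the in-place version
if __name__ == "__main__":
    B, S, nsample, N = 2, 4, 3, 16
    group_idx = torch.randint(0, N + 1, (B, S, nsample))
    group_first = group_idx[:, :, 0:1].repeat(1, 1, nsample)
    ref = group_idx.clone()
    ref[ref == N] = group_first[ref == N]
    assert torch.equal(fill_empty_neighbors(group_idx, group_first, N), ref)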
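
For the TracerWarnings about torch.tensor / torch.from_numpy results being baked into the trace as constants, I assume the fix is to build those tensors once in __init__ and register them as buffers instead of creating them inside forward. A rough sketch of that idea (the class and the forward computation are placeholders, not the actual proposal_module code):

import numpy as np
import torch
import torch.nn as nn

class SizeDecodeSketch(nn.Module):
    # Placeholder module: only the buffer-registration idea matters here.
    def __init__(self, mean_size_arr):
        super().__init__()
        # Register the constant array once, so the traced graph carries it as a
        # buffer instead of a torch.from_numpy call inside forward.
        self.register_buffer(
            "mean_size_arr", torch.from_numpy(mean_size_arr.astype(np.float32))
        )

    def forward(self, size_residuals_normalized):
        # (B, num_proposal, num_size_cluster, 3) scaled by (num_size_cluster, 3)
        return size_residuals_normalized * self.mean_size_arr.unsqueeze(0).unsqueeze(0)

if __name__ == "__main__":
    m = SizeDecodeSketch(np.random.rand(10, 3))
    out = m(torch.randn(2, 256, 10, 3))
    print(out.shape)  # torch.Size([2, 256, 10, 3])

Would changes along these lines be enough to get the model through torch.onnx.export, or is there a known workaround for the _getitem KeyError?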