Compiling bonnetal under PyTorch v1.4 gives the following error:
Errors << bonnetal_classification_lib:make /home/vdhiman/wrk/orcvio-frontend/logs/bonnetal_classification_lib/build.make.008.log
/home/vdhiman/wrk/orcvio-frontend/src/bonnetal/deploy/src/classification/lib/src/netPytorch.cpp: In constructor ‘bonnetal::classification::NetPytorch::NetPytorch(const string&)’:
/home/vdhiman/wrk/orcvio-frontend/src/bonnetal/deploy/src/classification/lib/src/netPytorch.cpp:21:13: error: no match for ‘operator=’ (operand types are ‘std::shared_ptr<torch::jit::script::Module>’ and ‘torch::jit::script::Module’)
  _module = torch::jit::load(_model_path + "/model.pytorch", torch::kCUDA);
            ^
In file included from /usr/include/c++/5/memory:82:0,
                 from /home/vdhiman/wrk/orcvio-frontend/devel/include/c10/core/Allocator.h:4,
                 from /home/vdhiman/wrk/orcvio-frontend/devel/include/ATen/ATen.h:3,
                 from /home/vdhiman/wrk/orcvio-frontend/devel/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/vdhiman/wrk/orcvio-frontend/devel/include/torch/script.h:3,
                 from /home/vdhiman/wrk/orcvio-frontend/src/bonnetal/deploy/src/classification/lib/include/netPytorch.hpp:7,
                 from /home/vdhiman/wrk/orcvio-frontend/src/bonnetal/deploy/src/classification/lib/src/netPytorch.cpp:6:
/usr/include/c++/5/bits/shared_ptr.h:271:19: note: candidate: std::shared_ptr<_Tp>& std::shared_ptr<_Tp>::operator=(const std::shared_ptr<_Tp>&) [with _Tp = torch::jit::script::Module]
      shared_ptr& operator=(const shared_ptr&) noexcept = default;
                  ^
...
I added a couple of wrapper functions to fix the error for both older and newer PyTorch.
Thanks! I have some more changes for 1.4 (ONNX stuff on the Python side, plus a newer version of TensorRT for both sides), so I will push them here and then we can make another release.
PyTorch removed the use of std::shared_ptr<script::Module> in this PR: https://github.com/pytorch/pytorch/pull/21934
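For reference, below is a minimal sketch of what such a compatibility wrapper could look like. It is not the exact code from the fix: the BONNETAL_TORCH_OLD_API macro and the load_module helper are hypothetical names, and the build system would have to define the macro based on the detected PyTorch version.

```cpp
// Hypothetical sketch of a load wrapper that compiles against both the old
// and the new libtorch API. BONNETAL_TORCH_OLD_API is an assumed macro that
// the build system would define when an older PyTorch (< 1.2) is detected;
// it is not part of bonnetal or libtorch.
#include <memory>
#include <string>
#include <torch/script.h>

namespace bonnetal {
namespace classification {

#ifdef BONNETAL_TORCH_OLD_API
// Old API: torch::jit::load returns std::shared_ptr<torch::jit::script::Module>,
// so it can be handed back directly.
inline std::shared_ptr<torch::jit::script::Module> load_module(
    const std::string& path, const torch::Device& device) {
  return torch::jit::load(path, device);
}
#else
// New API (PyTorch >= 1.2): torch::jit::load returns the Module by value,
// so we wrap it in a shared_ptr to keep the rest of the code unchanged.
inline std::shared_ptr<torch::jit::script::Module> load_module(
    const std::string& path, const torch::Device& device) {
  return std::make_shared<torch::jit::script::Module>(
      torch::jit::load(path, device));
}
#endif

}  // namespace classification
}  // namespace bonnetal

// Usage in the constructor, instead of calling torch::jit::load() directly:
//   _module = load_module(_model_path + "/model.pytorch", torch::kCUDA);
```

The idea is that _module stays a std::shared_ptr<torch::jit::script::Module> in both cases, so only the load call has to change between PyTorch versions.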