czHappy closed this issue 3 years ago
@yarkable Could you please help me to solve the problem? Thank you very much!
It seems that you have some problems while importing modules; maybe you can check whether you have an __init__.py in every package.
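One quick way to confirm every package directory actually has an __init__.py is a short walk over the source tree. This is only a sketch; the function name and the "src" root are mine, not from the repo:

```python
# Walk a source tree and report any directory that contains .py files
# but no __init__.py, which would break package imports under Python 2
# and some Python 3 tooling.
import os

def find_missing_init(root):
    missing = []
    for dirpath, dirnames, filenames in os.walk(root):
        if "__pycache__" in dirpath:
            continue
        has_py = any(f.endswith(".py") and f != "__init__.py" for f in filenames)
        if has_py and "__init__.py" not in filenames:
            missing.append(dirpath)
    return missing

if __name__ == "__main__":
    print(find_missing_init("src"))  # empty list means every package is covered
```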
This is the output of the 'tree' command in the project root directory:
[sk49@g33 ~/new_workspace/cz/MODNet-master]$ tree
.
|-- demo
| |-- image_matting
| | `-- colab
| | |-- inference.py
| | `-- README.md
| `-- video_matting
| |-- custom
| | |-- README.md
| | |-- requirements.txt
| | `-- run.py
| `-- webcam
| |-- README.md
| |-- requirements.txt
| `-- run.py
|-- doc
| `-- gif
| |-- homepage_demo.gif
| |-- image_matting_demo.gif
| `-- video_matting_demo.gif
|-- onnx
| |-- export_onnx.py
| |-- inference_onnx.py
| |-- __init__.py
| |-- modnet_onnx.py
| |-- README.md
| `-- requirements.txt
|-- pretrained
| |-- mobilenetv2_human_seg.ckpt
| |-- modnet_photographic_portrait_matting.ckpt
| |-- modnet_webcam_portrait_matting.ckpt
| `-- README.md
|-- README.md
|-- src
| |-- __init__.py
| |-- models
| | |-- backbones
| | | |-- __init__.py
| | | |-- mobilenetv2.py
| | | |-- __pycache__
| | | | |-- __init__.cpython-36.pyc
| | | | |-- __init__.cpython-37.pyc
| | | | |-- mobilenetv2.cpython-36.pyc
| | | | |-- mobilenetv2.cpython-37.pyc
| | | | |-- wrapper.cpython-36.pyc
| | | | `-- wrapper.cpython-37.pyc
| | | `-- wrapper.py
| | |-- __init__.py
| | |-- modnet.py
| | `-- __pycache__
| | |-- __init__.cpython-36.pyc
| | `-- __init__.cpython-37.pyc
| |-- __pycache__
| | |-- __init__.cpython-36.pyc
| | `-- __init__.cpython-37.pyc
| `-- trainer.py
`-- torchscript
|-- export_torchscript.py
|-- __init__.py
|-- __init__.pyc
|-- modnet_torchscript.py
|-- __pycache__
| |-- export_torchscript.cpython-36.pyc
| |-- export_torchscript.cpython-37.pyc
| |-- __init__.cpython-36.pyc
| |-- __init__.cpython-37.pyc
| |-- modnet_torchscript.cpython-36.pyc
| `-- modnet_torchscript.cpython-37.pyc
`-- README.md
I cloned the whole project and downloaded the models to the pretrained folder. Then I ran this command in the project root directory:
python3 -m torchscript.export_torchscript \
--ckpt-path=pretrained/modnet_photographic_portrait_matting.ckpt \
--output-path=pretrained/modnet_photographic_portrait_matting.torchscript
So I think the __init__.py files are OK. Could you tell me your environment details, or point out other possible errors in my steps?
By the way, I tried to use modnet.pt directly to build an executable, but it fails to load the model. The libtorch version is libtorch-cxx11-abi-shared-with-deps-1.3.1, the CPU version of libtorch for Linux.
CMakeLists.txt
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)
find_package(Torch REQUIRED)
add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
example-app.cpp:
#include <torch/script.h> // One-stop header.
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }
  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    // Printing e.what() shows *why* loading failed (e.g. a version mismatch).
    std::cerr << "error loading the model\n" << e.what() << '\n';
    return -1;
  }
  std::cout << "ok\n";
  return 0;
}
It builds successfully but fails at runtime:
command: ./example-app ../modnet.pt
result: error loading the model.
Could you please tell us the required environment and how to use modnet.pt in C++?
Hi, I just cloned the project and downloaded the official pretrained model, then I typed the command
python3 -m torchscript.export_torchscript \
--ckpt-path=pretrained/modnet_photographic_portrait_matting.ckpt \
--output-path=pretrained/modnet_photographic_portrait_matting.torchscript
It works successfully; maybe you should clone it once again and see whether the error still occurs? Btw, I just converted it to a TorchScript version for iOS devices, and we have no problem using modnet.pt.
What is your torch version and libtorch version, please?
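For what it's worth, a TorchScript .pt file is a zip archive that records a serialization format version, and a libtorch older than the exporting torch may refuse to load it, which is exactly the symptom above. A small sketch to peek at that version without torch installed (the helper name is mine; the "version" entry is part of the archive layout):

```python
# Read the serialization format version stored inside a TorchScript archive.
# If this number is newer than what your libtorch supports, torch::jit::load
# will fail with "error loading the model".
import zipfile

def torchscript_format_version(path):
    with zipfile.ZipFile(path) as zf:
        # The archive has a top-level folder (e.g. "archive/") containing
        # a "version" entry that holds the format number.
        for name in zf.namelist():
            if name == "version" or name.endswith("/version"):
                return zf.read(name).decode().strip()
    return None

if __name__ == "__main__":
    print(torchscript_format_version("modnet.pt"))  # hypothetical path
```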
Btw, here is my code for classification using C++.
#include <torch/script.h>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

using namespace cv;
using namespace std;

int main(int argc, char* argv[]) {
  if (argc != 2) {
    std::cerr << "no module found!\n";
    return -1;
  }
  torch::jit::script::Module module;
  try {
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the module\n";
    return -1;
  }
  std::cout << "ok!\n";
  /////////////////////////////////////////////////
  string path = "/pytorch-deployment/assets/3.jpg";
  Mat img = imread(path), img_float;
  cvtColor(img, img, COLOR_BGR2RGB);  // CV_BGR2RGB in OpenCV < 4
  bitwise_not(img, img);
  // Keep a single channel from the RGB image.
  vector<Mat> mv;
  split(img, mv);
  img = mv[1];
  // Scale pixel values to [0, 1] and resize to the model's input size.
  img.convertTo(img_float, CV_32F, 1.0 / 255);
  resize(img_float, img_float, Size(28, 28));
  // Wrap the HWC float buffer as a tensor, then permute to NCHW.
  auto img_tensor = torch::from_blob(img_float.data, {1, 28, 28, 1}, at::kFloat).permute({0, 3, 1, 2});
  auto img_var = torch::autograd::make_variable(img_tensor, false);
  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(img_var);
  auto output = module.forward(inputs).toTensor();
  cout << output << endl;
  auto index = output.argmax(1);
  cout << "The predicted class is: " << index << endl;
  return 0;
}
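The from_blob + permute pair in that snippet converts an HWC image buffer into the NCHW layout the model expects. The same reshaping can be illustrated in NumPy (purely illustrative, not code from this thread):

```python
# Illustrate the HWC -> NCHW conversion that
# torch::from_blob(...).permute({0, 3, 1, 2}) performs on a 28x28
# single-channel image.
import numpy as np

img = np.random.rand(28, 28, 1).astype(np.float32)  # H, W, C
batched = img[None, ...]                             # 1, H, W, C
nchw = batched.transpose(0, 3, 1, 2)                 # 1, C, H, W
print(nchw.shape)  # (1, 1, 28, 28)
```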
You can re-clone the project and generate a TorchScript model to see if your code is correct.
torch 1.6.0 and I don't have a libtorch
I tried again with torch version 1.3.1 (GPU), and the same errors occurred. Maybe I should install torch 1.6.0 and try again. And if you don't have libtorch, how can you use 'torch::jit::script::Module' in your cpp file? Which headers do you include before the main function?
@czHappy Lol, I used to use libtorch. But in this project, I just export it to TorchScript version and give it to the other engineer.🤣
@yarkable I used torch 1.6.0 (CPU) and modified your export script to successfully produce a CPU TorchScript model. This shows that the stated requirement (torch >= 1.2.0) is not accurate: I tried torch 1.2.0, 1.3.1, and 1.4.0, but none of them worked. Now I find torch 1.6.0 is OK. Then I used C++ to load the .pt file and tested the forward function, and it really works! Anyway, thanks for your excellent work and patient replies! The details are as follows:
CMakeLists.txt
cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example-app)

find_package(Torch REQUIRED)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
set_property(TARGET example-app PROPERTY CXX_STANDARD 14)

# The following code block is suggested to be used on Windows.
# According to https://github.com/pytorch/pytorch/issues/25457,
# the DLLs need to be copied to avoid memory errors.
if (MSVC)
  file(GLOB TORCH_DLLS "${TORCH_INSTALL_PREFIX}/lib/*.dll")
  add_custom_command(TARGET example-app
                     POST_BUILD
                     COMMAND ${CMAKE_COMMAND} -E copy_if_different
                     ${TORCH_DLLS}
                     $<TARGET_FILE_DIR:example-app>)
endif (MSVC)
example-app.cpp
#include <torch/script.h> // One-stop header.
#include <vector>
#include <iostream>
#include <memory>

int main(int argc, const char* argv[]) {
  if (argc != 2) {
    std::cerr << "usage: example-app <path-to-exported-script-module>\n";
    return -1;
  }
  torch::jit::script::Module module;
  try {
    // Deserialize the ScriptModule from a file using torch::jit::load().
    module = torch::jit::load(argv[1]);
  }
  catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    return -1;
  }
  std::cout << "ok\n";

  // Create a vector of inputs.
  std::vector<torch::jit::IValue> inputs;
  inputs.push_back(torch::ones({1, 3, 640, 480}));

  // Execute the model and turn its output into a tensor.
  at::Tensor output = module.forward(inputs).toTensor();
  //std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/5) << '\n';
  std::cout << output[0][0][0][0] << std::endl;
  return 0;
}
Just like the Minimal Example in the official documentation (https://pytorch.org/cppdocs/installing.html). My environment: Ubuntu 18.04 LTS, GCC 7.5.0, CMake 3.10.2, torch 1.6.0 CPU, libtorch 1.6.0 CPU. I think a GPU torch version is OK, too.
Good job
When I export the TorchScript version of MODNet, an error occurs:
Environment: CentOS 7, torch 1.3.1 GPU