Closed kc-w closed 1 year ago
We recommend using English or English & Chinese for issues so that we can have a broader discussion.
Use the model directory instead of the model file itself. So you should pass E:\projectTest\mmdeploy\tools instead.
I created a new directory and moved the engine model into it.
Command used: image_segmentation.exe cuda E:\projectTest\mmdeploy\tools\engine D:\images\mask\img_dir\test\Image_20220626155531345.jpg
error message: loading mmdeploy_execution ... loading mmdeploy_cpu_device ... loading mmdeploy_cuda_device ... loading mmdeploy_graph ... loading mmdeploy_directory_model ... [2022-11-07 15:32:23.985] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel' loading mmdeploy_transform ... loading mmdeploy_cpu_transform_impl ... loading mmdeploy_cuda_transform_impl ... loading mmdeploy_transform_module ... loading mmdeploy_trt_net ... loading mmdeploy_net_module ... loading mmdeploy_mmcls ... loading mmdeploy_mmdet ... loading mmdeploy_mmseg ... loading mmdeploy_mmocr ... loading mmdeploy_mmedit ... loading mmdeploy_mmpose ... loading mmdeploy_mmrotate ... loading mmdeploy_mmaction ... [2022-11-07 15:32:24.125] [mmdeploy] [error] [model.cpp:15] load model failed. Its file path is 'E:\projectTest\mmdeploy\tools\engine' [2022-11-07 15:32:24.127] [mmdeploy] [error] [model.cpp:21] failed to create model: unknown (6) failed to create segmentor, code: 6
E:\projectTest\mmdeploy\build_tensorrt\bin\Release\image_segmentation.exe (process 176060) exited with code 1. To automatically close the console when debugging stops, enable Tools -> Options -> Debugging -> Automatically close the console when debugging stops. Press any key to close this window.
If you want to run inference on the model with the SDK, you should add --dump-info when converting the model. Please provide your conversion command.
import argparse
import logging
import os

parser = argparse.ArgumentParser(description='Export model to backends.')
parser.add_argument(
    '--deploy_cfg',
    default='E:/projectTest/mmdeploy/configs/mmseg/segmentation_tensorrt_dynamic-512x1024-2048x2048.py',
    help='deploy config path')
parser.add_argument(
    '--model_cfg',
    default='E:/projectTest/mmsegmentation/configs/pspnet/MyPsp.py',
    help='model config path')
parser.add_argument(
    '--checkpoint',
    default='E:/projectTest/mmsegmentation/result/iter_200.pth',
    help='model checkpoint path')
parser.add_argument(
    '--img',
    default='D:/images/mask/img_dir/train/Image_20220626151509704.jpg',
    help='image used to convert model')
parser.add_argument(
    '--test-img', default=None, help='image used to test model')
parser.add_argument(
    '--work-dir',
    default=os.getcwd(),
    help='the dir to save logs and models')
parser.add_argument(
    '--calib-dataset-cfg',
    help='dataset config path used to calibrate in int8 mode. If not '
    'specified, it will use "val" dataset in model config instead.',
    default=None)
parser.add_argument(
    '--device', help='device used for conversion', default='cuda')
parser.add_argument(
    '--log-level',
    help='set log level',
    default='INFO',
    choices=list(logging._nameToLevel.keys()))
parser.add_argument(
    '--show', action='store_true', help='Show detection outputs')
parser.add_argument(
    '--dump-info', action='store_true', help='Output information for SDK')
parser.add_argument(
    '--quant-image-dir',
    default=None,
    help='Image directory for quantize model.')
parser.add_argument(
    '--quant', action='store_true', help='Quantize model to low bit.')
parser.add_argument(
    '--uri',
    default='192.168.1.1:60000',
    help='Remote ipv4:port or ipv6:port for inference on edge device.')
args = parser.parse_args()
According to your modification of deploy.py, you should set the default value of --dump-info to True:

parser.add_argument(
    '--dump-info', action='store_true', default=True,
    help='Output information for SDK')
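The difference is easy to see in isolation. A minimal sketch of argparse's store_true behaviour (a standalone toy parser, not the real deploy.py one):

```python
import argparse

# With action='store_true' the flag defaults to False, so --dump-info
# must be passed on the command line for the SDK files to be dumped.
parser = argparse.ArgumentParser()
parser.add_argument('--dump-info', action='store_true')
assert parser.parse_args([]).dump_info is False
assert parser.parse_args(['--dump-info']).dump_info is True

# Setting default=True makes the dump happen even when the flag is omitted.
parser = argparse.ArgumentParser()
parser.add_argument('--dump-info', action='store_true', default=True)
assert parser.parse_args([]).dump_info is True
```

This is why simply running deploy.py with its hard-coded defaults, without passing --dump-info, produces an engine file but no SDK JSON files.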
After converting the model, the structure of the model directory should be:
-- end2end.onnx
-- end2end.engine
-- deploy.json
-- detail.json
-- pipeline.json
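To catch this kind of mistake early, the model directory can be sanity-checked before calling the segmentor. A small sketch (check_sdk_model_dir is a hypothetical helper; the required-file names are taken from the listing above):

```python
import os

# Files the SDK's DirectoryModel loader expects in the converted-model
# directory (names taken from the listing above).
REQUIRED_FILES = ('deploy.json', 'pipeline.json', 'end2end.engine')

def check_sdk_model_dir(model_dir):
    """Return the required files that are missing from model_dir."""
    if not os.path.isdir(model_dir):
        # A file path (e.g. end2end.engine itself) was passed instead
        # of the directory: report everything as missing.
        return list(REQUIRED_FILES)
    return [name for name in REQUIRED_FILES
            if not os.path.isfile(os.path.join(model_dir, name))]

# e.g. check_sdk_model_dir(r'E:\projectTest\mmdeploy\tools\engine')
```

An empty result means the directory is at least structurally complete; a non-empty result usually means --dump-info was not used during conversion or the wrong path was passed.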
There is no problem now, thanks!
Checklist
Describe the bug
I want to use TensorRT as the inference backend.
The following error is reported: loading mmdeploy_execution ... loading mmdeploy_cpu_device ... loading mmdeploy_cuda_device ... loading mmdeploy_graph ... loading mmdeploy_directory_model ... [2022-11-07 15:14:31.143] [mmdeploy] [info] [model.cpp:98] Register 'DirectoryModel' loading mmdeploy_transform ... loading mmdeploy_cpu_transform_impl ... loading mmdeploy_cuda_transform_impl ... loading mmdeploy_transform_module ... loading mmdeploy_trt_net ... loading mmdeploy_net_module ... loading mmdeploy_mmcls ... loading mmdeploy_mmdet ... loading mmdeploy_mmseg ... loading mmdeploy_mmocr ... loading mmdeploy_mmedit ... loading mmdeploy_mmpose ... loading mmdeploy_mmrotate ... loading mmdeploy_mmaction ... [2022-11-07 15:14:31.287] [mmdeploy] [error] [model.cpp:45] no ModelImpl can read model E:\projectTest\mmdeploy\tools\end2end.engine [2022-11-07 15:14:31.287] [mmdeploy] [error] [model.cpp:15] load model failed. Its file path is 'E:\projectTest\mmdeploy\tools\end2end.engine' [2022-11-07 15:14:31.290] [mmdeploy] [error] [model.cpp:21] failed to create model: not supported (2) failed to create segmentor, code: 6
E:\projectTest\mmdeploy\build_tensorrt\bin\Release\image_segmentation.exe (process 173116) exited with code 1. To automatically close the console when debugging stops, enable Tools -> Options -> Debugging -> Automatically close the console when debugging stops. Press any key to close this window.
Reproduction
Compile the inference demo with the following command: cmake .. -G "Visual Studio 16 2019" -A x64 -T v142 -DMMDEPLOY_BUILD_SDK=ON -DMMDEPLOY_BUILD_EXAMPLES=ON -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON -DMMDEPLOY_TARGET_DEVICES="cuda" -DMMDEPLOY_TARGET_BACKENDS="trt" -Dpplcv_DIR="E:\projectTest\ppl.cv\pplcv-build\install\lib\cmake\ppl" -DTENSORRT_DIR="D:\TensorRT-8.4.1.5" -DCUDNN_DIR="D:\cudnn-8.4.1.50\lib"
Command used to run the inference program: image_segmentation.exe cuda E:\projectTest\mmdeploy\tools\end2end.engine D:\images\mask\img_dir\test\Image_20220626155531345.jpg
Environment
Error traceback
No response