PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.
Apache License 2.0

If I want to use Paddle Inference, do I have to install paddle-inference separately? Installing only paddlepaddle doesn't let me use Paddle Inference, does it?? #2983

Open dengxinlong opened 3 years ago

qingqing01 commented 3 years ago

@dengxinlong

For Paddle Inference's Python API, installing paddlepaddle is enough. For Paddle Inference's C++ API, you need to install the Paddle Inference C++ library.
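For reference, the Python route needs nothing beyond `pip install paddlepaddle`; a minimal configuration sketch looks like this (the file names are hypothetical and assume a model already exported for inference):

```python
# Minimal Paddle Inference (Python) setup; only the paddlepaddle wheel is needed.
# "model.pdmodel" / "model.pdiparams" are hypothetical exported-model file names.
from paddle.inference import Config, create_predictor

config = Config("model.pdmodel", "model.pdiparams")
config.disable_gpu()                  # run on CPU; call enable_use_gpu(...) for GPU
predictor = create_predictor(config)
```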

dengxinlong commented 3 years ago

> @dengxinlong
>
> For Paddle Inference's Python API, installing paddlepaddle is enough. For Paddle Inference's C++ API, you need to install the Paddle Inference C++ library.

Honestly, the documentation is unclear and somewhat disorganized. Halfway through one document you are sent to another, which in turn sends you to yet another one, recursively, with no end in sight.

qingqing01 commented 3 years ago

@dengxinlong We will improve the docs; the deployment documentation is being reworked as well.

dengxinlong commented 3 years ago

> @dengxinlong We will improve the docs; the deployment documentation is being reworked as well.

One more question: the demo given in the docs is for image classification. For object detection, is the usage flow the same??

dengxinlong commented 3 years ago

> @dengxinlong We will improve the docs; the deployment documentation is being reworked as well.

```
(test) coded@coded-desktop:~/PaddleDetection_old/cuda_linux_demo$ python model_test.py --model_file ../inferModel/ssdlite_mobilenet_v3_large_fpn/ --params_file ../inferModel/ssdlite_mobilenet_v3_large_fpn/ --img_path ../dataset/Udacity/coco/images/1478896958643831455.jpg
WARNING: AVX is not support on your machine. Hence, no_avx core will be imported, It has much worse preformance than avx core.
/home/coded/.local/virtualenvs/test/lib/python3.6/site-packages/paddle/utils/cpp_extension/extension_utils.py:461: UserWarning: Not found CUDA runtime, please use `export CUDA_HOME= XXX` to specific it.
  "Not found CUDA runtime, please use `export CUDA_HOME= XXX` to specific it."
W0513 15:13:17.540685 15737 analysis_predictor.cc:677] The one-time configuration of analysis predictor failed, which may be due to native predictor called first and its configurations taken effect.
I0513 15:13:17.540961 15737 analysis_predictor.cc:155] Profiler is deactivated, and no profiling report will be generated.
Traceback (most recent call last):
  File "model_test.py", line 63, in <module>
    predictor = create_predictor(config)
MemoryError: std::bad_alloc
```

The command I ran: `python model_test.py --model_file ../inferModel/ssdlite_mobilenet_v3_large_fpn/ --params_file ../inferModel/ssdlite_mobilenet_v3_large_fpn/ --img_path ../dataset/Udacity/coco/images/1478896958643831455.jpg`

The error is `std::bad_alloc`, meaning the required memory could not be allocated??
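One thing worth checking here: the command passes directories to `--model_file` and `--params_file`, while Paddle Inference's `Config` expects the exported `.pdmodel` and `.pdiparams` files themselves, and loading a directory as a model file could plausibly surface as an allocation error like this. A small stdlib check (the helper name is my own) catches that before the predictor is built:

```python
import os

def check_inference_args(model_file, params_file):
    """Fail fast if the --model_file/--params_file arguments are not real files."""
    problems = []
    if not os.path.isfile(model_file):
        problems.append(f"--model_file must be a file (e.g. *.pdmodel), got: {model_file}")
    if not os.path.isfile(params_file):
        problems.append(f"--params_file must be a file (e.g. *.pdiparams), got: {params_file}")
    return problems

# A directory argument (as in the failing command) is rejected with a message:
print(check_inference_args(".", "."))
```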

qingqing01 commented 3 years ago

> @dengxinlong We will improve the docs; the deployment documentation is being reworked as well.
>
> One more question: the demo given in the docs is for image classification. For object detection, is the usage flow the same??

Which document are you referring to, specifically?

qingqing01 commented 3 years ago

> @dengxinlong We will improve the docs; the deployment documentation is being reworked as well.
>
> (run log quoted above, ending in `MemoryError: std::bad_alloc`)

And which document is this from? For Python deployment with Paddle Inference, just follow https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/python
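The deploy/python route linked above is driven by an `infer.py` script; a typical invocation looks roughly like this (the paths are hypothetical, and the exact flag names should be checked against `deploy/python/infer.py` on the branch in use):

```shell
# Hypothetical paths; check the flag list in deploy/python/infer.py for your branch.
python deploy/python/infer.py \
    --model_dir=./output_inference/ssdlite_mobilenet_v3_large_fpn \
    --image_file=./demo.jpg \
    --use_gpu=True
```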

dengxinlong commented 3 years ago

> And which document is this from? For Python deployment with Paddle Inference, just follow https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/python

This is the document I was following: https://paddleinference.paddlepaddle.org.cn/demo_tutorial/cuda_jetson_demo.html (screenshot attached)

dengxinlong commented 3 years ago

> And which document is this from? For Python deployment with Paddle Inference, just follow https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/python

No, the problem is that the docs keep sending me from one document to another. I want to use TensorRT on a Jetson Xavier NX, but after several days of jumping back and forth between these documents, the problem is still unsolved. I am using the PaddleDetection release/2.0-rc branch. https://github.com/PaddlePaddle/PaddleDetection/tree/develop/deploy/python is the document you linked, and since I am running on a Jetson Xavier NX there is also this one: https://paddleinference.paddlepaddle.org.cn/demo_tutorial/cuda_jetson_demo.html
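For what it's worth, Paddle Inference exposes its TensorRT subgraph engine through the `Config` object, so a Jetson setup is a matter of configuration rather than a separate API. A hedged sketch (file names and parameter values are illustrative only; int8 additionally requires a calibration pass):

```python
# Enable the TensorRT subgraph engine on a Paddle Inference Config.
# File names and values below are illustrative; int8 needs calibration data
# when use_calib_mode=True.
from paddle.inference import Config, PrecisionType

config = Config("model.pdmodel", "model.pdiparams")  # hypothetical exported files
config.enable_use_gpu(200, 0)                        # 200 MB initial pool, GPU 0
config.enable_tensorrt_engine(
    workspace_size=1 << 30,
    max_batch_size=1,
    min_subgraph_size=3,
    precision_mode=PrecisionType.Int8,
    use_static=False,
    use_calib_mode=True,
)
```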

qingqing01 commented 3 years ago

PaddleDetection also provides a Jetson deployment example, though it targets the TX series and the static-graph version of the trained models: https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/deploy/cpp/docs/Jetson_build.md

dengxinlong commented 3 years ago

> PaddleDetection also provides a Jetson deployment example, though it targets the TX series and the static-graph version of the trained models: https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/deploy/cpp/docs/Jetson_build.md

The Python deployment described in the docs should also work for object detection on a Jetson Xavier NX, right?

(screenshot attached) Is trt_int8 not supported on PaddleDetection release/2.0-rc?

dengxinlong commented 3 years ago

> PaddleDetection also provides a Jetson deployment example, though it targets the TX series and the static-graph version of the trained models: https://github.com/PaddlePaddle/PaddleDetection/blob/develop/static/deploy/cpp/docs/Jetson_build.md

Is trt-int8 unsupported because of the PaddleDetection version I am using, or is something else wrong??