PaddlePaddle / Paddle

PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training, and cross-platform deployment, for deep learning and machine learning)
http://www.paddlepaddle.org/
Apache License 2.0

Where is HW Benchmark ? #41888

Open tb5874 opened 2 years ago

tb5874 commented 2 years ago

Problem description / Please describe your issue

Hello. I ran inference tests on a desktop PC and an NVIDIA Xavier NX, but I can't evaluate the results because I have no reference numbers to compare against.

The hardware specifications, Paddle install options, and inference results are below. Are the inference results appropriate?

Could you share average (expected) results? Thank you.

/***/ [ Desktop ]
Ubuntu 18.04
CPU : AMD Ryzen 5 5600G with Radeon Graphics 3.90 GHz
RAM : 32GB
GPU : RTX3060

-Install PP-

```shell
conda create -n PPDet python=3.9
conda activate PPDet
conda install paddlepaddle-gpu==2.2.2 cudatoolkit=11.2 -c https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/Paddle/ -c conda-forge
```

-Install PP-Detection-

```shell
conda activate PPDet
cd ~ && mkdir -p PPDet_Git
cd PPDet_Git && git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection
python3 -m pip install cython
python3 -m pip install numpy
python3 -m pip install -r requirements.txt
python3 setup.py install
```

-Model Export-

```shell
python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/ppyolo/ppyolo_r50vd_dcn_2x_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolo_r50vd_dcn_2x_coco.pdparams

python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams

python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/picodet/picodet_xs_320_coco_lcnet.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams
```

/***/ [ Jetson Xavier NX ]
Ubuntu 18.04

-Install PP-

```shell
cd ~ && git clone https://github.com/PaddlePaddle/Paddle.git
cd Paddle
git checkout release/2.2
sudo mkdir -p build_cuda && cd build_cuda
```

```shell
sudo cmake .. \
  -DWITH_NV_JETSON=ON \
  -DWITH_GPU=ON \
  -DCMAKE_CUDA_COMPILER=/usr/local/cuda-10.2/bin/nvcc \
  -DCMAKE_CUDA_ARCHITECTURES=72 \
  -DCUDA_ARCH_NAME=All \
  -DWITH_NCCL=OFF \
  -DWITH_MKL=OFF \
  -DWITH_MKLDNN=OFF \
  -DWITH_PYTHON=ON \
  -DPY_VERSION=3.6 \
  -DWITH_XBYAK=OFF \
  -DON_INFER=ON \
  -DWITH_TESTING=OFF \
  -DWITH_CONTRIB=OFF \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_CXX_FLAGS='-Wno-error -w'
```

-Install PP-Detection-

```shell
cd ~ && mkdir -p PPDet_Git
cd PPDet_Git && git clone https://github.com/PaddlePaddle/PaddleDetection.git
cd PaddleDetection
python3 -m pip install -r requirements.txt
sudo python3 setup.py install
```

-NVIDIA Xavier NX mode-

```shell
sudo nvpmodel -m 0
```

-Model Export-

```shell
python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/ppyolo/ppyolo_r50vd_dcn_2x_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyolo_r50vd_dcn_2x_coco.pdparams

python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/ppyoloe/ppyoloe_crn_s_300e_coco.yml -o weights=https://paddledet.bj.bcebos.com/models/ppyoloe_crn_s_300e_coco.pdparams

python3 tools/export_model.py -c /home/k/PPDet_Git/PaddleDetection/configs/picodet/picodet_xs_320_coco_lcnet.yml -o weights=https://paddledet.bj.bcebos.com/models/picodet_xs_320_coco_lcnet.pdparams
```

/***/ [ Inference Result ]

[ Model 1, ppyolo_r50vd_dcn_2x_coco ]

```shell
python3 deploy/python/infer.py --model_dir=./output_inference/ppyolo_r50vd_dcn_2x_coco --image_file=./demo/000000014439_640x640.jpg --device=
```

[ Model 2, ppyoloe_crn_s_300e_coco]

[ Model 3, picodet_xs_320_coco_lcnet]
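When comparing latency numbers across machines like this, a single run is rarely meaningful; reporting a mean and standard deviation over many timed runs (after untimed warmup calls) makes the comparison fair. A minimal sketch of such a harness — the `predict` callable is a hypothetical stand-in for a Paddle predictor invocation, not part of the Paddle API:

```python
import time
import statistics

def benchmark(predict, n_warmup=10, n_runs=50):
    """Time `predict` n_runs times, after n_warmup untimed warmup calls."""
    for _ in range(n_warmup):
        predict()  # warmup calls: first runs may include lazy init / kernel setup
    times_ms = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        predict()
        times_ms.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(times_ms), statistics.stdev(times_ms)

# usage with a dummy predictor that takes roughly 5 ms per call
mean_ms, std_ms = benchmark(lambda: time.sleep(0.005), n_warmup=3, n_runs=20)
print(f"latency: {mean_ms:.1f} ms +/- {std_ms:.1f} ms")
```

Reporting numbers in this form (mean, spread, warmup count, run count) would make results from the desktop and the Xavier NX directly comparable.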

paddle-bot-old[bot] commented 2 years ago

Hi! We've received your issue; please be patient while we respond. We will arrange for technicians to answer your questions as soon as possible. Please double-check that you have provided a clear problem description, reproduction code, environment & version information, and error messages. You may also look through the official API documentation, the FAQ, the issue history, and the AI community to find an answer. Have a nice day!

liyancas commented 2 years ago

Thanks. Do you include the warmup time in your benchmarks? The GPU warmup time is usually much higher than the CPU's. Furthermore, preprocessing runs on the CPU in both the GPU and CPU pipelines, so the GPU preprocessing time should not be much higher than the CPU's.
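To illustrate how much the first warmup iterations can skew a small benchmark, one can compare the mean latency with and without the first few runs. The numbers below are purely illustrative, not Paddle measurements:

```python
import statistics

# hypothetical per-run latencies in ms: the first two runs include GPU warmup
latencies_ms = [220.0, 180.0, 21.0, 20.5, 21.2, 20.8, 21.1, 20.9]

mean_all = statistics.mean(latencies_ms)         # warmup included
mean_steady = statistics.mean(latencies_ms[2:])  # warmup excluded

print(f"with warmup:    {mean_all:.1f} ms")      # -> with warmup:    65.7 ms
print(f"without warmup: {mean_steady:.1f} ms")   # -> without warmup: 20.9 ms
```

With only eight runs, two warmup iterations inflate the reported mean by roughly 3x, which is why benchmark scripts discard them before averaging.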

tb5874 commented 2 years ago

@liyancas Thank you for the answer. From it, I understand the GPU warmup and preprocessing issue.

So now my questions are:
[1] Is the 'Inference_time(ms)' reasonable for my hardware specification with the '--device=CPU' option?
[2] Should I ask for benchmark data from hardware similar to mine?

I can't find a Paddle HW benchmark page. Thank you.