PaddlePaddle / PaddleX

All-in-One Development Tool Based on PaddlePaddle (PaddlePaddle low-code development tool)
Apache License 2.0

Models trained with PaddleX cannot be deployed on Jetson via PaddlePaddle's Paddle Inference API #597

Open Wall-ee opened 3 years ago

Wall-ee commented 3 years ago

A model trained with PaddleX cannot be deployed on Jetson via PaddlePaddle's Paddle Inference API.

Following the deployment method described at https://aistudio.baidu.com/aistudio/projectdetail/735759?_=1613720233023&channelType=0&channel=0, I have already set up the complete environment on a Jetson Nano.

I exported the PaddleX-trained model as an inference model.

When I then call it through the PaddlePaddle 1.8 inference Python API, I get a C++ error: model need 3 feed got 1.

How do I run a PaddleX model through Paddle Inference?

jiangjiajun commented 3 years ago

What model did you train here?

Also, if PaddlePaddle is already installed on the Jetson, you can simply install PaddleX on top of it and use the PaddleX API to load the model and run prediction.

Wall-ee commented 3 years ago

> What model did you train here?
> Also, if PaddlePaddle is already installed on the Jetson, you can simply install PaddleX on top of it and use the PaddleX API to load the model and run prediction.

1. The model I trained with PaddleX is a FasterRCNN model.
2. The awkward part of the PaddlePaddle inference API is figuring out the shapes and meanings of the input variables. Through get_input_tensor? Or something else? The PaddlePaddle docs are not very detailed here either; their example uses manually constructed fake input, which is not very representative. What get_input_names returns for me is this: ['image', 'im_info', 'im_shape']

I then tried to call the PaddleX-trained FasterRCNN model by hand. I read the image with cv2, but I have no idea how to feed the second input, 'im_info'. The third one, 'im_shape', I know is the image's shape.
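For reference, here is a minimal sketch of how those three feeds are usually laid out for a static-graph FasterRCNN. The preprocessing values (target/max size, ImageNet mean/std) and the exact semantics of im_info ([resized_h, resized_w, scale]) and im_shape ([orig_h, orig_w, 1]) are assumptions based on PaddleDetection-style models, not taken from this thread; check your model's export config:

```python
import numpy as np

def build_faster_rcnn_feeds(img, target_size=800, max_size=1333):
    """Sketch of the three feeds a static-graph FasterRCNN typically expects.

    img: HxWx3 uint8 array (e.g. from cv2.imread). im_info/im_shape semantics
    below are an assumption -- verify against your exported model.
    """
    orig_h, orig_w = img.shape[:2]
    # Scale so the short side reaches target_size without the long side
    # exceeding max_size.
    scale = min(target_size / min(orig_h, orig_w),
                max_size / max(orig_h, orig_w))
    new_h, new_w = int(round(orig_h * scale)), int(round(orig_w * scale))

    # Nearest-neighbour resize in pure numpy (use cv2.resize in real code).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, orig_h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, orig_w - 1)
    resized = img[rows][:, cols].astype('float32')

    # Normalize with ImageNet stats, then HWC -> CHW and add a batch dim.
    mean = np.array([0.485, 0.456, 0.406], 'float32') * 255
    std = np.array([0.229, 0.224, 0.225], 'float32') * 255
    image = ((resized - mean) / std).transpose(2, 0, 1)[np.newaxis]

    im_info = np.array([[new_h, new_w, scale]], 'float32')   # resized size + scale
    im_shape = np.array([[orig_h, orig_w, 1.0]], 'float32')  # original size
    return {'image': image, 'im_info': im_info, 'im_shape': im_shape}
```

With the raw Paddle Inference API each of these arrays would be copied into the tensor returned by get_input_tensor(name); the PaddleX Predictor does all of this internally, which is why the high-level API is the recommended path.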

FlyingQianMM commented 3 years ago

PaddleX already provides a Python API on top of Paddle Inference, and it works the same way on every platform. With Paddle already installed on the Jetson, just use the PaddleX API directly.

Documentation: https://paddlex.readthedocs.io/zh_CN/develop/deploy/server/python.html
Code: https://github.com/PaddlePaddle/PaddleX/blob/develop/paddlex/deploy.py

Wall-ee commented 3 years ago

Using the method from that documentation, I get a C++ error with the same message as when I call the PaddlePaddle inference Python API directly. The main problem seems to be the input data format.

Exception has occurred: EnforceNotMet

The code:

import paddlex as pdx
predictor = pdx.deploy.Predictor('./inference_model_output', use_gpu=True, gpu_id=0,
                                 use_mkl=False, mkl_thread_num=4, use_trt=True,
                                 use_glog=False, memory_optimize=True,
                                 max_trt_batch_size=1)
image_list = ['aqm.jpg']
result = predictor.batch_predict(image_list=image_list)
print(result)

The error:

Exception has occurred: EnforceNotMet

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)


Wall-ee commented 3 years ago

Found one detail: if I turn use_trt off, it works. So the problem narrows down to how TensorRT should be configured on the Jetson Nano.
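As a stopgap while the TensorRT build issue is being sorted out, predictor creation can be wrapped in a TRT-first, fallback-second pattern. A minimal sketch follows; the factory argument stands in for pdx.deploy.Predictor (the assumption being that it raises when use_trt=True on a Paddle build without TRT support), and the retry logic, not the paddlex call itself, is the point:

```python
def create_predictor_with_fallback(factory, **kwargs):
    """Try to build a predictor with TensorRT enabled; if the inference
    library was built without the tensorrt_subgraph_pass, fall back to
    plain GPU/CPU inference instead of crashing.

    Returns (predictor, trt_enabled).
    """
    try:
        return factory(use_trt=True, **kwargs), True
    except Exception:
        # e.g. "Pass tensorrt_subgraph_pass has not been registered"
        return factory(use_trt=False, **kwargs), False
```

With paddlex installed this might be used as, e.g., create_predictor_with_fallback(lambda **kw: pdx.deploy.Predictor('./inference_model_output', use_gpu=True, **kw)).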

Wall-ee commented 3 years ago

I debugged the problem and have basically located it, in

/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170

Line 170 falls in this code:

  mutable bool applied_{false};
  std::string type_;
  std::unordered_set<std::string> required_pass_attrs_;
  std::unordered_set<std::string> default_pass_attrs_;
  std::unordered_set<std::string> required_graph_attrs_;
  std::map<std::string, boost::any> attrs_;
  std::map<std::string, std::function<void(void)>> attr_dels_;
};

Here, the required_graph_attrs_ that the C++ side needs takes a string.

In the Python API, line 119 of deploy.py has:

min_subgraph_size=3,

So should this be changed to a string? Passing in a string value?

But after I changed it to '3' myself, I get this error:

Exception has occurred: TypeError
enable_tensorrt_engine(): incompatible function arguments. The following argument types are supported:
    1. (self: paddle.fluid.core_noavx.AnalysisConfig, workspace_size: int=1048576, max_batch_size: int=1, min_subgraph_size: int=3, precision_mode: paddle.fluid.core_noavx.AnalysisConfig.Precision=Precision.Float32, use_static: bool=False, use_calib_mode: bool=True) -> None

Invoked with: <paddle.fluid.core_noavx.AnalysisConfig object at 0x7f3c5947d8>; kwargs: workspace_size=1073741824, max_batch_size=1, min_subgraph_size='5', precision_mode=Precision.Float32, use_static=True, use_calib_mode=False
  File "/root/safehatDet/testDet.py", line 2, in <module>
    predictor = pdx.deploy.Predictor('./inference_model_output',use_gpu=True,gpu_id=0,use_mkl=False,mkl_thread_num=4,use_trt=True,use_glog=False,memory_optimize=T

It reports a TypeError. I'll dig into it further.
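On the min_subgraph_size point: the TypeError above actually confirms that the original int was correct. The pybind signature declares min_subgraph_size: int=3, so passing '5' as a string is rejected before the config ever reaches C++ (the string element type of required_graph_attrs_ in pass.h is the attribute's *name*, not its value). A small sketch that validates TRT settings against that signature before handing them to enable_tensorrt_engine; the type table is copied from the TypeError message above, while the helper itself is purely illustrative:

```python
# Types taken from the enable_tensorrt_engine signature shown in the
# TypeError message above; this helper is only an illustration.
TRT_ARG_TYPES = {
    'workspace_size': int,
    'max_batch_size': int,
    'min_subgraph_size': int,
    'use_static': bool,
    'use_calib_mode': bool,
}

def check_trt_kwargs(**kwargs):
    """Raise early, with a clear message, instead of letting pybind11
    emit an 'incompatible function arguments' TypeError."""
    for name, value in kwargs.items():
        expected = TRT_ARG_TYPES.get(name)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(
                f'{name} must be {expected.__name__}, '
                f'got {type(value).__name__}: {value!r}')
    return kwargs
```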

FlyingQianMM commented 3 years ago

Was the Paddle you compiled yourself built with TRT support?

Wall-ee commented 3 years ago

I tried two approaches:
1. The prebuilt package mentioned in https://aistudio.baidu.com/aistudio/projectdetail/735759?channelType=0&channel=0
2. Compiling it myself as described in https://aistudio.baidu.com/aistudio/projectdetail/969585?channelType=0&channel=0
Both show the problem above.

FlyingQianMM commented 3 years ago

Issue https://github.com/PaddlePaddle/Paddle/issues/27402 walks through the build process on JetPack 4.3 / Jetson TX2; please follow that build process. Then, using the verification steps given at the end of that issue, first confirm that network construction, prediction, and TensorRT all work correctly.

Wall-ee commented 3 years ago

I ran the script you mentioned and all TRT tests pass:

(the test method from issue PaddlePaddle/Paddle#27402)

But use_trt still errors out.

Test command and result:

~/Paddle/python# python3 paddle/fluid/tests/unittests/ir/inference/test_conv_elementwise_add_fuse_pass.py 
W0224 12:16:57.021066  6009 device_context.cc:237] Please NOTE: device: 0, CUDA Capability: 53, Driver API Version: 10.0, Runtime API Version: 10.0
W0224 12:16:58.103266  6009 device_context.cc:245] device: 0, cuDNN Version: 7.6.
I0224 12:17:38.407675  6009 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0224 12:17:38.408592  6009 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0224 12:17:38.408638  6009 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0224 12:17:38.408670  6009 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0224 12:17:38.408761  6009 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0224 12:17:38.408788  6009 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0224 12:17:38.408810  6009 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0224 12:17:38.408831  6009 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0224 12:17:38.420987  6009 graph_pattern_detector.cc:101] ---  detected 1 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0224 12:17:38.423090  6009 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0224 12:17:38.426034  6009 analysis_predictor.cc:462] ======= optimize end =======
I0224 12:17:38.426112  6009 naive_executor.cc:105] ---  skip [feed], feed -> data
I0224 12:17:38.426245  6009 naive_executor.cc:105] ---  skip [conv2d_0.tmp_1], fetch -> fetch
.
----------------------------------------------------------------------
Ran 1 test in 63.160s

OK

Wall-ee commented 3 years ago

I rebuilt and reinstalled OpenCV from source, including removing the bundled build that was not compiled with CUDA. Calling with use_trt enabled still fails:

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

The current system state:
CUDA 10.0.326
OpenCV 4.5.1-d (compiled with CUDA: YES)
TensorRT 6.0.1.10
VPI 0.1.0
VisionWorks 1.6.0.500n
Vulkan 1.1.70
cuDNN 7.6.3.28

FlyingQianMM commented 3 years ago

Check whether the TRT test code runs correctly: https://github.com/PaddlePaddle/Paddle/blob/develop/python/paddle/fluid/tests/unittests/ir/inference/test_trt_subgraph_pass.py

Wall-ee commented 3 years ago

The test result is:

W0301 15:39:45.823802  8956 device_context.cc:237] Please NOTE: device: 0, CUDA Capability: 53, Driver API Version: 10.0, Runtime API Version: 10.0
W0301 15:39:45.835938  8956 device_context.cc:245] device: 0, cuDNN Version: 7.6.
I0301 15:40:29.999703  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:40:30.000650  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:40:30.000694  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:40:30.000726  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:40:30.000752  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:40:30.000777  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:40:30.000799  8956 analysis_predictor.cc:848]  - Version incompatible (1) relu
W0301 15:40:30.000818  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:40:30.292323  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:40:30.539770  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:40:30.539875  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:40:30.540066  8956 naive_executor.cc:105] ---  skip [batch_norm_0.tmp_2], fetch -> fetch
I0301 15:40:35.537453  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:40:35.538580  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:40:35.538632  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:40:35.538668  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:40:35.538697  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:40:35.538719  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:40:35.538743  8956 analysis_predictor.cc:848]  - Version incompatible (1) relu
W0301 15:40:35.538763  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:40:35.538838  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:09.910609  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:09.911566  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:09.911607  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:09.911640  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:09.911666  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:09.911689  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:09.911715  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:09.911736  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:10.101516  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:10.104372  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:10.104450  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:10.104583  8956 naive_executor.cc:105] ---  skip [batch_norm_1.tmp_2], fetch -> fetch
I0301 15:41:10.108968  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.109846  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.109887  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.109920  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:10.109944  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.109967  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.109990  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:10.110010  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:10.110067  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:10.207973  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.208899  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.208945  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.208977  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:10.209003  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.209026  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.209048  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:10.209069  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:10.214998  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:10.218006  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:10.218092  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:10.218225  8956 naive_executor.cc:105] ---  skip [batch_norm_2.tmp_2], fetch -> fetch
I0301 15:41:10.222826  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.223938  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.223997  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.224038  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:10.224069  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.224097  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.224123  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:10.224145  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:10.224210  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:10.394487  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.395406  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.395448  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.395483  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:10.395509  8956 analysis_predictor.cc:848]  - Version incompatible (1) concat
W0301 15:41:10.395537  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.395560  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.395581  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:10.401499  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:10.404443  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:10.404527  8956 naive_executor.cc:105] ---  skip [feed], feed -> data2
I0301 15:41:10.404557  8956 naive_executor.cc:105] ---  skip [feed], feed -> data1
I0301 15:41:10.404654  8956 naive_executor.cc:105] ---  skip [batch_norm_3.tmp_2], fetch -> fetch
I0301 15:41:10.409252  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.410414  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.410470  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.410509  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:10.410538  8956 analysis_predictor.cc:848]  - Version incompatible (1) concat
W0301 15:41:10.410564  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.410589  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.410610  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:10.410672  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:10.794907  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.795852  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.795902  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.795946  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:10.795974  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.795998  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.796020  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:10.801524  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:10.803709  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:10.803786  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:10.803913  8956 naive_executor.cc:105] ---  skip [conv2d_0.tmp_0], fetch -> fetch
I0301 15:41:10.808322  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:10.809159  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:10.809206  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:10.809238  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:10.809264  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:10.809288  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:10.809307  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:10.809358  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.003710  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.004614  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.004660  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.004694  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.004720  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.004745  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.004767  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.009778  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.011833  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.011893  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.011972  8956 naive_executor.cc:105] ---  skip [conv2d_transpose_0.tmp_0], fetch -> fetch
I0301 15:41:11.016551  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.017537  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.017590  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.017624  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.017652  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.017675  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.017696  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.017747  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.095459  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.096199  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.096244  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.096278  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.096305  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.096329  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.096351  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.101169  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.103579  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.103668  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.103790  8956 naive_executor.cc:105] ---  skip [conv2d_transpose_1.tmp_0], fetch -> fetch
I0301 15:41:11.109045  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.110045  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.110093  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.110126  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.110152  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.110177  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.110196  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.110246  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.185909  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.186673  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.186728  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.186764  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:11.186792  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.186818  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.186839  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.191962  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.194345  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.194425  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.194535  8956 naive_executor.cc:105] ---  skip [conv2d_1.tmp_0], fetch -> fetch
I0301 15:41:11.199052  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.200119  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.200167  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.200199  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:11.200227  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.200249  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.200270  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.200323  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.277164  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.277897  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.277940  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.277971  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:11.277997  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.278021  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.278043  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.282970  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.285238  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.285326  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.285434  8956 naive_executor.cc:105] ---  skip [conv2d_2.tmp_0], fetch -> fetch
I0301 15:41:11.290125  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.290946  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.290999  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.291038  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d
W0301 15:41:11.291072  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.291098  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.291121  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.291177  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.368062  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.368744  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.368782  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.368814  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.368840  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.368866  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.368886  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.373687  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.375994  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.376071  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.376166  8956 naive_executor.cc:105] ---  skip [conv2d_transpose_2.tmp_0], fetch -> fetch
I0301 15:41:11.381013  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.381848  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.381896  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.381932  8956 analysis_predictor.cc:848]  - Version incompatible (2) conv2d_transpose
W0301 15:41:11.381961  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.381989  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.382009  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.382063  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.476595  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.477461  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.477504  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.477537  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.477564  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.477589  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.477613  8956 analysis_predictor.cc:848]  - Version incompatible (1) split
W0301 15:41:11.477633  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.483563  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.486105  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.486184  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.486300  8956 naive_executor.cc:105] ---  skip [batch_norm_4.tmp_2], fetch -> fetch
EI0301 15:41:11.526625  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.527515  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.527556  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.527588  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.527614  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.527638  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.527662  8956 analysis_predictor.cc:848]  - Version incompatible (1) swish
W0301 15:41:11.527683  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.533171  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.535718  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.535792  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.535892  8956 naive_executor.cc:105] ---  skip [batch_norm_5.tmp_2], fetch -> fetch
EI0301 15:41:11.681799  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.683076  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.683125  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.683209  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.683238  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_mul
W0301 15:41:11.683260  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.683334  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.683360  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.690284  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.693318  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.693403  8956 naive_executor.cc:105] ---  skip [feed], feed -> data2
I0301 15:41:11.693434  8956 naive_executor.cc:105] ---  skip [feed], feed -> data1
I0301 15:41:11.693544  8956 naive_executor.cc:105] ---  skip [batch_norm_6.tmp_2], fetch -> fetch
I0301 15:41:11.698796  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.700315  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.700371  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.700408  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.700438  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_mul
W0301 15:41:11.700464  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.700495  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.700517  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.700584  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.856894  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.857784  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.857900  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.857949  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.857983  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:41:11.858009  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.858033  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.858055  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.864490  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.867266  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.867401  8956 naive_executor.cc:105] ---  skip [feed], feed -> data2
I0301 15:41:11.867429  8956 naive_executor.cc:105] ---  skip [feed], feed -> data1
I0301 15:41:11.867529  8956 naive_executor.cc:105] ---  skip [batch_norm_7.tmp_2], fetch -> fetch
I0301 15:41:11.871762  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.873008  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.873060  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.873098  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.873128  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:41:11.873153  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.873178  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.873199  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.873262  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:11.967900  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.969117  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.969162  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.969194  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.969220  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.969242  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.969266  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:11.969303  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:41:11.976300  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:41:11.979076  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:41:11.979149  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:41:11.979269  8956 naive_executor.cc:105] ---  skip [batch_norm_8.tmp_2], fetch -> fetch
I0301 15:41:11.983433  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:11.984551  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:11.984599  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:11.984632  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:41:11.984658  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:11.984681  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:11.984704  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:41:11.984724  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:41:11.984786  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:41:27.069733  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:41:27.074415  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:41:27.074470  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:41:27.074524  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:41:27.074553  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:41:27.074576  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:41:27.074599  8956 analysis_predictor.cc:848]  - Version incompatible (2) mul
W0301 15:41:27.074621  8956 analysis_predictor.cc:848]  - Version incompatible (2) reshape2
W0301 15:41:27.074643  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
I0301 15:41:55.844378  8956 graph_pattern_detector.cc:101] ---  detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:42:06.143364  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:42:56.103101  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:42:56.103235  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:42:56.787240  8956 naive_executor.cc:105] ---  skip [reshape2_0.tmp_0], fetch -> fetch
I0301 15:43:00.432143  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:20.696758  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:20.696826  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:20.696890  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:43:20.696923  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:20.696949  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:20.696972  8956 analysis_predictor.cc:848]  - Version incompatible (2) mul
W0301 15:43:20.697255  8956 analysis_predictor.cc:848]  - Version incompatible (2) reshape2
W0301 15:43:20.697291  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:20.697355  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:33.022778  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:33.023713  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:33.023754  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:33.023790  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:33.023819  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:33.023842  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:33.023865  8956 analysis_predictor.cc:848]  - Version incompatible (1) gelu
W0301 15:43:33.023885  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:33.392130  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:33.394603  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:33.394675  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:33.394779  8956 naive_executor.cc:105] ---  skip [batch_norm_9.tmp_2], fetch -> fetch
I0301 15:43:33.564133  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:33.565347  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:33.565407  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:33.565441  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:33.565469  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:33.565493  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:33.565518  8956 analysis_predictor.cc:848]  - Version incompatible (1) gelu
W0301 15:43:33.565539  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:33.565603  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:34.417786  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.418853  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.418900  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.418933  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.418958  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.418982  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.419005  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:43:34.419025  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:34.425029  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:34.427728  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:34.427798  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:34.427914  8956 naive_executor.cc:105] ---  skip [batch_norm_15.tmp_2], fetch -> fetch
I0301 15:43:34.431778  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.433156  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.433214  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.433254  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.433284  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.433311  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.433336  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:43:34.433357  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:34.433421  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:34.580821  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.581650  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.581701  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.581768  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.581799  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.581825  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.581851  8956 analysis_predictor.cc:848]  - Version incompatible (1) hard_sigmoid
W0301 15:43:34.581871  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:34.587625  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:34.590039  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:34.590108  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:34.590201  8956 naive_executor.cc:105] ---  skip [batch_norm_16.tmp_2], fetch -> fetch
I0301 15:43:34.594333  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.595201  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.595248  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.595280  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.595367  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.595396  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.595422  8956 analysis_predictor.cc:848]  - Version incompatible (1) hard_sigmoid
W0301 15:43:34.595446  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:34.595512  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:34.689306  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.690361  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.690405  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.690439  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.690464  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.690488  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.690511  8956 analysis_predictor.cc:848]  - Version incompatible (1) hard_swish
W0301 15:43:34.690531  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:34.696681  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:34.699494  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:34.699604  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:34.699717  8956 naive_executor.cc:105] ---  skip [batch_norm_17.tmp_2], fetch -> fetch
I0301 15:43:34.703845  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:34.704908  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:34.704954  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:34.704988  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:34.705013  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:34.705035  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:34.705057  8956 analysis_predictor.cc:848]  - Version incompatible (1) hard_swish
W0301 15:43:34.705077  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:34.705137  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:35.473163  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.474092  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.474144  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.474181  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:43:35.474208  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.474236  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.474261  8956 analysis_predictor.cc:848]  - Version incompatible (1) instance_norm
W0301 15:43:35.474285  8956 analysis_predictor.cc:848]  - Version incompatible (2) mul
W0301 15:43:35.474306  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
I0301 15:43:35.484629  8956 graph_pattern_detector.cc:101] ---  detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:35.487517  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:35.499882  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:35.500008  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:35.500169  8956 naive_executor.cc:105] ---  skip [instance_norm_0.tmp_2], fetch -> fetch
I0301 15:43:35.505560  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.506657  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.506705  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.506743  8956 analysis_predictor.cc:848]  - Version incompatible (1) elementwise_add
W0301 15:43:35.506772  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.506795  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.506819  8956 analysis_predictor.cc:848]  - Version incompatible (1) instance_norm
W0301 15:43:35.506841  8956 analysis_predictor.cc:848]  - Version incompatible (2) mul
W0301 15:43:35.506861  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:35.506917  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:35.685056  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.685727  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.685770  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.685802  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.685829  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.685854  8956 analysis_predictor.cc:848]  - Version incompatible (3) layer_norm
W0301 15:43:35.685874  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:35.691118  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:35.693095  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:35.693158  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:35.693222  8956 naive_executor.cc:105] ---  skip [layer_norm_0.tmp_2], fetch -> fetch
I0301 15:43:35.696115  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.696871  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.696913  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.696949  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.696974  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.697000  8956 analysis_predictor.cc:848]  - Version incompatible (3) layer_norm
W0301 15:43:35.697026  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:35.697090  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:35.885082  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.886054  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.886108  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.886145  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.886173  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.886221  8956 analysis_predictor.cc:848]  - Version incompatible (3) layer_norm
W0301 15:43:35.886245  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:35.892115  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:35.894415  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:35.894486  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:35.894552  8956 naive_executor.cc:105] ---  skip [layer_norm_2.tmp_2], fetch -> fetch
I0301 15:43:35.897774  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.898696  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.898747  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.898778  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.898804  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.898828  8956 analysis_predictor.cc:848]  - Version incompatible (3) layer_norm
W0301 15:43:35.898847  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:35.898900  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:35.989868  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:35.990767  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:35.990811  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:35.990844  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:35.990869  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:35.990892  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:35.990916  8956 analysis_predictor.cc:848]  - Version incompatible (2) leaky_relu
W0301 15:43:35.990937  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:35.996789  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.000119  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.000195  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.000296  8956 naive_executor.cc:105] ---  skip [batch_norm_18.tmp_2], fetch -> fetch
I0301 15:43:36.004139  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.005069  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.005113  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.005146  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.005172  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.005195  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.005219  8956 analysis_predictor.cc:848]  - Version incompatible (2) leaky_relu
W0301 15:43:36.005239  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.005295  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.100548  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.101738  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.101786  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.101820  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.101846  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.101871  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.101893  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:43:36.101914  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.108078  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.111371  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.111527  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.111749  8956 naive_executor.cc:105] ---  skip [batch_norm_19.tmp_2], fetch -> fetch
I0301 15:43:36.116927  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.118458  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.118511  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.118553  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.118582  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.118604  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.118628  8956 analysis_predictor.cc:848]  - Version incompatible (1) pool2d
W0301 15:43:36.118647  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.118710  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.216006  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.216897  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.216943  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.216977  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.217002  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.217026  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.217049  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.217070  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.222800  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.225365  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.225433  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.225528  8956 naive_executor.cc:105] ---  skip [batch_norm_20.tmp_2], fetch -> fetch
I0301 15:43:36.229315  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.230226  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.230273  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.230305  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.230331  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.230360  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.230389  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.230414  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.230479  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.328769  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.329866  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.329912  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.329947  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.329973  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.329998  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.330019  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.330039  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.336647  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.339570  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.339643  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.339748  8956 naive_executor.cc:105] ---  skip [batch_norm_21.tmp_2], fetch -> fetch
I0301 15:43:36.344015  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.345194  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.345247  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.345283  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.345310  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.345335  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.345358  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.345379  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.345448  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.442144  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.443066  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.443112  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.443145  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.443169  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.443192  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.443215  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.443235  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.449210  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.452286  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.452360  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.452467  8956 naive_executor.cc:105] ---  skip [batch_norm_22.tmp_2], fetch -> fetch
I0301 15:43:36.456702  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.457682  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.457739  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.457777  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.457804  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.457828  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.457852  8956 analysis_predictor.cc:848]  - Version incompatible (1) prelu
W0301 15:43:36.457872  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.457934  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.549227  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.550217  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.550263  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.550297  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.550323  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.550345  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.550369  8956 analysis_predictor.cc:848]  - Version incompatible (1) relu6
W0301 15:43:36.550387  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.555997  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.558387  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.558447  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.558539  8956 naive_executor.cc:105] ---  skip [batch_norm_23.tmp_2], fetch -> fetch
I0301 15:43:36.562260  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.563102  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.563143  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.563201  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.563227  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.563251  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.563274  8956 analysis_predictor.cc:848]  - Version incompatible (1) relu6
W0301 15:43:36.563330  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.563393  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.710995  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.712129  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.712175  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.712211  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.712237  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.712261  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.712285  8956 analysis_predictor.cc:848]  - Version incompatible (1) shuffle_channel
W0301 15:43:36.712306  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.718278  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.720928  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.721004  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.721103  8956 naive_executor.cc:105] ---  skip [batch_norm_24.tmp_2], fetch -> fetch
I0301 15:43:36.725630  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.726977  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.727030  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.727069  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.727100  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.727126  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.727151  8956 analysis_predictor.cc:848]  - Version incompatible (1) shuffle_channel
W0301 15:43:36.727172  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.727237  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.818810  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.819720  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.819764  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.819797  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.819824  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.819847  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.819870  8956 analysis_predictor.cc:848]  - Version incompatible (1) sigmoid
W0301 15:43:36.819890  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.825453  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.827895  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.827962  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.828060  8956 naive_executor.cc:105] ---  skip [batch_norm_25.tmp_2], fetch -> fetch
I0301 15:43:36.831966  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.832840  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.832883  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.832913  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.832938  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.832962  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.832984  8956 analysis_predictor.cc:848]  - Version incompatible (1) sigmoid
W0301 15:43:36.833004  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.833060  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:36.927193  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.928349  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.928397  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.928431  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.928457  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.928479  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.928503  8956 analysis_predictor.cc:848]  - Version incompatible (1) softmax
W0301 15:43:36.928524  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:36.935003  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:36.937754  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:36.937829  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:36.937933  8956 naive_executor.cc:105] ---  skip [batch_norm_26.tmp_2], fetch -> fetch
I0301 15:43:36.942356  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:36.943507  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:36.943550  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:36.943585  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:36.943611  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:36.943635  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:36.943657  8956 analysis_predictor.cc:848]  - Version incompatible (1) softmax
W0301 15:43:36.943678  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:36.943737  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:37.154639  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.155562  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.155606  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.155642  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.155666  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.155689  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.155712  8956 analysis_predictor.cc:848]  - Version incompatible (1) split
W0301 15:43:37.155735  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:37.161501  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:37.164104  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:37.164194  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:37.164311  8956 naive_executor.cc:105] ---  skip [batch_norm_27.tmp_2], fetch -> fetch
I0301 15:43:37.167906  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.168907  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.168958  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.168992  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.169019  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.169044  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.169065  8956 analysis_predictor.cc:848]  - Version incompatible (1) split
W0301 15:43:37.169085  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:37.169144  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:37.260291  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.261082  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.261124  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.261157  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.261183  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.261207  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.261231  8956 analysis_predictor.cc:848]  - Version incompatible (1) split
W0301 15:43:37.261251  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:37.267181  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:37.269691  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:37.269762  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:37.269861  8956 naive_executor.cc:105] ---  skip [batch_norm_28.tmp_2], fetch -> fetch
I0301 15:43:37.273046  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.273916  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.273955  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.273988  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.274013  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.274039  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.274065  8956 analysis_predictor.cc:848]  - Version incompatible (1) split
W0301 15:43:37.274085  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:37.274142  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:37.368017  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.369053  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.369100  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.369133  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.369159  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.369182  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.369204  8956 analysis_predictor.cc:848]  - Version incompatible (1) swish
W0301 15:43:37.369225  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:37.375048  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:37.377590  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:37.377663  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:37.377763  8956 naive_executor.cc:105] ---  skip [batch_norm_29.tmp_2], fetch -> fetch
I0301 15:43:37.381853  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.382891  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.382936  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.382968  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.382995  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.383018  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.383045  8956 analysis_predictor.cc:848]  - Version incompatible (1) swish
W0301 15:43:37.383066  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:37.383124  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:37.474233  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.475009  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.475051  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.475083  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.475109  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.475133  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.475157  8956 analysis_predictor.cc:848]  - Version incompatible (1) swish
W0301 15:43:37.475178  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:37.480840  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:37.483495  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:37.483569  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:37.483669  8956 naive_executor.cc:105] ---  skip [batch_norm_30.tmp_2], fetch -> fetch
I0301 15:43:37.487411  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.488255  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.488298  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.488330  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.488356  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.488380  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.488402  8956 analysis_predictor.cc:848]  - Version incompatible (1) swish
W0301 15:43:37.488422  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:37.488476  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
EI0301 15:43:37.582937  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.584122  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.584168  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.584201  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.584228  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.584251  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.584275  8956 analysis_predictor.cc:848]  - Version incompatible (1) tanh
W0301 15:43:37.584295  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [conv_affine_channel_fuse_pass]
--- Running IR pass [conv_eltwiseadd_affine_channel_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass]
--- Running IR pass [fc_fuse_pass]
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [runtime_context_cache_pass]
--- Running analysis [ir_params_sync_among_devices_pass]
I0301 15:43:37.590291  8956 ir_params_sync_among_devices_pass.cc:41] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [ir_graph_to_program_pass]
I0301 15:43:37.592926  8956 analysis_predictor.cc:462] ======= optimize end =======
I0301 15:43:37.592998  8956 naive_executor.cc:105] ---  skip [feed], feed -> data
I0301 15:43:37.593096  8956 naive_executor.cc:105] ---  skip [batch_norm_31.tmp_2], fetch -> fetch
I0301 15:43:37.597155  8956 analysis_predictor.cc:84] Profiler is deactivated, and no profiling report will be generated.
I0301 15:43:37.598297  8956 analysis_predictor.cc:833] MODEL VERSION: 0.0.0
I0301 15:43:37.598352  8956 analysis_predictor.cc:835] PREDICTOR VERSION: 0.0.0
W0301 15:43:37.598387  8956 analysis_predictor.cc:848]  - Version incompatible (1) batch_norm
W0301 15:43:37.598413  8956 analysis_predictor.cc:848]  - Version incompatible (1) feed
W0301 15:43:37.598436  8956 analysis_predictor.cc:848]  - Version incompatible (1) fetch
W0301 15:43:37.598464  8956 analysis_predictor.cc:848]  - Version incompatible (1) tanh
W0301 15:43:37.598484  8956 analysis_predictor.cc:140] WARNING: Results may be DIFF! Please use the corresponding version of the model and prediction library, and do not use the develop branch.
I0301 15:43:37.598544  8956 analysis_predictor.cc:405] TensorRT subgraph engine is enabled
--- Running analysis [ir_graph_build_pass]
--- Running analysis [ir_graph_clean_pass]
--- Running analysis [ir_analysis_pass]
E
======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassActivationTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassAvgPoolTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 207, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassCeilPoolTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 207, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassConcatTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 485, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassConvTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 56, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassConvTransposeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 117, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassConvTransposeValidPaddingTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 117, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassConvValidPaddingTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 56, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassDepthwiseConvTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 56, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassDepthwiseConvTransposeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 117, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassDynamicSplitFp16SerializeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 556, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 120, in _get_analysis_config
    config.set_trt_dynamic_shape_info(
AttributeError: 'paddle.fluid.core_noavx.AnalysisConfig' object has no attribute 'set_trt_dynamic_shape_info'

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassDynamicSwishFp16SerializeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 305, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 120, in _get_analysis_config
    config.set_trt_dynamic_shape_info(
AttributeError: 'paddle.fluid.core_noavx.AnalysisConfig' object has no attribute 'set_trt_dynamic_shape_info'

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassElementwiseMulTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 648, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassElementwiseTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 648, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassExclusivePoolTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 207, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassFcTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 168, in test_check_output
    self.check_output_with_option(use_gpu, flatten=True)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluDynamicTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 120, in _get_analysis_config
    config.set_trt_dynamic_shape_info(
AttributeError: 'paddle.fluid.core_noavx.AnalysisConfig' object has no attribute 'set_trt_dynamic_shape_info'

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluFp16DynamicSerializeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 305, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 120, in _get_analysis_config
    config.set_trt_dynamic_shape_info(
AttributeError: 'paddle.fluid.core_noavx.AnalysisConfig' object has no attribute 'set_trt_dynamic_shape_info'

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluFp16DynamicTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 305, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 120, in _get_analysis_config
    config.set_trt_dynamic_shape_info(
AttributeError: 'paddle.fluid.core_noavx.AnalysisConfig' object has no attribute 'set_trt_dynamic_shape_info'

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluFp16SerializeTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 305, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)
(C++ call stack identical to the first failure above)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluFp16Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 305, in test_check_output
    self.check_output_with_option(use_gpu, 1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGeluTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassGlobalPoolTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 207, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassHardSigmoidTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassHardSwishTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassInstanceNormTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 584, in test_check_output
    self.check_output_with_option(use_gpu, atol=1e-4, flatten=True)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassLayerNormBeginNormAxis2Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 609, in test_check_output
    self.check_output_with_option(use_gpu, atol=1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassLayerNormBeginNormAxis3Test)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 609, in test_check_output
    self.check_output_with_option(use_gpu, atol=1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassLayerNormTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 609, in test_check_output
    self.check_output_with_option(use_gpu, atol=1e-3)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassLeakyReluTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassPoolTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 207, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: (identical C++ call stack as the first failure above)
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassPreluAllTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_trt_subgraph_pass.py", line 303, in test_check_output
    self.check_output_with_option(use_gpu)
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 180, in check_output_with_option
    use_gpu=use_gpu, use_trt=self.enable_trt))
  File "/root/Paddle/python/paddle/fluid/tests/unittests/ir/inference/inference_pass_test.py", line 76, in _get_analysis_outputs
    predictor = create_paddle_predictor(config)
paddle.fluid.core_noavx.EnforceNotMet: 

--------------------------------------------
C++ Call Stacks (More useful to developers):
--------------------------------------------
0   std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > paddle::platform::GetTraceBackString<char const*>(char const*&&, char const*, int)
1   paddle::platform::EnforceNotMet::EnforceNotMet(std::__exception_ptr::exception_ptr, char const*, int)
2   paddle::framework::ir::PassRegistry::Get(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&) const
3   paddle::inference::analysis::IRPassManager::CreatePasses(paddle::inference::analysis::Argument*, std::vector<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::allocator<std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > > > const&)
4   paddle::inference::analysis::IRPassManager::IRPassManager(paddle::inference::analysis::Argument*)
5   paddle::inference::analysis::IrAnalysisPass::RunImpl(paddle::inference::analysis::Argument*)
6   paddle::inference::analysis::Analyzer::RunAnalysis(paddle::inference::analysis::Argument*)
7   paddle::AnalysisPredictor::OptimizeInferenceProgram()
8   paddle::AnalysisPredictor::PrepareProgram(std::shared_ptr<paddle::framework::ProgramDesc> const&)
9   paddle::AnalysisPredictor::Init(std::shared_ptr<paddle::framework::Scope> const&, std::shared_ptr<paddle::framework::ProgramDesc> const&)
10  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig, (paddle::PaddleEngineKind)2>(paddle::AnalysisConfig const&)
11  std::unique_ptr<paddle::PaddlePredictor, std::default_delete<paddle::PaddlePredictor> > paddle::CreatePaddlePredictor<paddle::AnalysisConfig>(paddle::AnalysisConfig const&)

----------------------
Error Message Summary:
----------------------
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170)

======================================================================
ERROR: test_check_output (__main__.TensorRTSubgraphPassPreluChannelTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassPreluElementTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassRelu6Test)
ERROR: test_check_output (__main__.TensorRTSubgraphPassShuffleChannelTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSigmoidTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSoftMaxTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSplitSerializeTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSplitTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSwishFp16SerializeTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassSwishTest)
ERROR: test_check_output (__main__.TensorRTSubgraphPassTanhTest)
----------------------------------------------------------------------
(Each of the above fails with the identical traceback and error shown for TensorRTSubgraphPassPreluAllTest:
Error: Pass tensorrt_subgraph_pass has not been registered at (/home/paddle/github/paddle/paddle/fluid/framework/ir/pass.h:170))

----------------------------------------------------------------------
Ran 43 tests in 250.180s

FAILED (errors=43)
FlyingQianMM commented 3 years ago

Judging from the test-script results, your self-compiled Paddle was built without TensorRT support.

Try explicitly specifying -DWITH_TENSORRT=ON when running cmake.
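A minimal rebuild sketch of the suggestion above. The only flag taken from this thread is `-DWITH_TENSORRT=ON`; the build directory, the `TENSORRT_ROOT` path (a typical JetPack layout), and the other options are assumptions — adjust them to your own checkout and device:

```shell
# Reconfigure and rebuild the Paddle source tree with TensorRT enabled.
# Assumed paths: Paddle checkout in ~/Paddle, TensorRT installed by
# JetPack under /usr (headers/libs in the aarch64-linux-gnu dirs).
cd ~/Paddle/build
cmake .. \
  -DWITH_GPU=ON \
  -DWITH_TENSORRT=ON \
  -DTENSORRT_ROOT=/usr \
  -DWITH_PYTHON=ON
make -j4   # -j4 keeps memory use manageable on a Jetson Nano
```

After reinstalling the resulting wheel, rerunning test_trt_subgraph_pass.py should no longer report the unregistered tensorrt_subgraph_pass.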