Additional info: this is on Windows, CPU only (no GPU). No inference service is involved; the timings come from local inference with the script in the original post.
@zhu2bowen Sorry, I haven't tested this pb model on Windows. You could check whether the same model code has the problem on Ubuntu, or confirm whether it is caused by a different TF version.
@MaybeShewill-CV Thanks. Further testing shows the main problem comes from tf.nn.ctc_beam_search_decoder. Two findings:

1. After loading the frozen graph, the first inference is slow for both the network backbone and the ctc_beam_search_decoder op (each over 1 s). From the second inference on, the backbone drops to under 0.1 s, while ctc_beam_search_decoder still takes over 1 s. Reducing the decoder's beam_width clearly shrinks the second-run time as well; the smaller the beam width, the larger the speedup.
2. Optimizing the pb file with python.tools.optimize_for_inference_lib has no effect.

My guess: the freezing process disables whatever runtime acceleration TensorFlow applies to the ctc_beam_search_decoder op. See https://github.com/tensorflow/tensorflow/issues/26200
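For reference, a minimal sketch of how the per-op timing above can be measured by fetching the backbone output and the decoder output separately. The tensor names here are assumptions for illustration; the real names depend on how the CRNN graph was exported:

```python
import time
import numpy as np
import tensorflow as tf

# Hypothetical tensor names -- adjust to the actual frozen graph
INPUT_TENSOR = "input:0"                    # image placeholder (assumed name)
LOGITS_TENSOR = "logits:0"                  # backbone output feeding the decoder (assumed name)
DECODER_TENSOR = "CTCBeamSearchDecoder:0"   # decoder output, as in the script below

with tf.Session(graph=graph) as sess:       # `graph` holds the loaded frozen pb
    image = np.random.rand(1, 32, 100, 3).astype(np.float32)  # dummy input
    for i in range(3):
        t0 = time.time()
        sess.run(LOGITS_TENSOR, feed_dict={INPUT_TENSOR: image})   # backbone only
        t1 = time.time()
        sess.run(DECODER_TENSOR, feed_dict={INPUT_TENSOR: image})  # backbone + decoder
        t2 = time.time()
        # decoder cost ~= full run minus the backbone-only run
        print("run %d: backbone %.3fs, decoder ~%.3fs" % (i, t1 - t0, (t2 - t1) - (t1 - t0)))
```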
@zhu2bowen Beam search is indeed slow. You could try greedy search and see whether it balances accuracy and speed :)
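Swapping the decoder in the inference graph is a small change. A sketch, assuming `inference_ret` (time-major logits of shape [max_time, batch, num_classes]) and `seq_len` (int32 sequence lengths) are the tensors fed into the decoder:

```python
import tensorflow as tf

# Slow path: beam search; cost grows with beam_width
decoded, log_prob = tf.nn.ctc_beam_search_decoder(
    inference_ret, seq_len, beam_width=10, merge_repeated=False)

# Fast path: greedy decoding (effectively beam_width=1)
decoded, log_prob = tf.nn.ctc_greedy_decoder(
    inference_ret, seq_len, merge_repeated=True)
```

Note that after the swap the decoder node is no longer named CTCBeamSearchDecoder, so OUTPUT_NODE_NAME in the conversion script below would need to be updated to match the new op's name.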
As the title says: running .\tools\test_shadownet.py locally, single-image inference takes under 1 s, but after converting to a pb file, loading it and running inference is 5x slower! The conversion script is as follows:
```python
#!/usr/bin/env python3
import argparse
import sys

import numpy as np
import tensorflow as tf

sys.path.append(r"D:\develop\crnn_tensorflow\CRNN_Tensorflow")
from config import global_config
from crnn_model import crnn_net

CFG = global_config.cfg
OUTPUT_NODE_NAME = ["CTCBeamSearchDecoder"]


def init_args():
    """..."""  # argument parsing, body elided in the original post


def build_saved_model(ckpt_path, pb_path):
    """Convert source ckpt weights file into tensorflow saved model
    :param ckpt_path:
    :param export_dir:
    :return:
    """
    # build inference tensorflow graph
    ...  # body elided in the original post


def ckpt2pb(ckpt_file, pb_path):
    """Transfer ckpt format to pb."""
    ...  # body elided in the original post


if __name__ == '__main__':
    # init args, build saved model
    ...  # body elided in the original post
```
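The function bodies above are elided in the original post. For context, the freezing step a ckpt2pb function typically performs looks roughly like this (a sketch assuming TF 1.x and that the inference graph has already been built in the default graph; not necessarily what the poster's elided body does):

```python
def ckpt2pb_sketch(ckpt_file, pb_path):
    """Restore weights and freeze the graph down to the decoder output."""
    with tf.Session() as sess:
        saver = tf.train.Saver()
        saver.restore(sess, ckpt_file)
        # Bake variables into constants, keeping only the nodes needed
        # to compute OUTPUT_NODE_NAME
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), OUTPUT_NODE_NAME)
    with tf.gfile.GFile(pb_path, "wb") as f:
        f.write(frozen.SerializeToString())
```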
The inference script is as follows:
```python
import os.path as osp
import time

import cv2


class Inference:
    def __init__(self, model_dir):
        """
        :param pb_path: path of pb-model file
        :param with_nms: True or False, the model is with nms operations
        """
        char_dict_path = osp.join(model_dir, "char_dict_cn.json")
        ord_map_dict_path = osp.join(model_dir, "ord_map_cn.json")
        # CharDict is a codec helper from the poster's codebase (elided here)
        self.codec = CharDict(char_dict_path, ord_map_dict_path)

    # setup() and infer() bodies elided in the original post


if __name__ == "__main__":
    infer = Inference(r"D:\develop\crnn_tensorflow\pretrained_models\saved_pb_diy")
    infer.setup()
    img_path = r"D:\develop\data\projects\distribution_license_detection\tests\tmp.JPG"
    image = cv2.imread(img_path, cv2.IMREAD_COLOR)
    print(time.time())
    print(infer.infer(image))
    print(time.time())
```
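setup() and infer() are elided above. A minimal sketch of the loading side, i.e. what a setup() for a frozen pb usually amounts to in TF 1.x (the exact implementation in the poster's code is unknown):

```python
import tensorflow as tf

def load_frozen_graph(pb_path):
    """Load a frozen .pb file into a fresh graph."""
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph
```

Given the observation above that the first run is much slower than subsequent ones, it is worth adding a warm-up inference in setup() and timing from the second run onwards, so the reported latency excludes one-time graph initialization.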