Open 695002684 opened 2 years ago
Could you tell me how you exported the deployment model? Did you specify --fixed_input_shape?
@will-jl944 Hi,
I used the deployment export method provided by paddlex: --export_inference --model_dir=./output/deeplabv3p_r50vd/best_model/ --save_dir=./inference_model --fixed_input_shape=[224,224]. For the exported model, --fixed_input_shape was specified as [256,256]. The model is PPYOLO.
@will-jl944
- Just to confirm: it is PPYOLO and not DeepLabV3+, right?
- Which version of paddlepaddle are you using?
1. Confirmed, it is PPYOLO. The error message is below:
2022-02-17 11:15:49 [INFO] Model[PPYOLO] loaded.
Found 33 inference images in total.
['D:\gs\wuzi\tu\test_png1\1795.png', 'D:\gs\wuzi\tu\test_png1\1807.png', 'D:\gs\wuzi\tu\test_png1\1812.png', 'D:\gs\wuzi\tu\test_png1\1805.png', 'D:\gs\wuzi\tu\test_png1\1790.png', 'D:\gs\wuzi\tu\test_png1\1806.png', 'D:\gs\wuzi\tu\test_png1\1808.png', 'D:\gs\wuzi\tu\test_png1\1798.png', 'D:\gs\wuzi\tu\test_png1\1792.png', 'D:\gs\wuzi\tu\test_png1\1814.png', 'D:\gs\wuzi\tu\test_png1\1787.png', 'D:\gs\wuzi\tu\test_png1\1794.png', 'D:\gs\wuzi\tu\test_png1\1791.png', 'D:\gs\wuzi\tu\test_png1\1813.png', 'D:\gs\wuzi\tu\test_png1\1793.png', 'D:\gs\wuzi\tu\test_png1\1799.png', 'D:\gs\wuzi\tu\test_png1\1800.png', 'D:\gs\wuzi\tu\test_png1\1802.png', 'D:\gs\wuzi\tu\test_png1\1788.png', 'D:\gs\wuzi\tu\test_png1\1797.png', 'D:\gs\wuzi\tu\test_png1\1789.png', 'D:\gs\wuzi\tu\test_png1\1782.png', 'D:\gs\wuzi\tu\test_png1\1804.png', 'D:\gs\wuzi\tu\test_png1\1803.png', 'D:\gs\wuzi\tu\test_png1\1809.png', 'D:\gs\wuzi\tu\test_png1\1811.png', 'D:\gs\wuzi\tu\test_png1\1796.png', 'D:\gs\wuzi\tu\test_png1\1785.png', 'D:\gs\wuzi\tu\test_png1\1810.png', 'D:\gs\wuzi\tu\test_png1\1801.png', 'D:\gs\wuzi\tu\test_png1\1783.png', 'D:\gs\wuzi\tu\test_png1\1784.png', 'D:\gs\wuzi\tu\test_png1\1786.png']
Traceback (most recent call last):
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "c:\Users\THINKER\.vscode\extensions\ms-python.python-2022.0.1814523869\pythonFiles\lib\python\debugpy\__main__.py", line 45, in <module>
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\Scripts\paddlex.exe\__main__.py", line 7, in <module>
sys.exit(main())
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddlex\command.py", line 158, in main
model._export_inference_model(args.save_dir, fixed_input_shape)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddlex\cv\models\base.py", line 606, in _export_inference_model
paddle.jit.save(static_net, osp.join(save_dir, 'model'))
File "<decorator-gen-101>", line 2, in save
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\base.py", line 40, in __impl__
return func(*args, **kwargs)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\jit.py", line 744, in save
inner_input_spec)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\program_translator.py", line 517, in concrete_program_specify_input_spec
*desired_input_spec)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\program_translator.py", line 427, in get_concrete_program
concrete_program, partial_program_layer = self._program_cache[cache_key]
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\program_translator.py", line 744, in __getitem__
self._caches[item] = self._build_once(item)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\program_translator.py", line 735, in _build_once
**cache_key.kwargs)
File "<decorator-gen-99>", line 2, in from_func_spec
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\wrapped_decorator.py", line 25, in __impl__
return wrapped_func(*args, **kwargs)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\base.py", line 40, in __impl__
return func(*args, **kwargs)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\program_translator.py", line 683, in from_func_spec
outputs = static_func(*inputs)
File "C:\Users\THINKER\AppData\Local\Temp\tmp7_hcd4rj.py", line 86, in forward
false_fn_4, (), (inputs, self), (out,))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 211, in convert_ifelse
out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 271, in _run_py_ifelse
return true_fn(*true_args) if pred else false_fn(*false_args)
File "C:\Users\THINKER\AppData\Local\Temp\tmp7_hcd4rj.py", line 69, in false_fn_4
for_loop_body_0, [inputs_list, __for_loop_var_index_0])
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 45, in convert_while_loop
loop_vars = _run_py_while(cond, body, loop_vars)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 59, in _run_py_while
loop_vars = body(*loop_vars)
File "C:\Users\THINKER\AppData\Local\Temp\tmp7_hcd4rj.py", line 64, in for_loop_body_0
dy2static.convert_call(self.get_pred)())
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddlex\ppdet\modeling\architectures\yolo.py", line 124, in get_pred
return self._forward()
File "C:\Users\THINKER\AppData\Local\Temp\tmpi3_m_vxq.py", line 98, in _forward
__return_value_0, neck_feats, self), (__return_value_0,))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 211, in convert_ifelse
out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 271, in _run_py_ifelse
return true_fn(*true_args) if pred else false_fn(*false_args)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddlex\ppdet\modeling\architectures\yolo.py", line 96, in _forward
yolo_head_outs = self.yolo_head(neck_feats)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\layers.py", line 914, in __call__
outputs = self.forward(*inputs, **kwargs)
File "C:\Users\THINKER\AppData\Local\Temp\tmp0yjr33jj.py", line 120, in forward
(__return_value_3, i, yolo_outputs))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 211, in convert_ifelse
out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 271, in _run_py_ifelse
return true_fn(*true_args) if pred else false_fn(*false_args)
File "C:\Users\THINKER\AppData\Local\Temp\tmp0yjr33jj.py", line 115, in false_fn_32
self, yolo_outputs), (__return_value_3, i, yolo_outputs)))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 211, in convert_ifelse
out = _run_py_ifelse(pred, true_fn, false_fn, true_args, false_args)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 271, in _run_py_ifelse
return true_fn(*true_args) if pred else false_fn(*false_args)
File "C:\Users\THINKER\AppData\Local\Temp\tmp0yjr33jj.py", line 101, in true_fn_31
__for_loop_var_index_7])
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 45, in convert_while_loop
loop_vars = _run_py_while(cond, body, loop_vars)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\dygraph\dygraph_to_static\convert_operators.py", line 59, in _run_py_while
loop_vars = body(*loop_vars)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddlex\ppdet\modeling\heads\yolo_head.py", line 105, in forward
x = x.reshape((b, na, no, h * w))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\tensor\manipulation.py", line 2001, in reshape
return paddle.fluid.layers.reshape(x=x, shape=shape, name=name)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\layers\nn.py", line 6273, in reshape
"XShape": x_shape})
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\layer_helper.py", line 43, in append_op
return self.main_program.current_block().append_op(*args, **kwargs)
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\framework.py", line 3184, in append_op
attrs=kwargs.get("attrs", None))
File "C:\Users\THINKER\anaconda3\envs\my_paddlex\lib\site-packages\paddle\fluid\framework.py", line 2224, in __init__
for frame in traceback.extract_stack():
InvalidArgumentError: The 'shape' in ReshapeOp is invalid. The input tensor X'size must be equal to the capacity of 'shape'. But received X's shape = [33, 24, 8, 8], X's size =
50688, 'shape' is [1, 3, 8, 64], the capacity of 'shape' is 1536. [Hint: Expected capacity == in_size, but received capacity:1536 != in_size:50688.] (at ..\paddle/fluid/operators/reshape_op.cc:224) [operator < reshape2 > error]
2. The paddlex version is 2.1.0.
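The numbers in the error message are consistent with a batch-size mismatch: the exported graph appears to bake batch size 1 into the reshape target, so a batch of 33 images no longer fits. A quick arithmetic check (plain numpy, not PaddleX code; the shapes are copied from the error above):

```python
import numpy as np

x_shape = (33, 24, 8, 8)   # actual tensor entering reshape: a batch of 33 images
target = (1, 3, 8, 64)     # reshape target fixed at export time, batch size 1

in_size = int(np.prod(x_shape))    # 50688, the "X's size" in the error
capacity = int(np.prod(target))    # 1536, the "capacity of 'shape'"
print(in_size, capacity)           # 50688 != 1536 -> ReshapeOp rejects the batch

# With the batch dimension left dynamic (-1), the remaining dims still fit:
per_image = int(np.prod(target[1:]))   # 3 * 8 * 64 = 1536 elements per image
assert in_size % per_image == 0        # 50688 / 1536 = 33 images
```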
> 1. Confirmed, it is PPYOLO. The error message is below.
OK, we will try to reproduce it.
> The paddlex version is 2.1.0.
I meant the version of paddlepaddle, not PaddleX.
@will-jl944 Sorry about that, the paddlepaddle version is 2.2.1.
> --fixed_input_shape=[224,224]; for the exported model, --fixed_input_shape was specified as [256,256]; the model is PPYOLO.
We could not reproduce this error with either [224,224] or [256,256]. Could you share the contents of the model.yml file under "D:\paddlex_workspace\projects\P0012\T0064\output\best_model\inference_model"?
> We could not reproduce this error with either [224,224] or [256,256]. Could you share the contents of the model.yml file under "D:\paddlex_workspace\projects\P0012\T0064\output\best_model\inference_model"?

Here is model.yml:

    Model: PPYOLO
    Transforms:
    - Resize:
        interp: CUBIC
        keep_ratio: false
        target_size:
        - 256
        - 256
    - Normalize:
        is_scale: true
        max_val:
        - 255.0
        - 255.0
        - 255.0
        mean:
        - 0.485
        - 0.456
        - 0.406
        min_val:
        - 0
        - 0
        - 0
        std:
        - 0.229
        - 0.224
        - 0.225
    _Attributes:
      eval_metrics:
        bbox_map: 85.92018147645297
      fixed_input_shape:
      - 1
      - 3
      - 256
      - 256
      labels:
      - Different colours
      - Lable
      - Long Scratch
      model_type: detector
      num_classes: 3
    _init_params:
      anchor_masks: null
      anchors: null
      backbone: ResNet18_vd
      ignore_threshold: 0.7
      label_smooth: false
      nms_iou_threshold: 0.45
      nms_keep_topk: 100
      nms_score_threshold: 0.01
      nms_topk: -1
      num_classes: 3
      scale_x_y: 1.05
      use_coord_conv: true
      use_drop_block: true
      use_iou_aware: true
      use_iou_loss: true
      use_matrix_nms: true
      use_spp: true
    completed_epochs: 0
    status: Infer
    version: 2.1.0
> We could not reproduce this error with either [224,224] or [256,256]. Could you share the contents of the model.yml file under "D:\paddlex_workspace\projects\P0012\T0064\output\best_model\inference_model"?

Here is the prediction code used:

    import paddlex as pdx
    import os
    import glob


    def get_test_images(infer_dir):
        """Get image path list in TEST mode"""
        # "--infer_img or --infer_dir should be set"
        # assert infer_img is None or os.path.isfile(infer_img), \
        #     "{} is not a file".format(infer_img)
        assert infer_dir is None or os.path.isdir(infer_dir), \
            "{} is not a directory".format(infer_dir)
        # infer_img has a higher priority
        # if infer_img and os.path.isfile(infer_img):
        #     return [infer_img]
        images = set()
        infer_dir = os.path.abspath(infer_dir)
        assert os.path.isdir(infer_dir), \
            "infer_dir {} is not a directory".format(infer_dir)
        exts = ['jpg', 'jpeg', 'png', 'bmp']
        exts += [ext.upper() for ext in exts]
        for ext in exts:
            images.update(glob.glob('{}/*.{}'.format(infer_dir, ext)))
        images = list(images)
        assert len(images) > 0, "no image found in {}".format(infer_dir)
        print("Found {} inference images in total.".format(len(images)))
        return images


    predictor = pdx.deploy.Predictor(r'D:\paddlex_workspace\projects\P0012\T0064\output\best_model\inference_model')
    image_dir = r'D:\gs\wuzi\tu\test_png1'
    img_list = get_test_images(image_dir)
    result = predictor.batch_predict(img_list)
We have located the problem and will push a fix shortly.
Until the fix lands, you can work around it by exporting with --fixed_input_shape=[-1,3,height,width]
(i.e. explicitly setting the input batch size to -1).
Thanks for the report.
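As a concrete sketch of the workaround (the model_dir/save_dir paths and the 256×256 size are taken from the messages above; adjust them to your own setup):

```shell
# Re-export with a dynamic batch dimension (-1) so batch_predict can feed
# more than one image per forward pass; height/width stay at the trained size.
paddlex --export_inference \
    --model_dir=./output/best_model \
    --save_dir=./inference_model \
    --fixed_input_shape=[-1,3,256,256]
```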
It runs now. Looking forward to your fix.
After switching to the batch prediction code (identical to the code above), the total time did not improve noticeably.

Single-image prediction:
------------------ Inference Time Info ----------------------
total_time(ms): 19.0, img_num: 1, batch_size: 1
average latency time(ms): 19.00, QPS: 52.631579
preprocess_time_per_im(ms): 2.00, inference_time_per_batch(ms): 16.00, postprocess_time_per_im(ms): 1.00

Batch prediction:
------------------ Inference Time Info ----------------------
total_time(ms): 31258.5, img_num: 28, batch_size: 28
average latency time(ms): 312.58, QPS: 3.199130
preprocess_time_per_im(ms): 1.90, inference_time_per_batch(ms): 257.40, postprocess_time_per_im(ms): 0.10

Question: how can I effectively speed up batch prediction?
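One variable worth measuring before deeper tuning is the batch size itself: a single 28-image batch inflates `inference_time_per_batch`, and intermediate sizes (e.g. 4 or 8) sometimes balance throughput and memory better. A minimal chunking helper for that experiment (plain Python; `predictor` and `img_list` are the objects from the code above, and 8 is an arbitrary size to sweep over, not a recommendation):

```python
def chunked(seq, batch_size):
    """Yield successive batch_size-sized chunks of seq (the last may be shorter)."""
    for i in range(0, len(seq), batch_size):
        yield seq[i:i + batch_size]

# Usage sketch with the paddlex Predictor from the code above:
# results = []
# for batch in chunked(img_list, 8):
#     results.extend(predictor.batch_predict(batch))

print(list(chunked(list(range(7)), 3)))  # -> [[0, 1, 2], [3, 4, 5], [6]]
```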
Hi, I am using PaddleX 2.1 with a Python deployment of a trained PPYOLO model. When predicting with predictor.batch_predict(img_list), I get the following error:

InvalidArgumentError: The 'shape' in ReshapeOp is invalid. The input tensor X'size must be equal to the capacity of 'shape'. But received X's shape = [33, 24, 8, 8], X's size = 50688, 'shape' is [1, 3, 8, 64], the capacity of 'shape' is 1536. [Hint: Expected capacity == in_size, but received capacity:1536 != in_size:50688.] (at ..\paddle/fluid/operators/reshape_op.cc:224) [operator < reshape2 > error]

I want to run prediction on multiple images; how should I modify this? The code is as follows:

    predictor = pdx.deploy.Predictor(r'D:\paddlex_workspace\projects\P0012\T0064\output\best_model\inference_model')
    image_dir = r'D:\gs\wuzi\tu\test_png1'
    img_list = get_test_images(image_dir)
    predictor.batch_predict(img_list)