PaddlePaddle / PaddleSeg

Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
https://arxiv.org/abs/2101.06175
Apache License 2.0

RK3588 FastDeploy deployment of pp_liteseg produces abnormal results #3555

Open chenglong-do opened 1 year ago

chenglong-do commented 1 year ago

Issue confirmation / Search before asking

Bug description / Describe the Bug

Following the [PaddleSeg RKNPU2 C++ deployment example](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8/deploy/fastdeploy/semantic_segmentation/rockchip/rknpu2/cpp#paddleseg-rknpu2-c%E9%83%A8%E7%BD%B2%E7%A4%BA%E4%BE%8B), I exported the model, converted it, and ran both the C++ and Python example code, but no segmentation result is recognized. Deploying the same model with FastDeploy on x86 works correctly.

pp_liteseg_stdc2.yml

_base_: './pp_liteseg_stdc1_camvid_960x720_10k.yml'

batch_size: 6  # total: 4*6
iters: 10000

train_dataset:
  type: Dataset
  dataset_root: /opt/drone/datasets-seg-paddle
  num_classes: 2
  mode: train
  train_path: /opt/drone/datasets-seg-paddle/train.txt
  transforms:
    - type: ResizeStepScaling
      min_scale_factor: 0.5
      max_scale_factor: 2.5
      scale_step_size: 0.25
    - type: RandomPaddingCrop
      crop_size: [960, 720]
    - type: RandomHorizontalFlip
    - type: RandomDistort
      brightness_range: 0.5
      contrast_range: 0.5
      saturation_range: 0.5
    - type: Normalize

val_dataset:
  type: Dataset
  dataset_root: /opt/drone/datasets-seg-paddle
  num_classes: 2
  mode: val
  val_path: /opt/drone/datasets-seg-paddle/val.txt
  transforms:
    - type: Normalize

optimizer:
  type: SGD
  momentum: 0.9
  weight_decay: 5.0e-4

lr_scheduler:
  type: PolynomialDecay
  learning_rate: 0.01
  end_lr: 0
  power: 0.9
  warmup_iters: 200
  warmup_start_lr: 1.0e-5

loss:
  types:
    - type: OhemCrossEntropyLoss
      min_kept: 250000   # batch_size * 960 * 720 // 16
    - type: OhemCrossEntropyLoss
      min_kept: 250000
    - type: OhemCrossEntropyLoss
      min_kept: 250000
  coef: [1, 1, 1]

model:
  _inherited_: False  # not inherit the model params from the base yaml
  type: PPLiteSeg
  backbone:
    type: STDC2
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz
  arm_out_chs: [32, 64, 128]
  seg_head_inter_chs: [32, 64, 64]
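
For reference, a minimal sketch (my addition, not part of the original report; it assumes PaddleSeg release/2.8 and its Config/SegBuilder interface) to confirm the YAML above parses and the two-class PP-LiteSeg model can be built before training or export:

import paddle
from paddleseg.cvlibs import Config, SegBuilder

# Load the training config shown above and build its components.
cfg = Config('configs/pp_liteseg/pp_liteseg_stdc2.yml')
builder = SegBuilder(cfg)

model = builder.model                                      # PPLiteSeg with an STDC2 backbone
print(type(model).__name__)                                # expect: PPLiteSeg
print('num_classes:', builder.train_dataset.num_classes)   # expect: 2

# Quick forward pass with a dummy 960x720 (HxW) input to check the output shape.
model.eval()
x = paddle.rand([1, 3, 960, 720])
print([tuple(o.shape) for o in model(x)])                  # expect a logit map of shape (1, 2, 960, 720)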

Model export command

python tools/export.py \
        --config configs/pp_liteseg/pp_liteseg_stdc2.yml \
        --input_shape 1 3 960 720 \
        --output_op none \
        --model_path output/best_model/model.pdparams \
        --save_dir output/inference_model
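
As a sanity check, the exported Paddle inference model can be run directly before any ONNX/RKNN conversion (a minimal sketch, assuming the FastDeploy Python package with its default CPU backend on x86; it mirrors the statement above that x86 deployment works, and reuses the image path from the inference step further down):

import cv2
import fastdeploy as fd

# deploy.yaml is generated by tools/export.py alongside the model files.
model = fd.vision.segmentation.PaddleSegModel(
    "output/inference_model/model.pdmodel",
    "output/inference_model/model.pdiparams",
    "output/inference_model/deploy.yaml")

im = cv2.imread("/opt/images/frame_0000.jpg")
result = model.predict(im)
print(result)                                        # should show a non-empty label map on x86

vis = fd.vision.vis_segmentation(im, result, weight=0.5)
cv2.imwrite("vis_x86.png", vis)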

Convert to ONNX

paddle2onnx --model_dir PaddleSeg/output/inference_model/ \
            --model_filename model.pdmodel \
            --params_filename model.pdiparams \
            --save_file PaddleSeg/output/inference_model/infer.onnx \
            --enable_dev_version True --opset_version 11
# Log
[Paddle2ONNX] Start to parse PaddlePaddle model...
[Paddle2ONNX] Model file path: /opt/drone/PaddleSeg/output/inference_model/model.pdmodel
[Paddle2ONNX] Paramters file path: /opt/drone/PaddleSeg/output/inference_model/model.pdiparams
[Paddle2ONNX] Start to parsing Paddle model...
[Paddle2ONNX] Use opset_version = 11 for ONNX export.
[Paddle2ONNX] PaddlePaddle model is exported as ONNX format now.
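
Before quantizing, the ONNX model itself can be checked (a minimal sketch, assuming onnxruntime is installed; the 1x3x960x720 layout and the ImageNet mean/std values match the RKNN config below, and the test image is the one used later in the issue):

import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("PaddleSeg/output/inference_model/infer.onnx")
input_name = sess.get_inputs()[0].name

img = cv2.imread("/opt/images/frame_0000.jpg")
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype("float32")
img = cv2.resize(img, (720, 960))                         # dsize is (W, H) -> tensor HxW = 960x720
img = (img - [123.675, 116.28, 103.53]) / [58.395, 57.12, 57.375]
x = img.transpose(2, 0, 1)[None].astype("float32")        # NCHW: 1x3x960x720

logits = sess.run(None, {input_name: x})[0]               # expect shape (1, 2, 960, 720)
pred = logits.argmax(axis=1)
print(logits.shape, np.unique(pred, return_counts=True))  # both classes should appear if the ONNX export is sound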

Convert to RKNN

config/pp_liteseg.yaml

mean:
  -
    - 123.675
    - 116.28
    - 103.53
std:
  -
    - 58.395
    - 57.12
    - 57.375
model_path: PaddleSeg/output/inference_model/infer.onnx
outputs_nodes:
do_quantization: True
dataset: "FastDeploy/tools/rknpu2/dataset.txt"
output_folder: "./output"

python tools/rknpu2/export.py --config_path config/pp_liteseg.yaml --target_platform rk3588
# Log
{'mean': [[123.675, 116.28, 103.53]], 'std': [[58.395, 57.12, 57.375]], 'model_path': '/opt/drone/PaddleSeg/output/inference_model/infer.onnx', 'outputs_nodes': None, 'do_quantization': True, 'dataset': '/opt/drone/FastDeploy/tools/rknpu2/dataset.txt', 'output_folder': './output'}
W __init__: rknn-toolkit2 version: 1.5.2+b642f30c
W load_onnx: It is recommended onnx opset 12, but your onnx model opset is 11!

I base_optimize ...
I base_optimize done.
I 
I fold_constant ...
I fold_constant done.
I fold_constant remove nodes = ['p2o.Concat.36', 'p2o.Slice.11', 'p2o.Shape.22', 'p2o.Cast.11', 'p2o.Concat.33', 'p2o.Slice.10', 'p2o.Shape.20', 'p2o.Cast.10', 'p2o.Slice.9', 'p2o.Cast.9', 'p2o.Shape.18', 'p2o.Concat.30', 'p2o.Slice.8', 'p2o.Shape.16', 'p2o.Cast.8', 'p2o.Slice.7', 'p2o.Cast.7', 'p2o.Shape.14', 'p2o.Concat.27', 'p2o.Slice.6', 'p2o.Shape.12', 'p2o.Cast.6', 'p2o.Slice.5', 'p2o.Cast.5', 'p2o.Shape.10', 'p2o.Concat.26', 'p2o.Slice.4', 'p2o.Shape.8', 'p2o.Cast.4', 'p2o.Concat.25', 'p2o.Slice.3', 'p2o.Shape.6', 'p2o.Cast.3', 'p2o.Concat.24', 'p2o.Slice.2', 'p2o.Shape.4', 'p2o.Cast.2', 'p2o.Slice.1', 'p2o.Cast.1', 'p2o.Shape.2', 'p2o.Slice.0', 'p2o.Cast.0', 'p2o.Shape.0']
I 
I Output[bilinear_interp_v2_6.tmp_0] shape with str value may cause error, replace [1, 2, 'unk__36', 'unk__37'] with [1, 2, 960, 720].
I correct_ops ...
I correct_ops done.
I 
I fuse_ops ...
I fuse_ops results:
I     fuse_bn_into_conv: remove node = ['p2o.BatchNormalization.0', 'p2o.BatchNormalization.1', 'p2o.BatchNormalization.2', 'p2o.BatchNormalization.3', 'p2o.BatchNormalization.4', 'p2o.BatchNormalization.5', 'p2o.BatchNormalization.6', 'p2o.BatchNormalization.7', 'p2o.BatchNormalization.8', 'p2o.BatchNormalization.9', 'p2o.BatchNormalization.10', 'p2o.BatchNormalization.11', 'p2o.BatchNormalization.12', 'p2o.BatchNormalization.13', 'p2o.BatchNormalization.14', 'p2o.BatchNormalization.15', 'p2o.BatchNormalization.16', 'p2o.BatchNormalization.17', 'p2o.BatchNormalization.18', 'p2o.BatchNormalization.19', 'p2o.BatchNormalization.20', 'p2o.BatchNormalization.21', 'p2o.BatchNormalization.22', 'p2o.BatchNormalization.23', 'p2o.BatchNormalization.24', 'p2o.BatchNormalization.25', 'p2o.BatchNormalization.26', 'p2o.BatchNormalization.27', 'p2o.BatchNormalization.28', 'p2o.BatchNormalization.29', 'p2o.BatchNormalization.30', 'p2o.BatchNormalization.31', 'p2o.BatchNormalization.32', 'p2o.BatchNormalization.33', 'p2o.BatchNormalization.34', 'p2o.BatchNormalization.35', 'p2o.BatchNormalization.36', 'p2o.BatchNormalization.37', 'p2o.BatchNormalization.38', 'p2o.BatchNormalization.39', 'p2o.BatchNormalization.40', 'p2o.BatchNormalization.41', 'p2o.BatchNormalization.42', 'p2o.BatchNormalization.43', 'p2o.BatchNormalization.44', 'p2o.BatchNormalization.45', 'p2o.BatchNormalization.46', 'p2o.BatchNormalization.47', 'p2o.BatchNormalization.48', 'p2o.BatchNormalization.49', 'p2o.BatchNormalization.50', 'p2o.BatchNormalization.51', 'p2o.BatchNormalization.52', 'p2o.BatchNormalization.53', 'p2o.BatchNormalization.54', 'p2o.BatchNormalization.55', 'p2o.BatchNormalization.56']
I     remove_invalid_resize: remove node = ['p2o.Resize.3']
I     fuse_bn_into_conv: remove node = ['p2o.BatchNormalization.57', 'p2o.BatchNormalization.58', 'p2o.BatchNormalization.59', 'p2o.BatchNormalization.60', 'p2o.BatchNormalization.61', 'p2o.BatchNormalization.62', 'p2o.BatchNormalization.63', 'p2o.BatchNormalization.64', 'p2o.BatchNormalization.65', 'p2o.BatchNormalization.66', 'p2o.BatchNormalization.67', 'p2o.BatchNormalization.68', 'p2o.BatchNormalization.69']
I     convert_global_avgpool_to_conv: remove node = ['p2o.GlobalAveragePool.0'], add node = ['p2o.GlobalAveragePool.0_2conv_0', 'p2o.GlobalAveragePool.1']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.2'], add node = ['p2o.ReduceMean.2_2avgpool']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.0'], add node = ['p2o.ReduceMean.0_2avgpool']
I     convert_concat_to_conv_concat: remove node = [], add node = ['p2o.ReduceMean.1_conv_p2o.Concat.28', 'p2o.ReduceMax.1_conv_p2o.Concat.28', 'p2o.ReduceMean.3_conv_p2o.Concat.28', 'p2o.Concat.29_conv']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.6'], add node = ['p2o.ReduceMean.6_2avgpool']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.4'], add node = ['p2o.ReduceMean.4_2avgpool']
I     convert_concat_to_conv_concat: remove node = [], add node = ['p2o.ReduceMean.5_conv_p2o.Concat.31', 'p2o.ReduceMax.5_conv_p2o.Concat.31', 'p2o.ReduceMean.7_conv_p2o.Concat.31', 'p2o.Concat.32_conv']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.10'], add node = ['p2o.ReduceMean.10_2avgpool']
I     convert_reduce_mean_to_avgpool: remove node = ['p2o.ReduceMean.8'], add node = ['p2o.ReduceMean.8_2avgpool']
I     convert_concat_to_conv_concat: remove node = [], add node = ['p2o.ReduceMean.9_conv_p2o.Concat.34', 'p2o.ReduceMax.9_conv_p2o.Concat.34', 'p2o.ReduceMean.11_conv_p2o.Concat.34', 'p2o.Concat.35_conv']
I     fold_constant ...
I     fold_constant done.
I fuse_ops done.

I rknn building ...
I RKNN: [15:46:28.270] compress = 0, conv_eltwise_activation_fuse = 1, global_fuse = 1, multi-core-model-mode = 7, output_optimize = 1,enable_argb_group=0
I RKNN: librknnc version: 1.5.2 (c6b7b351a@2023-08-23T07:30:34)
D RKNN: [15:46:28.308] RKNN is invoked
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.9
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.10
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.15
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.16
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.21
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.22
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.40
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.41
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.49
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.50
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.54
W RKNN: [15:46:28.408] Model initializer tensor data is empty, name: p2o.helper.constant.55
D RKNN: [15:46:28.413] >>>>>> start: N4rknn19RKNNSetOpTargetPassE
D RKNN: [15:46:28.413] <<<<<<<< end: N4rknn19RKNNSetOpTargetPassE
D RKNN: [15:46:28.413] >>>>>> start: N4rknn16RKNNAddFirstConvE
D RKNN: [15:46:28.413] <<<<<<<< end: N4rknn16RKNNAddFirstConvE
D RKNN: [15:46:28.413] >>>>>> start: N4rknn27RKNNEliminateQATDataConvertE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn27RKNNEliminateQATDataConvertE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn17RKNNTileGroupConvE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn17RKNNTileGroupConvE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn15RKNNAddConvBiasE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn15RKNNAddConvBiasE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn15RKNNTileChannelE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn15RKNNTileChannelE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn18RKNNPerChannelPrepE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn18RKNNPerChannelPrepE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn11RKNNBnQuantE
D RKNN: [15:46:28.414] <<<<<<<< end: N4rknn11RKNNBnQuantE
D RKNN: [15:46:28.414] >>>>>> start: N4rknn21RKNNFuseOptimizerPassE
D RKNN: [15:46:28.415] <<<<<<<< end: N4rknn21RKNNFuseOptimizerPassE
D RKNN: [15:46:28.415] >>>>>> start: N4rknn15RKNNTurnAutoPadE
D RKNN: [15:46:28.415] <<<<<<<< end: N4rknn15RKNNTurnAutoPadE
D RKNN: [15:46:28.415] >>>>>> start: N4rknn16RKNNInitRNNConstE
D RKNN: [15:46:28.415] <<<<<<<< end: N4rknn16RKNNInitRNNConstE
D RKNN: [15:46:28.415] >>>>>> start: N4rknn17RKNNInitCastConstE
D RKNN: [15:46:28.415] <<<<<<<< end: N4rknn17RKNNInitCastConstE
D RKNN: [15:46:28.415] >>>>>> start: N4rknn20RKNNMultiSurfacePassE
D RKNN: [15:46:28.415] <<<<<<<< end: N4rknn20RKNNMultiSurfacePassE
D RKNN: [15:46:28.415] >>>>>> start: N4rknn14RKNNTilingPassE
W RKNN: [15:46:28.417] Failed to config layer: 'Conv:p2o.GlobalAveragePool.0_2conv_0' using 2Core fallback to single core mode,
W RKNN: [15:46:28.417] core_num 2 ori_Ih 30 ori_Iw 23 ori_Ic 1024 ori_Ib 1 
W RKNN: [15:46:28.417] ori_Kh 7 ori_Kw 7 ori_Kk 1024 ori_Kc 1 ori_Ksx 7 ori_Ksy 7 
W RKNN: [15:46:28.417] ori_Oh 5 oriOw 4 oriOc 1024 pad_t 2 pad_b 3 pad_l 2 pad_r 3,
W RKNN: [15:46:28.417] Please help report this bug!
W RKNN: [15:46:28.417] Failed to config layer: 'Conv:p2o.GlobalAveragePool.0_2conv_0' using 3Core fallback to single core mode,
W RKNN: [15:46:28.417] core_num 3 ori_Ih 30 ori_Iw 23 ori_Ic 1024 ori_Ib 1 
W RKNN: [15:46:28.417] ori_Kh 7 ori_Kw 7 ori_Kk 1024 ori_Kc 1 ori_Ksx 7 ori_Ksy 7 
W RKNN: [15:46:28.417] ori_Oh 5 oriOw 4 oriOc 1024 pad_t 2 pad_b 3 pad_l 2 pad_r 3,
W RKNN: [15:46:28.417] Please help report this bug!
D RKNN: [15:46:28.417] <<<<<<<< end: N4rknn14RKNNTilingPassE
D RKNN: [15:46:28.417] >>>>>> start: OpEmit
D RKNN: [15:46:28.420] <<<<<<<< end: OpEmit
D RKNN: [15:46:28.420] >>>>>> start: N4rknn19RKNNLayoutMatchPassE
D RKNN: [15:46:28.420] <<<<<<<< end: N4rknn19RKNNLayoutMatchPassE
D RKNN: [15:46:28.420] >>>>>> start: N4rknn20RKNNAddSecondaryNodeE
D RKNN: [15:46:28.420] <<<<<<<< end: N4rknn20RKNNAddSecondaryNodeE
D RKNN: [15:46:28.420] >>>>>> start: OpEmit
W RKNN: [15:46:28.423] AveragePool count_include_pad=0, fallback to cpu
W RKNN: [15:46:28.426] AveragePool count_include_pad=0, fallback to cpu
W RKNN: [15:46:28.430] AveragePool count_include_pad=0, fallback to cpu
D RKNN: [15:46:28.440] <<<<<<<< end: OpEmit
D RKNN: [15:46:28.440] >>>>>> start: N4rknn23RKNNProfileAnalysisPassE
D RKNN: [15:46:28.440] <<<<<<<< end: N4rknn23RKNNProfileAnalysisPassE
D RKNN: [15:46:28.442] >>>>>> start: N4rknn21RKNNOperatorIdGenPassE
D RKNN: [15:46:28.442] <<<<<<<< end: N4rknn21RKNNOperatorIdGenPassE
D RKNN: [15:46:28.442] >>>>>> start: N4rknn23RKNNWeightTransposePassE
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.concat.0 need paramter qtype, type is set to float16 by default!
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.constant.9 need paramter qtype, type is set to float16 by default!
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.concat.5 need paramter qtype, type is set to float16 by default!
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.constant.49 need paramter qtype, type is set to float16 by default!
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.concat.6 need paramter qtype, type is set to float16 by default!
W RKNN: [15:46:28.558] Warning: Tensor p2o.helper.constant.54 need paramter qtype, type is set to float16 by default!
D RKNN: [15:46:28.565] <<<<<<<< end: N4rknn23RKNNWeightTransposePassE
D RKNN: [15:46:28.565] >>>>>> start: N4rknn26RKNNCPUWeightTransposePassE
D RKNN: [15:46:28.565] <<<<<<<< end: N4rknn26RKNNCPUWeightTransposePassE
D RKNN: [15:46:28.565] >>>>>> start: N4rknn18RKNNModelBuildPassE
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.GlobalAveragePool.0_2conv_0
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.GlobalAveragePool.1
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.GlobalAveragePool.1
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.59
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.59
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.59
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.59
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.59
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.63
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.63
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.63
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.63
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.63
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.67
D RKNN: [15:46:28.568] remove core consumption 2 regtasks for op Conv:p2o.Conv.67
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.67
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.67
D RKNN: [15:46:28.568] remove core consumption 3 regtasks for op Conv:p2o.Conv.67
D RKNN: [15:46:28.814] RKNNModelBuildPass: [Statistics]
D RKNN: [15:46:28.814] total_regcfg_size     :    495328
D RKNN: [15:46:28.814] total_diff_regcfg_size:    237416

D RKNN: [15:46:28.848] Total Weight Memory Size: 12214784
D RKNN: [15:46:28.848] Total Internal Memory Size: 44761984
D RKNN: [15:46:28.848] Predict Internal Memory RW Amount: 231446657
D RKNN: [15:46:28.848] Predict Weight Memory RW Amount: 12212352
D RKNN: [15:46:28.848] ----------------------------------------
D RKNN: [15:46:28.848] <<<<<<<< end: N4rknn21RKNNMemStatisticsPassE
I rknn buiding done.
W init_runtime: Target is None, use simulator!
Export OK!
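
To see whether the INT8 quantization itself collapses the output, the quantized graph can be rebuilt and run on the toolkit simulator, then compared with the onnxruntime result above (a minimal sketch, assuming rknn-toolkit2 1.5.x on the x86 host; the paths, mean/std and dataset file are the ones from the config above, and the test image is the one used in the inference step below):

import cv2
import numpy as np
from rknn.api import RKNN

rknn = RKNN()
rknn.config(mean_values=[[123.675, 116.28, 103.53]],
            std_values=[[58.395, 57.12, 57.375]],
            target_platform='rk3588')
rknn.load_onnx(model='PaddleSeg/output/inference_model/infer.onnx')
rknn.build(do_quantization=True, dataset='FastDeploy/tools/rknpu2/dataset.txt')
rknn.init_runtime()                                        # no target -> toolkit simulator

img = cv2.imread('/opt/images/frame_0000.jpg')
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (720, 960))                          # (W, H) so the input tensor is 960x720 (HxW)

out = rknn.inference(inputs=[img])[0]                      # expect shape (1, 2, 960, 720)
print(out.shape, np.unique(out.argmax(axis=1), return_counts=True))
rknn.release()

If the simulator output already degenerates to a single class, rebuilding with do_quantization: False in config/pp_liteseg.yaml (or running rknn-toolkit2's accuracy_analysis()) may help narrow the problem down to INT8 calibration rather than the deployment code.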

Run prediction with the PaddleSeg Python example code

python3 infer.py --model_file infer_rk3588_quantized.rknn --config_file deploy.yaml --image /opt/images/frame_0000.jpg
# Log
[INFO] fastdeploy/vision/common/processors/transform.cc(93)::FuseNormalizeHWC2CHW       Normalize and HWC2CHW are fused to NormalizeAndPermute  in preprocessing pipeline.
[INFO] fastdeploy/vision/common/processors/transform.cc(159)::FuseNormalizeColorConvert BGR2RGB and NormalizeAndPermute are fused to NormalizeAndPermute with swap_rb=1
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(81)::GetSDKAndDeviceVersion rknpu2 runtime version: 1.5.1b19 (32afb0e92@2023-07-14T12:46:17)
[INFO] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(82)::GetSDKAndDeviceVersion rknpu2 driver version: 0.8.2
index=0, name=x, n_dims=4, dims=[1, 960, 720, 3], n_elems=2073600, size=2073600, fmt=NHWC, type=INT8, qnt_type=AFFINE, zp=-14, scale=0.018658, pass_through=0
index=0, name=bilinear_interp_v2_6.tmp_0, n_dims=4, dims=[1, 2, 960, 720], n_elems=1382400, size=1382400, fmt=NCHW, type=FP32, qnt_type=AFFINE, zp=40, scale=0.086458, pass_through=0
[INFO] fastdeploy/runtime/runtime.cc(367)::CreateRKNPU2Backend  Runtime initialized with Backend::RKNPU2 in Device::RKNPU.
[WARNING] fastdeploy/runtime/backends/rknpu2/rknpu2_backend.cc(420)::InitRKNNTensorMemoryThe input tensor type != model's inputs type.The input_type need INT8,but inputs[0].type is UINT8
SegmentationResult Image masks 10 rows x 10 cols: 
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, .....]
...........
result shape is: [1080 1920]

The visualization image vis_img.png contains no rendered segmentation result.
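
To confirm whether the blank visualization simply reflects an all-background prediction, the raw label map can be inspected on the board (a minimal sketch modeled on the FastDeploy RKNPU2 Python example; it assumes the fastdeploy Python package with RKNPU2 support and that the RKNN model and its deploy.yaml sit in the working directory):

import cv2
import numpy as np
import fastdeploy as fd

option = fd.RuntimeOption()
option.use_rknpu2()                                    # run on the RK3588 NPU

model = fd.vision.segmentation.PaddleSegModel(
    "infer_rk3588_quantized.rknn", "", "deploy.yaml",
    runtime_option=option, model_format=fd.ModelFormat.RKNN)
model.preprocessor.disable_normalize()                 # mean/std are baked into the RKNN model
model.preprocessor.disable_permute()                   # the RKNN model expects NHWC input

im = cv2.imread("/opt/images/frame_0000.jpg")
result = model.predict(im)

mask = np.array(result.label_map, dtype=np.uint8).reshape(result.shape)
print("label histogram:", dict(zip(*np.unique(mask, return_counts=True))))
# If the histogram contains only class 0, the model output (not the visualization) is the problem.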

Link to the model and related files: https://pan.baidu.com/s/1KRXdXKjLg7Ehygftd5SAEQ?pwd=3hb5 (extraction code: 3hb5)

Reproduction environment / Environment

Bug description confirmation

Are you willing to submit a PR?

shiyutang commented 1 year ago

This issue falls under FastDeploy's (FD's) scope; please raise the question in the Paddle or FastDeploy repository instead.