PaddlePaddle / PaddleSeg

Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
https://arxiv.org/abs/2101.06175
Apache License 2.0

HumanSeg-Mobile on Android #279

Closed: InternetMaster1 closed this issue 2 years ago

InternetMaster1 commented 4 years ago

@wuyefeilin

Is there ready demo code to run "humanseg-mobile" on Android?

  1. I checked the Paddle-Lite repo, but I can't seem to find a ready demo there.
  2. I checked the Android Demo docs & the Paddle-Lite-Demo repo, but those only provide a demo based on DeepLab/MobileNet.

I am looking for a demo or Android code for "humanseg-mobile" based on HRNet ("humanseg_mobile_quant").

InternetMaster1 commented 4 years ago

Do we have to convert "humanseg_mobile_quant" to the .nb format?

I am confused.

InternetMaster1 commented 4 years ago

I used the following command format to convert "humanseg_mobile_quant" to a .nb file using the opt tool:

--model_dir= --model_file= --param_file= --optimize_out_type=(protobuf|naive_buffer) --optimize_out= --valid_targets=(arm|opencl|x86|npu|xpu)

But it's giving the following error:

2020-06-01 21:57:59.859 9360-9387/com.baidu.paddle.lite.demo.human_segmentation A/libc: Fatal signal 11 (SIGSEGV), code 1 (SEGV_MAPERR), fault addr 0x0 in tid 9387 (Predictor Worke), pid 9360 (an_segmentation)

What am I doing incorrectly?

Channingss commented 4 years ago

You can develop based on this demo; you need to change the .nb model and customize the preprocess and postprocess:
https://github.com/PaddlePaddle/Paddle-Lite-Demo/tree/master/PaddleLite-android-demo/human_segmentation_demo

Also, please list the opt tool version and command you used.
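
For reference, here is a rough sketch of loading an .nb model and running inference with the Paddle-Lite Java API, roughly as the demo's Predictor.java does it. The class name HumanSegPredictorSketch is made up, and the method names (setModelFromFile, getFloatData, etc.) are from my recollection of the v2.6 Java API, so check them against the prediction library you ship:

import com.baidu.paddle.lite.MobileConfig;
import com.baidu.paddle.lite.PaddlePredictor;
import com.baidu.paddle.lite.Tensor;

public class HumanSegPredictorSketch {
    // Runs one inference on a 1x3x192x192 CHW float buffer and returns the raw output.
    public static float[] run(String nbModelPath, float[] normalizedChwInput) {
        // Point the config at the .nb file produced by the opt tool.
        MobileConfig config = new MobileConfig();
        config.setModelFromFile(nbModelPath);

        PaddlePredictor predictor = PaddlePredictor.createPaddlePredictor(config);

        // humanseg_mobile expects a 1x3x192x192 input (see the model's export yml).
        Tensor input = predictor.getInput(0);
        input.resize(new long[]{1, 3, 192, 192});
        input.setData(normalizedChwInput);

        predictor.run();

        // The shape and dtype of the output depend on which tensor the exported model exposes.
        Tensor output = predictor.getOutput(0);
        return output.getFloatData();
    }
}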

InternetMaster1 commented 4 years ago

@Channingss

  1. The opt version is 2.6.1 (paddlelite opt version: v2.6.1)

  2. Command used for conversion

paddle_lite_opt \
    --model_file=output/quant_offline/__model__ \
    --param_file=output/quant_offline/__params__ \
    --optimize_out_type=naive_buffer \
    --optimize_out=nb_folder/Model_new \
    --valid_targets=arm

(screenshot attached: opt-error)

I referred to this link: https://paddle-lite.readthedocs.io/zh/latest/user_guides/opt/opt_python.html

InternetMaster1 commented 4 years ago

@Channingss

There is one more problem.

I cloned the existing Paddle-Lite-Demo project from the repository, which is portrait segmentation based on DeepLabV3+MobileNetV2.

On most devices the demo worked fine, but I am unable to run human segmentation on the mobile device below:

Device Configuration as below:

Android OS: 5.1
RAM: 3 GB
Device Storage: 32 GB
Device Name: Oppo F1S

06-01 13:25:58.709 27870-27894/? E/art: dlopen("/data/app/com.baidu.paddle.lite.demo.human_segmentation-2/lib/arm64/libpaddle_lite_jni.so", RTLD_LAZY) failed: dlopen failed: cannot locate symbol "__register_atfork" referenced by "libhiai_ir.so"...
06-01 13:25:58.726 27870-27894/? E/AndroidRuntime: FATAL EXCEPTION: Predictor Worker
    Process: com.baidu.paddle.lite.demo.human_segmentation, PID: 27870
    java.lang.UnsatisfiedLinkError: dlopen failed: cannot locate symbol "__register_atfork" referenced by "libhiai_ir.so"...
        at java.lang.Runtime.loadLibrary(Runtime.java:372)
        at java.lang.System.loadLibrary(System.java:988)
        at com.baidu.paddle.lite.PaddleLiteInitializer.init(PaddleLiteInitializer.java:20)
        at com.baidu.paddle.lite.PaddlePredictor.(PaddlePredictor.java:197)
        at com.baidu.paddle.lite.demo.segmentation.Predictor.loadModel(Predictor.java:163)
        at com.baidu.paddle.lite.demo.segmentation.Predictor.init(Predictor.java:51)
        at com.baidu.paddle.lite.demo.segmentation.Predictor.init(Predictor.java:75)
        at com.baidu.paddle.lite.demo.segmentation.MainActivity.onLoadModel(MainActivity.java:144)
        at com.baidu.paddle.lite.demo.segmentation.MainActivity$2.handleMessage(MainActivity.java:114)
        at android.os.Handler.dispatchMessage(Handler.java:111)
        at android.os.Looper.loop(Looper.java:210)

So even if I manage to run humanseg_mobile based on Paddle-Lite-Demo, will it also face the above problem? How do I solve this error too?

InternetMaster1 commented 4 years ago

@Channingss

What do you mean by "custom preprocess and postprocess"?

Can you give a link to the exact steps where these instructions are provided?

Do I have to edit anything in these files: Preprocess.java and Predictor.java?

Thanks

InternetMaster1 commented 4 years ago

@Channingss

I have spent more than two days trying everything, and it's just not working for me. If you could give me the exact steps, I would be most thankful.

Channingss commented 4 years ago

@Channingss

  1. The opt version is 2.6.1 (paddlelite opt version: v2.6.1)
  2. Command used for conversion
paddle_lite_opt \
    --model_file=output/quant_offline/__model__ \
    --param_file=output/quant_offline/__params__ \
    --optimize_out_type=naive_buffer \
    --optimize_out=nb_folder/Model_new \
    --valid_targets=arm

opt-error

I referred to this link: https://paddle-lite.readthedocs.io/zh/latest/user_guides/opt/opt_python.html

Did the opt conversion of the model report any errors? Besides replacing the .nb model file in the demo, you also need to replace the Lite prediction library with the one matching your opt version; see: https://github.com/PaddlePaddle/Paddle-Lite-Demo#%E6%9B%B4%E6%96%B0%E5%88%B0%E6%9C%80%E6%96%B0%E7%9A%84%E9%A2%84%E6%B5%8B%E5%BA%93

InternetMaster1 commented 4 years ago

@Channingss

1) The opt tool didn't give any error; the .nb file was created successfully.

2) I was not aware of how to replace the Lite prediction library. Let me try it as per the link you provided; I will get back to you in an hour or so.

Thanks a ton!

InternetMaster1 commented 4 years ago

@Channingss

I was able to make the library run, and the model is now loading. It is not giving an error.

But it is not generating a mask on the image. Do I need to change anything further in the Preprocess.java or Visualize.java file, since the new humanseg_mobile is based on HRNet?

InternetMaster1 commented 4 years ago

Here are my exact steps:

  1. I generated a model.nb file using opt version 2.6.1
  2. I have put this above generated "model.nb file" in the assets folder
  3. In strings.xml, I changed the input shape default value as below (since humanseg_mobile uses a 192×192 input size): <string name="INPUT_SHAPE_DEFAULT">1,3,192,192</string>
  4. I updated the prediction library as per the steps you gave

Here is the output that I am getting:

Channingss commented 4 years ago

@Channingss

I was able to make the library run, and the model is now loading. It is not giving an error.

But it is not generating a mask on the image. Do I need to change anything further in the Preprocess.java or Visualize.java file, since the new humanseg_mobile is based on HRNet?

You need to normalize your input image with mean and std, in this code: https://github.com/PaddlePaddle/Paddle-Lite-Demo/blob/8b0c65203977615a90c43693ed3931a22311d3fd/PaddleLite-android-demo/human_segmentation_demo/app/src/main/java/com/baidu/paddle/lite/demo/segmentation/preprocess/Preprocess.java#L51
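
For illustration, a minimal sketch of what that per-channel normalization could look like at that spot in Preprocess.java; the helper name putNormalizedPixel is hypothetical, and the MEAN/STD values are placeholders that should come from the model's export .yml (its Normalize transform):

/**
 * A rough sketch: write one pixel's RGB values into the CHW float buffer,
 * normalized as (x - mean) / std, mirroring the demo's existing indexing.
 * MEAN/STD are placeholders; take the real values from the model's export .yml.
 */
static void putNormalizedPixel(float[] inputData, float[] rgb, int[] channelIdx,
                               int[] channelStride, int x, int y, int width) {
    final float[] MEAN = {0.5f, 0.5f, 0.5f};
    final float[] STD  = {0.5f, 0.5f, 0.5f};
    inputData[y * width + x]                    = (rgb[channelIdx[0]] - MEAN[0]) / STD[0];
    inputData[y * width + x + channelStride[0]] = (rgb[channelIdx[1]] - MEAN[1]) / STD[1];
    inputData[y * width + x + channelStride[1]] = (rgb[channelIdx[2]] - MEAN[2]) / STD[2];
}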

InternetMaster1 commented 4 years ago

Dear @Channingss ,

The current code at L51 is

                inputData[y * width + x] = rgb[channelIdx[0]];
                inputData[y * width + x + channelStride[0]] = rgb[channelIdx[1]];
                inputData[y * width + x + channelStride[1]] = rgb[channelIdx[2]];

What should I change this to based on humanseg_mobile hrnet?

I am not too well-versed with Android; could you please guide me? How do I normalize the input image with mean and std?

InternetMaster1 commented 4 years ago

Dear @Channingss

I tried to take reference from the "face_detection_demo" library, which has some normalisation-related code.

https://github.com/PaddlePaddle/Paddle-Lite-Demo/blob/c1470c9179b0da34943b86e9bdfcdb812a01ce52/PaddleLite-android-demo/face_detection_demo/app/src/main/java/com/baidu/paddle/lite/demo/face_detection/preprocess/Preprocess.java#L49

I tried a lot of combinations but nothing worked.

I would truly appreciate it if you could provide the exact code snippet that we have to add to the file, along with the values, to make the humanseg_mobile Android demo functional.

Thanks!

Channingss commented 4 years ago

(rgb[channelIdx[0]] - mean) / std

InternetMaster1 commented 4 years ago

@Channingss

I am still not getting the output.

I changed the code at L51 to the following

Old Code

inputData[y * width + x] = rgb[channelIdx[0]] ;
inputData[y * width + x + channelStride[0]] = rgb[channelIdx[1]] ;
inputData[y * width + x + channelStride[1]] = rgb[channelIdx[2]];

New Code

inputData[y * width + x] = (rgb[channelIdx[0]]-0.5f)/0.5f ;
inputData[y * width + x + channelStride[0]] = (rgb[channelIdx[1]]-0.5f)/0.5f ;
inputData[y * width + x + channelStride[1]] = (rgb[channelIdx[2]]-0.5f)/0.5f;

I took the mean and std values as 0.5f, based on humanseg_model_ckpt's .yml file:

Model: HumanSegMobile
_Attributes:
  eval_metric: {}
  labels: []
  num_classes: 2
_ModelInputsOutputs:
  test_inputs:
  - - image
    - image
  test_outputs:
  - - pred
    - unsqueeze2_0.tmp_0
  - - logit
    - softmax_0.tmp_0
_init_params:
  class_weight: null
  ignore_index: 255
  num_classes: 2
  stage1_num_blocks:
  - 1
  stage1_num_channels:
  - 32
  stage1_num_modules: 1
  stage2_num_blocks:
  - 2
  - 2
  stage2_num_channels:
  - 16
  - 32
  stage2_num_modules: 1
  stage3_num_blocks:
  - 2
  - 2
  - 2
  stage3_num_channels:
  - 16
  - 32
  - 64
  stage3_num_modules: 1
  stage4_num_blocks:
  - 2
  - 2
  - 2
  - 2
  stage4_num_channels:
  - 16
  - 32
  - 64
  - 128
  stage4_num_modules: 1
  sync_bn: true
  use_bce_loss: false
  use_dice_loss: false
status: Quant
test_transforms:
- Resize:
    interp: LINEAR
    target_size:
    - 192
    - 192
- Normalize:
    mean:
    - 0.5
    - 0.5
    - 0.5
    std:
    - 0.5
    - 0.5
    - 0.5

InternetMaster1 commented 4 years ago

Dear @Channingss

Would image normalisation solve the below problem?

The data type of the out tensor of the human_segmentation_demo model is int64. The Java API does not support int64 yet. It will be fixed in the next version of paddlelite.

Originally posted by @zhupengyang in https://github.com/PaddlePaddle/Paddle-Lite-Demo/issues/66#issuecomment-638620745

InternetMaster1 commented 4 years ago

@Channingss

This is the output that I am getting; it is just a thin line of 1 × 192:

(screenshot attached: paddlelite-humansegmobile-output)

I noticed one more thing.

Original Demo (based on DeepLabV3/MobileNet)

In the case of the existing Android demo of human segmentation based on DeepLabV3/MobileNet, for the code in Visualize.java#L33:

It goes into this branch and the output comes out properly.

if (outputShape.length == 3) {
    outputImage = Bitmap.createBitmap(objectColor, (int) outputShape[2], (int) outputShape[1], config);
    outputImage = Bitmap.createScaledBitmap(outputImage, inputImage.getWidth(), inputImage.getHeight(), true);
}

These are the values:

outputShape[0] = 1, outputShape[1] = 513, outputShape[2] = 513

Modified Code (based on humanseg_mobile / HRNet)

After replacing the model.nb file and making the image normalisation changes (as per my earlier message):

It goes into this branch and the output comes out incorrect.

if (outputShape.length == 4) {
    outputImage = Bitmap.createBitmap(objectColor, (int) outputShape[3], (int) outputShape[2], config);
}

These are the values:

outputShape[0] = 1, outputShape[1] = 192, outputShape[2] = 192, outputShape[3] = 1

Please find the .nb file attached: humanseg_mobile_quant.nb.zip

(Renamed to model.nb and kept in the assets folder.)

InternetMaster1 commented 4 years ago

@Channingss

Based on the above attached model.nb file, are you able to run the Android demo for humanseg_mobile on your side?

Channingss commented 4 years ago

(rgb[channelIdx[0]] / 255 - 0.5f) / 0.5f

InternetMaster1 commented 4 years ago

@Channingss

It is still not working for us.

Old Code:

inputData[y * width + x] = rgb[channelIdx[0]] ;
inputData[y * width + x + channelStride[0]] = rgb[channelIdx[1]] ;
inputData[y * width + x + channelStride[1]] = rgb[channelIdx[2]];

New Code (Trial 1):

inputData[y * width + x] =(rgb[channelIdx[0]]/255-0.5f)/0.5f;
inputData[y * width + x + channelStride[0]] = rgb[channelIdx[1]];
inputData[y * width + x + channelStride[1]] = rgb[channelIdx[2]];

New Code (Trial 2):

inputData[y * width + x] =(rgb[channelIdx[0]]/255-0.5f)/0.5f ;
inputData[y * width + x + channelStride[0]] = (rgb[channelIdx[1]] /255-0.5f)/0.5f ;
inputData[y * width + x + channelStride[1]] = (rgb[channelIdx[2]] /255-0.5f)/0.5f;

What else could I be missing?
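
One pitfall worth ruling out here, sketched under the assumption that rgb[] holds raw 0–255 channel values: if those values are ints, dividing by the integer literal 255 performs integer division in Java and collapses every channel to 0 or 1 before the subtraction. A float divisor avoids that; the helper name toModelRange is hypothetical:

// Hypothetical helper: (x / 255 - 0.5) / 0.5 with an explicit float divisor,
// so the division cannot silently truncate if the incoming value is an int.
static float toModelRange(float raw0to255) {
    return (raw0to255 / 255.f - 0.5f) / 0.5f;   // maps 0 -> -1.0 and 255 -> 1.0
}

// Applied to all three channels, as in Trial 2 above:
// inputData[y * width + x]                    = toModelRange(rgb[channelIdx[0]]);
// inputData[y * width + x + channelStride[0]] = toModelRange(rgb[channelIdx[1]]);
// inputData[y * width + x + channelStride[1]] = toModelRange(rgb[channelIdx[2]]);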

InternetMaster1 commented 4 years ago

For updating the prediction library, I copied the pre-compiled files from this page: https://paddle-lite.readthedocs.io/zh/latest/user_guides/release_lib.html#android-toolchain-gcc

Arch  | with_extra | arm_stl    | with_cv | download
armv7 | ON         | c++_static | ON      | release/v2.6.1
armv8 | ON         | c++_static | ON      | release/v2.6.1

From the above two, I took these files. Is that correct?

inference_lite_lib.android.armv7\java\so\libpaddle_lite_jni.so
inference_lite_lib.android.armv8\java\so\libpaddle_lite_jni.so

I also took this file: inference_lite_lib.android.armv8\java\jar\PaddlePredictor.jar

Or do I need to compile the files myself?

InternetMaster1 commented 4 years ago

@Channingss

Is it possible for you to clone the current human segmentation Android demo, make the required changes as per humanseg_mobile, and provide it to me?

You are the expert in this domain. I am just scrambling.

It would be useful to the other users of PaddlePaddle too.

Channingss commented 4 years ago

@InternetMaster1 OK, I will handle it; give me some time.

InternetMaster1 commented 4 years ago

@Channingss

Thanks a lot

Please check this too https://github.com/PaddlePaddle/Paddle-Lite-Demo/issues/66#issuecomment-639290420

InternetMaster1 commented 4 years ago

@Channingss

Any updates? I would really like to use this amazing library on our Android app...

InternetMaster1 commented 4 years ago

Dear @Channingss,

Any timeline for the Android demo for humanseg_mobile?

Just to update you, I have even been trying to solve the error with @zhupengyang in this issue https://github.com/PaddlePaddle/Paddle-Lite-Demo/issues/66#issuecomment-639290420

We also tried recompiling the .so and lib files with additional commits from the develop branch of Paddle-Lite, and made additional changes to the demo code in Paddle-Lite-Demo, but we still haven't been successful.

@ZeyuChen

InternetMaster1 commented 4 years ago

@Channingss

Any updates? It's been two weeks now....

Channingss commented 4 years ago

@InternetMaster1 Hi, you need to modify this code:

if (outputShape.length == 4) {
    outputImage = Bitmap.createBitmap(objectColor, (int) outputShape[3], (int) outputShape[2], config);
}

to:

if (outputShape.length == 4) {
    outputImage = Bitmap.createBitmap(objectColor, (int) outputShape[2], (int) outputShape[1], config);
}
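
For context: the output shape reported earlier in this thread was 1, 192, 192, 1, so height and width sit at indexes 1 and 2, which is what the change above corrects. Below is a minimal sketch of that branch with the 192×192 mask also scaled back up to the input image size, the way the length-3 branch quoted earlier does it; whether that extra scaling is actually needed depends on how the rest of Visualize.java consumes outputImage:

// outputShape for this model is [1, 192, 192, 1]: index 1 is height, index 2 is width.
if (outputShape.length == 4) {
    outputImage = Bitmap.createBitmap(objectColor,
            (int) outputShape[2],   // width  = 192
            (int) outputShape[1],   // height = 192
            config);
    // Scale the small mask back up to the original image size before blending,
    // mirroring what the length-3 branch does.
    outputImage = Bitmap.createScaledBitmap(outputImage,
            inputImage.getWidth(), inputImage.getHeight(), true);
}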