PaddlePaddle / PaddleHub

Awesome pre-trained models toolkit based on PaddlePaddle. (400+ models including Image, Text, Audio, Video and Cross-Modal with Easy Inference & Serving)
https://www.paddlepaddle.org.cn/hub
Apache License 2.0

Starting a cat/dog image classification service: deployment succeeds but the POST result is abnormal? #1225

Open suntao2015005848 opened 3 years ago

suntao2015005848 commented 3 years ago

Python environment: 3.7

Input parameter: {"data": ["/home/test/image/test_img_dog.jpg"]}

Error returned: { "msg": "Module image_module is not available.", "results": "", "status": "-1" }

Prediction code:

    @serving
    def predict(self, data):
        label_map = self.dataset.label_dict()
        index = 0
        # get classification result
        run_states = self.task.predict(data=data)
        results = [run_state.run_results for run_state in run_states]
        prediction = []
        for batch_result in results:
            # get predict index
            batch_result = np.argmax(batch_result, axis=2)[0]
            for result in batch_result:
                index += 1
                result = label_map[result]
                pStr = ("input %i is %s, and the predict result is %s" %
                        (index, data[index - 1], result))
                prediction.append(pStr)

        return prediction
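
For reference, a minimal sketch of the POST request described above; the host below is a placeholder, and the port and route are taken from the serving log later in this thread:

    # A minimal sketch of the POST request described above. The host is a
    # placeholder; the port and route follow the serving log later in this thread.
    import json
    import requests

    url = "http://127.0.0.1:8871/predict/image_module"
    headers = {"Content-Type": "application/json"}
    payload = {"data": ["/home/test/image/test_img_dog.jpg"]}

    r = requests.post(url=url, headers=headers, data=json.dumps(payload))
    # Currently this returns {"msg": "Module image_module is not available.", ...}
    print(r.json())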

PS: When I ran the prediction model directly, its main function did produce results.

suntao2015005848 commented 3 years ago

I converted the image to base64 and passed it in, but the same error still occurs.

    @serving
    def predict(self, data):
        label_map = self.dataset.label_dict()
        index = 0
        # get classification result

        # the image is passed in as a base64 string
        img_data = base64.b64decode(data)
        # Note: if the prefix is "data:image/jpg;base64,", save the file as png;
        # if it is "data:image/png;base64,", save it as jpg.
        with open('002x.png', 'wb') as f1:
            f1.write(img_data)
        test_data = ["002x.png"]
        run_states = self.task.predict(data=test_data)
        results = [run_state.run_results for run_state in run_states]
        prediction = []
        for batch_result in results:
            # get predict index
            batch_result = np.argmax(batch_result, axis=2)[0]
            for result in batch_result:
                index += 1
                result = label_map[result]
                pStr = ("input %i is %s, and the predict result is %s" %
                        (index, test_data[index - 1], result))
                prediction.append(pStr)

        return prediction
ShenYuhan commented 3 years ago

This error means the requested module does not exist. Please paste the log that is printed after you start serving.

suntao2015005848 commented 3 years ago

1. hub list shows that the module image_module exists:

    +---------------------------+----------------------------------------------------+
    |         ModuleName        |                        Path                        |
    +---------------------------+----------------------------------------------------+
    |        image_module       |       /root/.paddlehub/modules/image_module        |
    +---------------------------+----------------------------------------------------+

2. Run hub serving start --modules image_module --port 8871 --use_gpu:

    /usr/local/python3.7/lib/python3.7/site-packages/pandas/compat/__init__.py:97: UserWarning: Could not import the lzma module. Your installed Python is incomplete. Attempting to use lzma compression will result in a RuntimeError.
      warnings.warn(msg)
    /usr/local/python3.7/lib/python3.7/site-packages/paddle/fluid/executor.py:811: UserWarning: There are no operators in the program to be executed. If you pass Program manually, please use fluid.program_guard to ensure the current Program is being used.
      warnings.warn(error_info)
    W0203 15:54:15.424532 1422 device_context.cc:237] Please NOTE: device: 0, CUDA Capability: 61, Driver API Version: 11.2, Runtime API Version: 9.0
    W0203 15:54:15.428323 1422 device_context.cc:245] device: 0, cuDNN Version: 7.6.
    image_module == 1.0.0

3. Run the script.

4. Output printed after the request:

    Debug mode: off
    2021-02-03 15:54:16,451-INFO: Running on http://0.0.0.0:8871/ (Press CTRL+C to quit)
    2021-02-03 16:00:36,351-INFO: 192.168.22.178 - - [03/Feb/2021 16:00:36] "POST /predict/image_module HTTP/1.1" 200 -
    2021-02-03 16:00:38,272-INFO: 192.168.22.178 - - [03/Feb/2021 16:00:38] "POST /predict/image_module HTTP/1.1" 200 -

And the returned result:

    { "msg": "Module image_module is not available.", "results": "", "status": "-1" }

5. The full code:

    # -*- coding: utf8 -*-
    import base64

    from paddlehub.module.module import moduleinfo, serving
    import numpy as np
    import paddlehub as hub
    import os


    @moduleinfo(
        name="image_module",
        version="1.0.0",
        summary="ImageClassification which was fine-tuned on the DogCat dataset.",
        author="suntao",
        author_email="",
        type="ImageClassification/mobilenet_v2_imagenet")
    class ImageCollages(hub.Module):

        def _initialize(self):
            module = hub.Module(name="mobilenet_v2_imagenet")
            self.dataset = hub.dataset.DogCat()
            data_reader = hub.reader.ImageClassificationReader(
                image_width=module.get_expected_image_width(),
                image_height=module.get_expected_image_height(),
                images_mean=module.get_pretrained_images_mean(),
                images_std=module.get_pretrained_images_std(),
                dataset=self.dataset)

            config = hub.RunConfig(
                use_cuda=True,
                num_epoch=1,
                checkpoint_dir="image_demo",
                batch_size=64,
                eval_interval=50,
                strategy=hub.finetune.strategy.DefaultFinetuneStrategy())

            input_dict, output_dict, program = module.context(trainable=True)
            img = input_dict["image"]
            feature_map = output_dict["feature_map"]
            feed_list = [img.name]

            self.task = hub.ImageClassifierTask(
                data_reader=data_reader,
                feed_list=feed_list,
                feature=feature_map,
                num_classes=self.dataset.num_labels,
                config=config)

            self.task.finetune_and_eval()

        @serving
        def predict(self, data):
            label_map = self.dataset.label_dict()
            index = 0
            # get classification result

            # the image is passed in as a base64 string
            img_data = base64.b64decode(data)
            # Note: if the prefix is "data:image/jpg;base64,", save the file as png;
            # if it is "data:image/png;base64,", save it as jpg.
            with open('002x.png', 'wb') as f1:
                f1.write(img_data)
            test_data = ["002x.png"]
            run_states = self.task.predict(data=test_data)
            results = [run_state.run_results for run_state in run_states]
            prediction = []
            for batch_result in results:
                # get predict index
                batch_result = np.argmax(batch_result, axis=2)[0]
                for result in batch_result:
                    index += 1
                    result = label_map[result]
                    pStr = ("input %i is %s, and the predict result is %s" %
                            (index, test_data[index - 1], result))
                    prediction.append(pStr)
            return prediction

The main function does produce results when running the pre-trained model:

    if __name__ == "__main__":
        ernie_tiny = ImageCollages()
        predictions = ernie_tiny.predict(data=data)
        print(predictions)

6. The following script also produces results; it is only when the service is started that the error appears:

    import paddlehub as hub

    cat_dog = hub.Module(name="image_module")
    data = 'the base64 string goes here'
    res = cat_dog.predict(data)
    print(res)

suntao2015005848 commented 3 years ago

This is the Python script; it reports the same error.

    import requests
    import json

    if __name__ == "__main__":
        # Specify the input for prediction and build a dict {"text": [text_1, text_2, ... ]}
        # The key names the parameter passed to the predict method; in this example it is "data"
        # For local deployment the equivalent is lac.analysis_lexical(texts=[text1, text2])
        data = 'base64'
        # Specify the prediction method and send the POST request
        url = "http://xxxxxx:8871/predict/image_module"
        # Set the POST request headers to application/json
        headers = {"Content-Type": "application/json"}

        r = requests.post(url=url, headers=headers, data=json.dumps(data))

        # Print the prediction result
        print(json.dumps(r.json(), indent=4, ensure_ascii=False))
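
For completeness, a sketch of how the base64 payload could be produced from an image file before sending it; the file path and host below are placeholders, not values from the original post:

    # A sketch of building the base64 payload from an image file before POSTing it.
    # The file path and host are placeholders, not values from the original post.
    import base64
    import json
    import requests

    with open("test_img_dog.jpg", "rb") as f:
        img_b64 = base64.b64encode(f.read()).decode("utf-8")

    url = "http://127.0.0.1:8871/predict/image_module"
    headers = {"Content-Type": "application/json"}
    r = requests.post(url=url, headers=headers, data=json.dumps(img_b64))
    print(json.dumps(r.json(), indent=4, ensure_ascii=False))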
ShenYuhan commented 3 years ago

Could you share the module so we can reproduce this on our side?

Cy6er7um commented 3 years ago

Please provide the module.

suntao2015005848 commented 3 years ago

https://github.com/suntao2015005848/imageModel.git

suntao2015005848 commented 3 years ago

I have pushed the project to GitHub.

suntao2015005848 commented 3 years ago

Has the issue been reproduced?

suntao2015005848 commented 3 years ago

@ShenYuhan

ShenYuhan commented 3 years ago

Which versions of paddlepaddle and paddlehub are you using?

suntao2015005848 commented 3 years ago

paddlehub 1.6.2
paddlepaddle-gpu 1.7.2.post97 @ShenYuhan

ShenYuhan commented 3 years ago

With this version of PaddleHub, after serving is started you can open a local page in the browser to see which models are available. Could you open the browser and check whether your module shows up? The address is 0.0.0.0:8866.
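
If no browser is available on the server, an equivalent headless check might look like the following sketch; it assumes the default address mentioned above:

    # Headless check of the serving index page mentioned above. Assumes the default
    # address 0.0.0.0:8866; use port 8871 if that is where serving was started.
    import requests

    resp = requests.get("http://127.0.0.1:8866/")
    print(resp.status_code)
    print(resp.text[:500])  # the page should list the modules being served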

suntao2015005848 commented 3 years ago

After starting serving I cannot see the module in the browser. The same goes for other models: I tried starting lac, and the browser shows no related information either, yet the lac model is usable. My module is deployed on a centos-release-7-9.2009.1.el7.centos.x86_64 server, and I view the page in the browser via ip+port. @ShenYuhan

suntao2015005848 commented 3 years ago

Added, thanks for the trouble.

suntao2015005848 commented 3 years ago

With the maintainer's guidance the problem has been solved: the root cause was an incorrect type parameter in @moduleinfo in module.py. The type should be of the form nlp/xxx or cv/xxx. After changing type to cv/classification and restarting the module, deployment succeeded and the error no longer appears.
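
For reference, the fix amounts to changing only the type field of the decorator shown earlier in the thread; a sketch, with the rest of the class unchanged:

    from paddlehub.module.module import moduleinfo
    import paddlehub as hub

    # The fix described above: the type must follow the "nlp/xxx" or "cv/xxx" form.
    @moduleinfo(
        name="image_module",
        version="1.0.0",
        summary="ImageClassification which was fine-tuned on the DogCat dataset.",
        author="suntao",
        author_email="",
        type="cv/classification")  # was "ImageClassification/mobilenet_v2_imagenet"
    class ImageCollages(hub.Module):
        pass  # the rest of the module body stays as in the full code posted above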