PaddlePaddle / Serving

A flexible, high-performance carrier for machine learning models (PaddlePaddle's framework for serving-based model deployment)
Apache License 2.0

uci_housing_client test reports an error #1173

Closed · bluestinger closed this issue 7 months ago

bluestinger commented 3 years ago

python3.6 test_client.py uci_housing_client/serving_client_conf.prototxt
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0425 03:51:49.812734 3687 general_model.cpp:73] feed var num: 1fetch_var_num: 1
I0425 03:51:49.812860 3687 general_model.cpp:77] feed alias name: x index: 0
I0425 03:51:49.812870 3687 general_model.cpp:80] feed[0] shape:
I0425 03:51:49.812887 3687 general_model.cpp:84] shape[0]: 13
I0425 03:51:49.812906 3687 general_model.cpp:87] feed[0] feed type: 1
I0425 03:51:49.812930 3687 general_model.cpp:95] fetch [0] alias name: price
I0425 03:51:49.813046 3687 general_model.cpp:55] Init commandline: dummy test_client.py --tryfromenv=profile_client,profile_server,max_body_size
I0425 03:51:49.813273 3687 predictor_sdk.cpp:34] M default??? d(?????????0:pooled Defaultla" baidu_std ? general_model@baidu.paddle_serving.predictor.general_model.GeneralModelServiceWeightedRandomRender" 100*6 efault_tag_140558042823816list://127.0.0.1:9393
I0425 03:51:49.813410 3687 predictor_sdk.cpp:28] Succ register all components!
I0425 03:51:49.813446 3687 config_manager.cpp:217] Not found key in configue: cluster
I0425 03:51:49.813465 3687 config_manager.cpp:234] Not found key in configue: split_tag_name
I0425 03:51:49.813484 3687 config_manager.cpp:235] Not found key in configue: tag_candidates
I0425 03:51:49.813504 3687 config_manager.cpp:263] split info not set, skip...
I0425 03:51:49.813534 3687 abtest.cpp:55] Succ read weights list: 100, count: 1, normalized: 100
I0425 03:51:49.813549 3687 config_manager.cpp:202] Not found key in configue: connect_timeout_ms
I0425 03:51:49.813560 3687 config_manager.cpp:203] Not found key in configue: rpc_timeout_ms
I0425 03:51:49.813578 3687 config_manager.cpp:205] Not found key in configue: hedge_request_timeout_ms
I0425 03:51:49.813591 3687 config_manager.cpp:207] Not found key in configue: connect_retry_count
I0425 03:51:49.813604 3687 config_manager.cpp:209] Not found key in configue: hedge_fetch_retry_count
I0425 03:51:49.813614 3687 config_manager.cpp:211] Not found key in configue: max_connection_per_host
I0425 03:51:49.813629 3687 config_manager.cpp:212] Not found key in configue: connection_type
I0425 03:51:49.813642 3687 config_manager.cpp:219] Not found key in configue: load_balance_strategy
I0425 03:51:49.813652 3687 config_manager.cpp:221] Not found key in configue: cluster_filter_strategy
I0425 03:51:49.813663 3687 config_manager.cpp:226] Not found key in configue: protocol
I0425 03:51:49.813673 3687 config_manager.cpp:227] Not found key in configue: compress_type
I0425 03:51:49.813694 3687 config_manager.cpp:228] Not found key in configue: package_size
I0425 03:51:49.813701 3687 config_manager.cpp:230] Not found key in configue: max_channel_per_request
I0425 03:51:49.813722 3687 config_manager.cpp:234] Not found key in configue: split_tag_name
I0425 03:51:49.813733 3687 config_manager.cpp:235] Not found key in configue: tag_candidates
I0425 03:51:49.813750 3687 config_manager.cpp:263] split info not set, skip...
I0425 03:51:49.813768 3687 config_manager.cpp:186] Succ load one endpoint, name: general_model, count of variants: 1.
I0425 03:51:49.813784 3687 config_manager.cpp:85] Success reload endpoint config file, id: 1
I0425 03:51:49.818302 3687 naming_service_thread.cpp:209] brpc::policy::ListNamingService("127.0.0.1:9393"): added 1
I0425 03:51:49.818620 3687 stub_impl.hpp:376] Succ create parallel channel, count: 3
I0425 03:51:49.818637 3687 stub_impl.hpp:42] Create stub without tag, ep general_model
I0425 03:51:49.819711 3687 variant.cpp:69] Succ create default debug
I0425 03:51:49.819731 3687 endpoint.cpp:38] Succ create variant: 0, endpoint:general_model
I0425 03:51:49.819742 3687 predictor_sdk.cpp:69] Succ create endpoint instance with name: general_model
grep: warning: GREP_OPTIONS is deprecated; please use an alias or script
WARNING: Logging before InitGoogleLogging() is written to STDERR
I0425 03:51:50.029362 3687 dynamic_loader.cc:128] Set paddle lib path : /usr/local/lib/python3.6/site-packages/paddle/libs
I0425 03:51:51.851876 3687 init.cc:85] Before Parse: argc is 2, Init commandline: dummy --tryfromenv=check_nan_inf,fast_check_nan_inf,benchmark,eager_delete_scope,fraction_of_cpu_memory_to_use,initial_cpu_memory_in_mb,init_allocated_mem,paddle_num_threads,dist_threadpool_size,eager_delete_tensor_gb,fast_eager_deletion_mode,memory_fraction_of_eager_deletion,allocator_strategy,reader_queue_speed_test_mode,print_sub_graph_dir,pe_profile_fname,inner_op_parallelism,enable_parallel_graph,fuse_parameter_groups_size,multiple_of_cupti_buffer_size,fuse_parameter_memory_size,tracer_profile_fname,dygraph_debug,use_system_allocator,enable_unused_var_check,free_idle_chunk,free_when_no_cache_hit,call_stack_level,sort_sum_gradient,max_inplace_grad_add,use_pinned_memory,cpu_deterministic,use_mkldnn,tracer_mkldnn_ops_on,tracer_mkldnn_ops_off,fraction_of_gpu_memory_to_use,initial_gpu_memory_in_mb,reallocate_gpu_memory_in_mb,cudnn_deterministic,enable_cublas_tensor_op_math,conv_workspace_size_limit,cudnn_exhaustive_search,selected_gpus,sync_nccl_allreduce,cudnn_batchnorm_spatial_persistent,gpu_allocator_retry_time,local_exe_sub_scope_limit,gpu_memory_limit_mb
I0425 03:51:51.852031 3687 init.cc:92] After Parse: argc is 1
data (13,)
data (1, 13)
I0425 03:51:51.937012 3687 general_model.cpp:154] batch size: 1
I0425 03:51:51.937045 3687 stub_impl.hpp:149] Succ thread initialize stub impl!
I0425 03:51:51.937055 3687 endpoint.cpp:53] Succ thrd initialize all vars: 1
I0425 03:51:51.937067 3687 predictor_sdk.cpp:129] Succ thrd initialize endpoint:general_model
I0425 03:51:51.937382 3687 general_model.cpp:165] fetch general model predictor done.
I0425 03:51:51.937393 3687 general_model.cpp:166] float feed name size: 1
I0425 03:51:51.937431 3687 general_model.cpp:167] int feed name size: 0
I0425 03:51:51.937441 3687 general_model.cpp:168] max body size : 536870912
I0425 03:51:51.937449 3687 general_model.cpp:176] prepare batch 0
I0425 03:51:51.937465 3687 general_model.cpp:189] batch [0] int_feed_name and float_feed_name prepared
I0425 03:51:51.937479 3687 general_model.cpp:193] tensor_vec size 1 float shape 1
I0425 03:51:51.937496 3687 general_model.cpp:198] prepare float feed x shape size 2
I0425 03:51:51.937513 3687 general_model.cpp:253] batch [0] float feed value prepared
I0425 03:51:51.937539 3687 general_model.cpp:339] batch [0] int feed value prepared
W0425 03:51:51.940088 3737 redis_protocol.cpp:69] No corresponding PipelinedInfo in socket
E0425 03:51:51.940167 3737 input_messenger.cpp:113] A message from 127.0.0.1:9393(protocol=esp) is bigger than 536870912 bytes, the connection will be closed. Set max_body_size to allow bigger messages
W0425 03:51:51.940189 3737 input_messenger.cpp:276] Close fd=11 SocketId=1@127.0.0.1:9393@46672: too big data
W0425 03:51:51.940340 3687 predictor.hpp:129] inference call failed, message: [E22]1/1 channels failed, fail_limit=1 [C0][E22]Close fd=11 SocketId=1@127.0.0.1:9393@46672: too big data
E0425 03:51:51.940464 3687 general_model.cpp:361] failed call predictor with req: insts { tensor_array { float_data: 0.38269556 float_data: -0.11363637 float_data: 0.25525004 float_data: -0.069169961 float_data: 0.25577149 float_data: -0.015833376 float_data: 0.10427496 float_data: -0.17569885 float_data: 0.62828666 float_data: 0.49191383 float_data: 0.18558154 float_data: -0.851919 float_data: 0.051515914 elem_type: 1 shape: 1 shape: 13 } } fetch_var_names: "price" log_id: 0
Traceback (most recent call last):
  File "test_client.py", line 37, in <module>
    print('ok', fetch_map["price"])
TypeError: 'NoneType' object is not subscriptable
I0425 03:51:51.940901 3687 mmap_allocator.cc:119] PID: 3687, MemoryMapFdSet: set size - 0
I0425 03:51:52.038947 3687 mmap_allocator.cc:119] PID: 3687, MemoryMapFdSet: set size - 0
I0425 03:51:52.040920 3736 socket.cpp:2370] Checking SocketId=0@127.0.0.1:9393


bluestinger commented 3 years ago

Accessing the service with a curl command does return the prediction result.
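For context, that working HTTP path would look roughly like the request below. This is a sketch, not taken from the thread: it assumes the web service was started with --name uci on port 9393 (matching the script posted later in this thread), so the endpoint is /uci/prediction, and the 13 feature values are copied from the failing request in the log above.

import requests

# Request body in the WebService's feed/fetch JSON format.
data = {
    "feed": [{"x": [0.38269556, -0.11363637, 0.25525004, -0.069169961,
                    0.25577149, -0.015833376, 0.10427496, -0.17569885,
                    0.62828666, 0.49191383, 0.18558154, -0.851919,
                    0.051515914]}],
    "fetch": ["price"],
}
# The WebService exposes http://<host>:<port>/<name>/prediction.
resp = requests.post("http://127.0.0.1:9393/uci/prediction", json=data)
print(resp.json())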

HexToString commented 3 years ago

Hello. My guess is that you started the server with --name uci. When the --name parameter is passed, what gets started is a WebService, but the test_client code connects over RPC, which requires an RPC server. Please use the corresponding startup method from the README. The next release will improve this behavior.
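For reference, the README's plain RPC startup (no --name, no web layer) is a single command; a sketch, assuming the quickstart's uci_housing_model directory as used in this thread:

python -m paddle_serving_server.serve --model uci_housing_model --port 9393

Started this way, test_client.py can connect to 127.0.0.1:9393 over RPC directly.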

bluestinger commented 3 years ago

Yes, --name was passed, and both the WebService and the RPC service were started.

bluestinger commented 3 years ago

from paddle_serving_server.web_service import WebService
import numpy as np

class UciService(WebService):
    def preprocess(self, feed=[], fetch=[]):
        feed_batch = []
        is_batch = True
        # Pack each request instance's 13 features into one batched float32 array.
        new_data = np.zeros((len(feed), 1, 13)).astype("float32")
        for i, ins in enumerate(feed):
            nums = np.array(ins["x"]).reshape(1, 1, 13)
            new_data[i] = nums
        feed = {"x": new_data}
        return feed, fetch, is_batch

uci_service = UciService(name="uci")
uci_service.load_model_config("uci_housing_model")
uci_service.prepare_server(workdir="workdir", port=9393)
uci_service.run_rpc_service()
uci_service.run_web_service()

HexToString commented 3 years ago

When you start a WebService with --name, the RPC port is not the port you set: it is 12000, and if 12000 is occupied it falls back to 12001, and so on. The port you set belongs to the web service. When the web service receives your HTTP request, it automatically forwards it to the background RPC service described above, which performs the actual prediction; in effect, the web service is just doing forwarding. We will improve this design in the next release to make it clearer.
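Putting this together, an RPC client for a --name style deployment should target the internal RPC port rather than the web port. A minimal sketch, assuming the default internal port 12000 described above and the client config path from the original command; the feature vector is the one from the failing request in the log:

from paddle_serving_client import Client
import numpy as np

client = Client()
client.load_client_config("uci_housing_client/serving_client_conf.prototxt")
# Connect to the internal RPC port (12000 when the server was started
# via --name), not the web port 9393, which only speaks HTTP.
client.connect(["127.0.0.1:12000"])

x = np.array([0.38269556, -0.11363637, 0.25525004, -0.069169961,
              0.25577149, -0.015833376, 0.10427496, -0.17569885,
              0.62828666, 0.49191383, 0.18558154, -0.851919,
              0.051515914], dtype="float32").reshape(1, 13)
fetch_map = client.predict(feed={"x": x}, fetch=["price"])
print(fetch_map["price"])

This also suggests an explanation for the "too big data" error in the log above: the RPC client was pointed at the HTTP port, so brpc tried to parse an HTTP reply as an RPC frame.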