ShannonAI / service-streamer

Boosting your Web Services of Deep Learning Applications.
Apache License 2.0

Service hangs after timeouts during wrk load testing #73

Closed rubby33 closed 4 years ago

rubby33 commented 4 years ago

Could anyone help take a look? I have already browsed the other issues. @Meteorix could you take a look when you have time?

Problem description:

While load testing the service_streamer-wrapped service with wrk, the service hangs after several timeouts occur. The error output from one such timeout is as follows:

[2020-06-16 15:35:14,884] ERROR in app: Exception on /sentence_type2 [GET]
Traceback (most recent call last):
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
response = self.full_dispatch_request()
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
rv = self.dispatch_request()
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "service_classification_stream.py", line 69, in predict_sentence_type2
labels = streamer_mid.predict([sentence])
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/service_streamer/service_streamer.py", line 132, in predict
ret = self._output(task_id)
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/service_streamer/service_streamer.py", line 122, in _output
batch_result = future.result(WORKER_TIMEOUT)
File "/data/jiangwei/anaconda3/envs/py3.7/lib/python3.7/site-packages/service_streamer/service_streamer.py", line 41, in result
raise TimeoutError("Task: %d Timeout" % self._id)
TimeoutError: Task: 105 Timeout
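
For context, here is a minimal sketch of the kind of service under test, reconstructed from the traceback. The actual service_classification_stream.py is not shown in this issue, so the model class, batch parameters, and response format below are assumptions; only the route name, the streamer_mid.predict([sentence]) call, the monkey patching, and port 5005 come from the thread.

```python
# Sketch only: a reconstruction of the service described in this issue.
from gevent import monkey
monkey.patch_all()  # present in the reporter's service; see the workaround below

from flask import Flask, request, jsonify
from service_streamer import ThreadedStreamer

app = Flask(__name__)

class SentenceTypeModel:
    """Hypothetical stand-in for the reporter's BERT sentence classifier."""
    def predict(self, batch):
        # service_streamer expects predict(batch) -> list of results, same length.
        return ["statement" for _ in batch]

model = SentenceTypeModel()
# Illustrative values: batch up to 64 requests, wait at most 0.1s to fill a batch.
streamer_mid = ThreadedStreamer(model.predict, batch_size=64, max_latency=0.1)

@app.route("/sentence_type2", methods=["GET"])
def predict_sentence_type2():
    sentence = request.args.get("sen", "")
    labels = streamer_mid.predict([sentence])  # the call shown in the traceback
    return jsonify(labels)

if __name__ == "__main__":
    app.run(port=5005)
```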

wrk command:

(py3.7) jiangwei@mk-Z10PE-D16-WS:~$ wrk -t8 -c100 -d20s --latency http://localhost:5005/sentence_type2?sen=%22%E6%88%91%E4%B8%8D%E8%AE%A4%E5%8F%AF%E8%BF%99%E4%B8%AA%E5%9B%BD%E5%AE%B6%22

wrk load test result:

(clearly abnormal)

Running 20s test @ http://localhost:5005/sentence_type2?sen=%22%E6%88%91%E4%B8%8D%E8%AE%A4%E5%8F%AF%E8%BF%99%E4%B8%AA%E5%9B%BD%E5%AE%B6%22
  8 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     0.00us    0.00us   0.00us    -nan%
    Req/Sec      1.33      2.31     4.00     66.67%
  Latency Distribution
     50%    0.00us
     75%    0.00us
     90%    0.00us
     99%    0.00us
  9 requests in 20.03s, 1.26KB read
  Socket errors: connect 0, read 0, write 0, timeout 9
Requests/sec:      0.45
Transfer/sec:     64.24B

Without the service_streamer wrapper

If the service is not wrapped with service_streamer and is served with plain Flask instead, it behaves normally:

  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.03s     1.37s    6.79s    73.59%
    Req/Sec     11.50      5.79    40.00     73.35%
  Latency Distribution
     50%    3.35s
     75%    3.44s
     90%    3.83s
     99%    6.48s
  777 requests in 20.03s, 108.51KB read
Requests/sec:     38.79
Transfer/sec:      5.42KB

Attempted workaround:

In the service_streamer-wrapped service, commenting out monkey.patch_all() keeps the service from hanging, but performance drops sharply (a sketch of this change follows the results below):

  4 threads and 128 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   267.58ms  377.21ms   1.85s    90.00%
    Req/Sec      5.74      3.48    10.00     42.86%
  Latency Distribution
     50%  141.88ms
     75%  180.88ms
     90%  734.68ms
     99%    1.85s
  35 requests in 5.01s, 4.89KB read
  Socket errors: connect 0, read 0, write 0, timeout 5
Requests/sec:      6.98
Transfer/sec:      0.98KB
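
For reference, the workaround above amounts to disabling gevent's monkey patching at the top of the service script, roughly as below. The throughput collapse presumably follows because blocking I/O is no longer made cooperative by gevent, though that interpretation is not confirmed in the thread.

```python
# Workaround sketch: gevent monkey patching disabled entirely.
# With monkey.patch_all() active, the service eventually hangs under wrk load;
# with it commented out, the hang disappears but throughput drops sharply.
# from gevent import monkey
# monkey.patch_all()

from flask import Flask
from service_streamer import ThreadedStreamer
# ... model, streamer, and routes as in the sketch above ...
```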

rubby33 commented 4 years ago

This problem is almost always reproducible, on both CPU and GPU machines. Any pointers would be much appreciated.

rubby33 commented 4 years ago

Preliminary findings: on a CPU machine (Mac), the service_streamer-wrapped service reports timeouts under wrk load testing and the benchmark result is essentially zero requests. On a GPU machine, the same service behaves normally under wrk load testing.

I am using a BERT classification model.

Homura2333 commented 3 years ago

Hello, was this ever resolved?

kuangdd commented 2 years ago

I moved monkey.patch_all() to after the imports of the various libraries, and now it works normally.
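
A sketch of the ordering described here. Note that gevent's documentation generally recommends calling monkey.patch_all() as early as possible, before other imports, so treat this as the arrangement that happened to avoid the hang in this thread rather than general guidance; the torch import is an assumption based on the BERT model mentioned above.

```python
# Import order reported to fix the hang: import the heavy libraries first,
# then apply gevent's monkey patching.
import torch                          # assumption: the BERT classifier runs on torch
from flask import Flask
from service_streamer import ThreadedStreamer

from gevent import monkey
monkey.patch_all()                    # applied only after the other imports

# ... build the model, streamer, and Flask routes as in the sketch above ...
```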