bingoohuang / blog

write blogs with issues
MIT License

OpenResty Best Practices #121

Open bingoohuang opened 4 years ago

bingoohuang commented 4 years ago

From 又拍云服务优化实践 (UPYUN Service Optimization Practices) by 张超

HTTP headers

local user_agent = ngx.req.get_headers()["User-Agent"]
local cookie = ngx.req.get_headers().Cookie
local x_forwarded_for = ngx.req.get_headers()["X-Forwarded-For"]

ngx.req.get_headers() returns a table of all request headers; the keys (header names) are normalized to lowercase.

To skip the metatable-driven case-insensitive lookup, remove the metatable and index with the lowercase names directly:

local req_headers = ngx.req.get_headers()
setmetatable(req_headers, nil)
local user_agent = req_headers["user-agent"]
local cookie = req_headers.cookie
local x_forwarded_for = req_headers["x-forwarded-for"]
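The case-insensitive lookup is implemented with a metatable, which is why removing it leaves a plain (faster) table keyed only by lowercase names. A minimal pure-Lua sketch of the idea — illustrative only, not the real ngx implementation:

```lua
-- Sketch of how ngx.req.get_headers()'s case-insensitive lookup behaves.
local headers = { ["user-agent"] = "curl/8.0", cookie = "a=1" }

setmetatable(headers, {
    __index = function(t, key)
        -- normalize the key: underscores to dashes, then lowercase
        return rawget(t, (key:gsub("_", "-")):lower())
    end,
})

assert(headers["User-Agent"] == "curl/8.0") -- metamethod handles the case
assert(headers.Cookie == "a=1")

setmetatable(headers, nil) -- raw table access: faster, but exact keys only
assert(headers["User-Agent"] == nil)
assert(headers["user-agent"] == "curl/8.0")
```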

access log

http {
    access_log logs/access.log main buffer=4096;
}

Good Programming Habits

💡 avoid overusing global variables
💡 avoid inefficient string concatenations
💡 avoid too many table resizes
💡 use lua-resty-core
💡 use JIT-compiled functions
💡 use FFI to call your C functions

http://wiki.luajit.org/NYI https://blog.codingnow.com/cloud/LuaTips
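On the string-concatenation point: building a string with `..` inside a loop copies the growing string on every iteration, while collecting the pieces in a table and joining once with table.concat avoids the intermediate copies. A small sketch:

```lua
-- Inefficient: each `..` allocates a new intermediate string (O(n^2) copying).
local function concat_slow(n)
    local s = ""
    for i = 1, n do
        s = s .. tostring(i) .. ","
    end
    return s
end

-- Preferred: buffer the pieces, join them once at the end.
local function concat_fast(n)
    local buf = {}
    for i = 1, n do
        buf[#buf + 1] = tostring(i)
    end
    return table.concat(buf, ",") .. ","
end

assert(concat_slow(5) == concat_fast(5)) -- same result, far fewer allocations
```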

SSL Acceleration

#openssl speed -elapsed rsa2048
You have chosen to measure elapsed time instead of user CPU time.
Doing 2048 bits private rsa's for 10s: 16640 2048 bits private RSA's in 10.00s
Doing 2048 bits public rsa's for 10s: 332804 2048 bits public RSA's in 10.00s
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.000601s 0.000030s   1664.0  33280.4

#openssl speed -elapsed -engine qat -async_jobs 72 rsa2048
engine "qat" set.
You have chosen to measure elapsed time instead of user CPU time.
Doing 2048 bits private rsa's for 10s: 179738 2048 bits private RSA's in 10.01s
Doing 2048 bits public rsa's for 10s: 2213458 2048 bits public RSA's in 10.00s
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.000056s 0.000005s  17955.8 221345.8

without-ssl-acceleration# ss -tn sport = :443 | wc -l
32863
with-ssl-acceleration# ss -tn sport = :443 | wc -l
34689
bingoohuang commented 4 years ago

Nginx log levels

| Level | Meaning |
|---|---|
| ngx.STDERR | standard error output |
| ngx.EMERG | emergency |
| ngx.ALERT | alert |
| ngx.CRIT | critical: system failure, should trigger the ops alerting system |
| ngx.ERR | error: unrecoverable business error |
| ngx.WARN | warning: ignorable business error |
| ngx.NOTICE | notice: fairly important business information |
| ngx.INFO | info: verbose business details, e.g. which branch was taken |
| ngx.DEBUG | debug |
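These constants are passed as the first argument of ngx.log; a minimal illustrative location (path and messages are made up for the example):

```nginx
location /t {
    content_by_lua_block {
        -- severity first; the remaining arguments are concatenated into the message
        ngx.log(ngx.WARN, "ignorable business error for uri: ", ngx.var.uri)
        ngx.log(ngx.ERR, "unrecoverable business error, falling back")
        ngx.say("ok")
    }
}
```

Messages below the threshold set by the error_log directive are discarded.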
bingoohuang commented 4 years ago
  1. ngx.worker.id

    syntax: id = ngx.worker.id()

    context: set_by_lua*, rewrite_by_lua*, access_by_lua*, content_by_lua*, header_filter_by_lua*, body_filter_by_lua*, log_by_lua*, ngx.timer.*, init_worker_by_lua*

    Returns the ordinal number of the current Nginx worker process (starting from 0).

    So if the total number of workers is N, then this method may return a number between 0 and N - 1 (inclusive).

  2. ngx.worker.pid

    syntax: pid = ngx.worker.pid()

    context: set_by_lua*, rewrite_by_lua*, access_by_lua*, content_by_lua*, header_filter_by_lua*, body_filter_by_lua*, log_by_lua*, ngx.timer.*, init_by_lua*, init_worker_by_lua*

    This function returns a Lua number for the process ID (PID) of the current Nginx worker process. This API is more efficient than ngx.var.pid and can be used in contexts where the ngx.var.VARIABLE API cannot be used (like init_worker_by_lua).

  3. Notes

    ngx.worker.count returns the current number of worker processes, i.e. the value of the "worker_processes" directive in the configuration file.

    ngx.worker.pid returns the process ID (pid) of the current worker, which can serve as a process identifier. Note, however, that workers may be restarted by a reload or after a crash, so the pid can change and is not a stable unique identifier.

    ngx.worker.id is OpenResty's internal ordinal for the worker: each worker is assigned a unique integer from 0 to ngx.worker.count() - 1, which stays the same across reloads and crashes. It can therefore be used as a stable unique identifier for the process, e.g. to run (or skip) a feature, phase, or timer job only in worker 0 (or worker x).

  4. How to run a timer job in only one worker?

    init_worker_by_lua_block {
         local delay = 3  -- in seconds
         local new_timer = ngx.timer.at
         local log = ngx.log
         local ERR = ngx.ERR
         local check
    
         check = function(premature)
             if not premature then
                 -- do the health check or other routine work
                 local ok, err = new_timer(delay, check)
                 if not ok then
                     log(ERR, "failed to create timer: ", err)
                     return
                 end
             end
         end
    
         if 0 == ngx.worker.id() then
             local ok, err = new_timer(delay, check)
             if not ok then
                 log(ERR, "failed to create timer: ", err)
                 return
             end
         end
    }
bingoohuang commented 4 years ago

Verifying that each Nginx worker's in-process cache has been refreshed

Problem:

Each worker has its own in-process cache. How can I verify that every worker's cache has actually been refreshed? Is there a way to issue a request tagged with a worker id, so that the request is handled only by that designated worker? With that question in mind, I looked into Nginx's master-worker processing model.

How does a worker obtain connections in Nginx?

(image from 《nginx架构模型分析》, an nginx architecture model analysis)

The master process first creates the listening sockets, then forks multiple worker processes, so every worker can accept() on those sockets. When a client connection arrives, all accepting workers are notified, but only one of them accepts it successfully; the others fail. Nginx provides a shared lock, accept_mutex (implemented over shared memory via ngx_trylock_accept_mutex), to ensure that only one worker is accepting connections at any given moment, which solves the thundering-herd problem.

(image from "Inside NGINX: How We Designed for Performance & Scale")

Workaround

Given this model, routing a request to a designated worker by id is not directly possible. Instead, send the request repeatedly until it happens to be handled by the target worker (non-target workers simply refuse to process it).

  1. Run an openresty docker container
  2. docker exec -it nginx bash
# docker run -d --name="nginx" -p 8100:80 -v $PWD/docker-nginx/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf:ro -v $PWD/docker-nginx/logs:/usr/local/openresty/nginx/logs openresty/openresty
# curl "http://127.0.0.1:8100/worker?worker=0"
# for ((i=0; i < 1000; ++i )); do curl -H 'Connection: close' "http://127.0.0.1:8100/worker?worker=1"; done

nginx.conf:

worker_processes          10;
error_log                 logs/error.log;

events {
    worker_connections     20;
}

http {
    server {
        listen                   80;

        location / {
            default_type            text/html;
            content_by_lua_block {
                ngx.say("hello world")
            }
        }

         location /worker {
            default_type "application/json; charset=utf-8";

            content_by_lua_block {
                local state = "yes"
                local id = ngx.worker.id()
                local pid = ngx.worker.pid()
                local count = ngx.worker.count()

                if tonumber(ngx.var.arg_worker) ~= id then
                    state = "no"
                end

                ngx.say(string.format(
                    [[{"state": "%s", "req": %s, "worker":%d, "pid":%s, "count":%d}]],
                    state, ngx.var.arg_worker, id, pid, count ))
            }
        }
    }
}

Load-test it with gobench to see whether a request can reach the target worker:

$ gobench -u "http://127.0.0.1:8200/worker?worker=3" -p=0  -c=15 -r=1 -t=15
Dispatching 15 threads (goroutines)
Waiting for results...
success [200] {"state": "no", "req": 3, "worker":6, "pid":13, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":2, "pid":9, "count":10}
success [200] {"state": "yes", "req": 3, "worker":3, "pid":10, "count":10}
success [200] {"state": "no", "req": 3, "worker":1, "pid":8, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":6, "pid":13, "count":10}
success [200] {"state": "yes", "req": 3, "worker":3, "pid":10, "count":10}
success [200] {"state": "no", "req": 3, "worker":1, "pid":8, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}
success [200] {"state": "no", "req": 3, "worker":0, "pid":7, "count":10}

Requests:                           15 hits
Successful requests:                15 hits
Network failed:                      0 hits
Bad requests failed (!2xx):          0 hits
Successful requests rate:          923 hits/sec
Read throughput:               228 KiB/sec
Write throughput:               91 KiB/sec
Test time:                  16.246606ms
bingoohuang commented 4 years ago

NGINX Tuning For Best Performance

bingoohuang commented 4 years ago

From a self-built API Gateway to a deep dive into Apache APISIX

In the early days we edited nginx.conf directly; I believed bare nginx would surely deliver the highest performance. The problem was that not everyone remembers the priority rules for Location matching, so we frequently got the configuration wrong. Our needs were fairly fixed: dynamically updating SSL certificates, Locations, and upstreams. Our approach at the time resembled today's K8S ingress update mechanism, generating the config from a template: nginx_template.conf + JSON -> PHP -> nginx.conf -> PHP-cli -> Reload. After encountering Apache APISIX, this scheme became a candidate for replacement.

bingoohuang commented 3 years ago

Building zlib.so

OpenResty ships with Lua support compiled in, so there is no need to modify nginx itself.

Decompressing gzip from Lua, however, requires a library: [lua-zlib](https://github.com/brimworks/lua-zlib).

Lua is a scripting language tightly coupled with C, and lua-zlib is in fact a library written in C. What we need to do is compile it into a shared library, zlib.so, for Lua to load.

  1. Download the latest lua-zlib: https://codeload.github.com/brimworks/lua-zlib/zip/master
  2. Run cmake to generate the build configuration. If the system reports that cmake is missing, install it with yum: yum install cmake
  3. make
[root@ecs-dd59 ~]# unzip lua-zlib-master.zip
[root@ecs-dd59 ~]# cd lua-zlib-master
[root@ecs-dd59 lua-zlib-master]# yum install cmake
[root@ecs-dd59 lua-zlib-master]# cmake -DLUA_INCLUDE_DIR=/home/footstone/openresty/luajit/include/luajit-2.1/ -DLUA_LIBRARIES=/home/footstone/openresty/luajit/lib  -DUSE_LUAJIT=ON -DUSE_LUA=OFF
-- The C compiler identification is GNU 4.8.5
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Found LuaJIT: /home/footstone/openresty/luajit/lib (found version "2.1.0-beta3")
-- Found ZLIB: /usr/lib64/libz.so (found version "1.2.7")
-- Configuring done
-- Generating done
-- Build files have been written to: /root/lua-zlib-master
[root@ecs-dd59 lua-zlib-master]# make
Scanning dependencies of target cmod_zlib
[ 50%] Building C object CMakeFiles/cmod_zlib.dir/lua_zlib.c.o
[100%] Linking C shared module zlib.so
[100%] Built target cmod_zlib
[root@ecs-dd59 lua-zlib-master]# cp zlib.so  /home/footstone/openresty/lualib/
[root@ecs-dd59 lua-zlib-master]# ls  -lh /home/footstone/openresty/lualib/
total 200K
-rwxr-xr-x 1 root root 163K Nov 12 20:09 cjson.so
-rwxr-xr-x 1 root root  71K Nov 12 20:09 librestysignal.so
drwxr-xr-x 3 root root 4.0K Nov 12 20:09 ngx
drwxr-xr-x 2 root root 4.0K Nov 12 20:09 rds
drwxr-xr-x 2 root root 4.0K Nov 12 20:09 redis
drwxr-xr-x 8 root root 4.0K Nov 12 20:09 resty
-rw-r--r-- 1 root root 1.4K Nov 12 20:09 tablepool.lua
-rwxr-xr-x 1 root root  72K Nov 18 11:00 zlib.so

System information

[root@ecs-dd59 lua-zlib-master]# cat /proc/version
Linux version 4.18.0-80.7.2.el7.aarch64 (mockbuild@aarch64-01.bsys.centos.org) (gcc version 8.3.1 20190311 (Red Hat 8.3.1-3) (GCC)) #1 SMP Thu Sep 12 16:13:20 UTC 2019
[root@ecs-dd59 lua-zlib-master]# uname -a
Linux ecs-dd59 4.18.0-80.7.2.el7.aarch64 #1 SMP Thu Sep 12 16:13:20 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux

Installing the cmake build tool from source

  1. yum install -y gcc gcc-c++ make automake
  2. wget http://www.cmake.org/files/v2.8/cmake-2.8.10.2.tar.gz
  3. tar -zxvf cmake-2.8.10.2.tar.gz
  4. cd cmake-2.8.10.2
  5. ./bootstrap
  6. gmake
  7. gmake install
  8. Verify the installation: cmake --version

Download and unpack the lua-zlib package, then build and install

  1. unzip lua-zlib-master.zip
  2. cd /usr/local/software/lua-zlib-master
  3. cmake -DLUA_INCLUDE_DIR=/usr/local/openresty/luajit/include/luajit-2.1 -DLUA_LIBRARIES=/usr/local/openresty/luajit/lib -DUSE_LUAJIT=ON -DUSE_LUA=OFF
  4. make
  5. cp zlib.so /usr/local/openresty/lualib/zlib.so

Using zlib

local zlib = require("zlib")
local stream = zlib.inflate()
-- resp.body holds the compressed content; r receives the inflated result
local r = stream(resp.body)
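A slightly fuller sketch, as a location that inflates a compressed request body. This is an illustration only: it assumes the zlib.so built above is on the lualib path, and /inflate is a made-up endpoint; errors from the inflate stream are caught with pcall rather than letting the request fail.

```nginx
location /inflate {
    content_by_lua_block {
        local zlib = require "zlib"   -- lua-zlib, loaded from zlib.so
        ngx.req.read_body()
        local compressed = ngx.req.get_body_data()
        -- an inflate stream is a callable: feed it data, get inflated output
        local stream = zlib.inflate()
        local ok, inflated = pcall(stream, compressed)
        if not ok then
            ngx.status = 400
            ngx.say("bad zlib/gzip data: ", inflated)
            return
        end
        ngx.say(inflated)
    }
}
```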
bingoohuang commented 3 years ago

Installing OpenResty on CentOS

$ cat  /etc/redhat-release
CentOS Linux release 8.2.2004 (Core)
$ sudo yum install pcre-devel openssl-devel gcc curl
$ curl -LO https://openresty.org/download/openresty-1.19.3.1.tar.gz
$ tar -xzf openresty-1.19.3.1.tar.gz && cd openresty-1.19.3.1
$ ./configure --prefix=/home/d5k/run/openresty
$ gmake -j8
  1. The -j option enables parallel compilation: make -j8 allows make to run up to 8 compile commands concurrently, making better use of CPU resources.
  2. CentOS 7 安装 openresty (Installing OpenResty on CentOS 7)
bingoohuang commented 2 years ago

Hypothesis: if proxy_pass does not reference a named upstream, nginx implicitly creates an anonymous upstream; verified against the source code


bingoohuang commented 2 years ago

Using Docker to verify an Nginx configuration

A mysterious call

As the old saying from my hometown goes, "the winter wind doesn't get past the plough, but the spring wind drills through oxhide." On Friday morning I didn't check the weather and got thoroughly chilled by the spring wind on the way to work. By noon I had a low fever plus full-body aches, so I took the afternoon off to rest it out. I was still lying down the next day, Saturday; toward evening, feeling slightly better but also tired of resting, I got a call for help from a colleague, F. So I decided to "take the order" and play delivery boy for a while — it might even help the recovery.

Background

Bureau A has over ten thousand terminals, each with a certificate-management client installed, all running smoothly — until the backend service was suddenly upgraded to https, and the certificate-update service's URL changed as well. Upgrading the clients terminal by terminal to change the server address would be the worst-case, last-resort option. Fortunately, Bureau A had set up an Nginx relay: the update-service address configured on all those terminals points at the relay Nginx. If adjusting only the relay's Nginx configuration could solve the problem, that would be the solid, proper fix. The problem was: nobody knew how to configure it!

I drew a diagram:

(diagram drawn online with https://excalidraw.com/)

Problem analysis

With the diagram done, this looks rather like an Nginx forward-proxy scenario. Judging from the technical background, the approach is feasible; the question is just the concrete configuration. Luckily, having tinkered with Nginx before — read through its source, written plugins for it — I have a bit of experience with it.

Nginx is the famous reverse-proxy open-source project created by the Russian developer Igor, who later founded a company around Nginx Plus; a few years on, it was acquired by F5.

Back on topic: it is technically feasible, so what exactly should the configuration be?

At first I tried guiding the on-site people remotely, but it proved far too inefficient. For one thing, the configs I sent over would contain slips, and the people on site barely understood the configuration (and certainly not the URLs), so they copied everything verbatim, leading to repeated rework. For another, applying a fix apparently required copying files from one machine to another (I never quite understood why), with a turnaround of five minutes or more. It wore me down so much that my "illness" seemed to improve markedly — the aches faded noticeably. So I decided to build a verification environment myself, validate the configuration against a replica of the on-site setup, and spare everyone the back-and-forth of on-site trial and error.

This post shares how to quickly build an Nginx verification environment, so that in the future a configuration can be validated first and only then sent on site (where network access may be restricted, and configs may have to be typed in by hand or carried across network boundaries).

My method is a single command:

docker run -p 1180:80 -p 1181:81 -v $(pwd)/nginx.conf:/usr/local/openresty/nginx/conf/nginx.conf openresty/openresty:alpine

events {
    worker_connections 4096;
}

http {
    server {
        listen 80;

        location = /web/index.html {
            # simulate telling the browser to redirect to the new address
            # (this 302 redirect supports the "#" fragment form)
            rewrite ^ http://127.0.0.1:1181/#/index?channelId=k0= redirect;
        }

        # the page at /web/update1.html references its JS/CSS via relative paths,
        # so rewrite those request URLs to strip the /web prefix
        location ~ /web/(.*) {
            set $p1 $1;
            rewrite ^ /$p1;
        }

        # relay service: reverse-proxy to the update service
        location / {
            # declar token is ""(empty str) for original request without args,
            # because $is_args concat any var will be `?`
            set $token "";

            # if the request has args update token to "&"
            # $is_args “?” if a request line has arguments, or an empty string otherwise
            # http://nginx.org/en/docs/http/ngx_http_core_module.html
            if ($is_args) {
                set $token "&";
            }

            # update original append custom params with $token
            # if no args $is_args is empty str,else it's "?"
            set $args cid=P0%3D$token$args;
            proxy_pass http://127.0.0.1:81;
        }
    }

    server {
        listen 81;

        location / {
            echo $request;
            echo $arg_channelId;
        }
    }
}
[2022-03-12 21:12:06.675] ❯ gurl :1180/web/index.html
Conn-Session: 127.0.0.1:54347->127.0.0.1:1180 (reused: false, wasIdle: false, idle: 0s)
GET /web/index.html? HTTP/1.1
Host: 127.0.0.1:1180
Accept: application/json
Accept-Encoding: gzip, deflate
Content-Type: application/json
Gurl-Date: Sat, 12 Mar 2022 13:12:52 GMT
User-Agent: gurl/1.0.0

HTTP/1.1 200 OK
Server: openresty/1.15.8.2
Date: Sat, 12 Mar 2022 13:12:52 GMT
Content-Type: text/plain
Connection: keep-alive

GET /?cid=k0%3D HTTP/1.0
k0%3D

Once the config is in place and the docker container is up, you can test; if the configuration is wrong, fix it and reload the service, with a turnaround measured in seconds — much faster.

Key technical points

The new address, https://1.2.3.4/update2#/index?cid=p0=, besides the upgrade to https, contains the fragment "#/index?cid=p0=". Note that the part of a URL starting with "#" is used only on the browser side: when the browser issues the request, everything after "#" is never sent to the backend service. To learn more, consult the relevant specifications (for example, just search Baidu for "URL SHARP MEANING"); I won't expand on it here. Given this, and having confirmed that the JS does make use of the "#" part, the only option was to configure Nginx as a 302 redirect — which in fact restricts the other ways Nginx could have been used. (In hindsight, the update service's check for cid should be improved: rather than a pure JS-side check, there should be a chance to validate against the backend, and Nginx could supplement the parameter while reverse-proxying; that would make deployment more flexible.)

As for Nginx configuration: essentially I look things up too, so on that level I'm on the same plane as everyone else. What I have extra, perhaps, is knowing how to search — plus a few more accumulated notes.

Remaining issue

Over ten thousand terminals hanging off a single Nginx server — "a single flower on a lone branch". If that Nginx server fails, the services on all those terminals are affected; this is not a highly available solution. Then again, perhaps this particular upgrade task does not need high availability.

bingoohuang commented 2 years ago

Reviewing class A/B/C IP addresses

So the shortest private IP is 10.0.0.0, and 1.2.3.4 is, surprisingly, a public IP.

| Class | Prefix bits | Start address | End address | CIDR | Default subnet mask | Networks | Hosts per network | Private range |
|---|---|---|---|---|---|---|---|---|
| A | 0 | 0.0.0.0 | 127.255.255.255 | /8 | 255.0.0.0 | 128 | 16,777,214 | 10.0.0.0 - 10.255.255.255 |
| B | 10 | 128.0.0.0 | 191.255.255.255 | /16 | 255.255.0.0 | 16,384 | 65,534 | 172.16.0.0 - 172.31.255.255 (191.255.255.255 is a broadcast address and cannot be assigned) |
| C | 110 | 192.0.0.0 | 223.255.255.255 | /24 | 255.255.255.0 | 2,097,152 | 254 | 192.168.0.0 - 192.168.255.255 |
| D (multicast) | 1110 | 224.0.0.0 | 239.255.255.255 | /4 | undefined | undefined | undefined | - |
| E (reserved) | 1111 | 240.0.0.0 | 255.255.255.255 | /4 | undefined | undefined | undefined | - |
  1. Classful network
  2. IPs beginning with 192.168. and 10.0., private IP ranges, IP introduction and classification — (IP观止)
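The classification and the private ranges above can be checked with a few lines of Lua — a sketch that assumes well-formed IPv4 dotted-quad strings and does no validation:

```lua
-- Classify an IPv4 address by its first octet, and test RFC 1918 private ranges.
local function octets(ip)
    local a, b = ip:match("^(%d+)%.(%d+)%.")
    return tonumber(a), tonumber(b)
end

local function class_of(ip)
    local a = octets(ip)
    if a < 128 then return "A"        -- prefix bit  0
    elseif a < 192 then return "B"    -- prefix bits 10
    elseif a < 224 then return "C"    -- prefix bits 110
    elseif a < 240 then return "D"    -- prefix bits 1110 (multicast)
    else return "E" end               -- prefix bits 1111 (reserved)
end

local function is_private(ip)
    local a, b = octets(ip)
    return a == 10                              -- 10.0.0.0/8
        or (a == 172 and b >= 16 and b <= 31)   -- 172.16.0.0/12
        or (a == 192 and b == 168)              -- 192.168.0.0/16
end

assert(class_of("1.2.3.4") == "A" and not is_private("1.2.3.4")) -- public!
assert(class_of("10.0.0.0") == "A" and is_private("10.0.0.0"))
assert(class_of("192.168.1.1") == "C" and is_private("192.168.1.1"))
```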
bingoohuang commented 2 years ago

How much performance does error_log actually cost?

In this test case, writing logs reduced throughput by about 50%.

| # | Case | TPS |
|---|---|---|
| 1 | no logging | 76838.432 |
| 2 | logging | 39279.722 |

openresty

https://openresty.org/download/openresty-1.19.9.1.tar.gz

environment

# Hostname Uptime Uptime Human Procs OS Platform Host ID Platform Version Kernel Version Kernel Arch Os Release
1 VM-24-15-centos 11729502 4 months 115 linux centos 864dd98a-12e6-426f-8d2f-46dc598b947c 8.2.2004 4.18.0-305.3.1.el8.x86_64 x86_64 NAME="CentOS Linux" VERSION="8 (Core)"
# Total Free Available Used Percentage
1 3.649GiB 128.3MiB 2.856GiB 14.56%
# Physical ID Vendor ID Family Model Name Cores Mhz
1 0 AuthenticAMD 23 AMD EPYC 7K62 48-Core Processor 2 2595.124

nginx.conf

events { worker_connections 4096; }

http {
    server {
        listen 8011;
        access_log off;
        location /nolog {
            content_by_lua_block {
                ngx.header['Content-Type'] = 'application/json; charset=utf-8'
                ngx.say((require "cjson").encode({ k='key', v='nolog' }))
            }
        }
        location /log {
            # https://docs.nginx.com/nginx/admin-guide/monitoring/logging/
            error_log logs/error.log warn;
            content_by_lua_block {
                ngx.log(ngx.ERR, "request", ngx.now(), "from", ngx.var.uri)
                ngx.header['Content-Type'] = 'application/json; charset=utf-8'
                ngx.say((require "cjson").encode({ k='key', v='log' }))
            }
        }
    }
}

berf test

nolog

[d5k@VM-24-15-centos ~]$ berf :8011/nolog -d5m
Berf benchmarking http://127.0.0.1:8011/nolog for 5m0s using 100 goroutine(s), 2 GoMaxProcs.

Summary:
  Elapsed                   5m0s
  Count/RPS   23051532 76838.432
    200       23051532 76838.432
  ReadWrite  133.976 99.583 Mbps

Statistics    Min       Mean    StdDev     Max
  Latency     17µs    1.294ms   1.88ms   87.978ms
  RPS       62879.71  76830.66  2441.71  80305.15

Latency Percentile:
  P50      P75      P90      P95      P99      P99.9     P99.99
  958µs  1.391ms  1.912ms  2.783ms  10.794ms  21.468ms  31.439ms

log

[d5k@VM-24-15-centos ~]$ berf :8011/log -d5m
Berf benchmarking http://127.0.0.1:8011/log for 5m0s using 100 goroutine(s), 2 GoMaxProcs.

Summary:
  Elapsed                  5m0s
  Count/RPS  11783919 39279.722
    200      11783919 39279.722
  ReadWrite  67.860 50.278 Mbps

Statistics    Min       Mean    StdDev      Max
  Latency     49µs    2.542ms   1.094ms  289.537ms
  RPS       29251.33  39276.55  1282.84  40837.98

Latency Percentile:
  P50        P75      P90      P95     P99     P99.9     P99.99
  2.444ms  2.587ms  2.762ms  2.946ms  5.51ms  10.556ms  16.525ms
bingoohuang commented 2 years ago

ngx.print is asynchronous

This is an asynchronous call and will return immediately without waiting for all the data to be written into the system send buffer. To run in synchronous mode, call ngx.flush(true) after calling ngx.print. This can be particularly useful for streaming output. See ngx.flush for more details.

https://github.com/openresty/lua-nginx-module#ngxprint

Both ngx.say and ngx.print write output asynchronously. To be clear up front: both functions are asynchronous, i.e. calling ngx.say does not immediately emit the response body.

https://moonbingbing.gitbooks.io/openresty-best-practices/content/openresty/response.html
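To get synchronous, streamed output, follow each ngx.print with ngx.flush(true), which waits until the data has been written into the system send buffer. A minimal illustrative location (the /stream path and chunk sizes are made up):

```nginx
location /stream {
    content_by_lua_block {
        for i = 1, 3 do
            ngx.print("chunk ", i, "\n")
            -- synchronous mode: block until this chunk reaches the send buffer
            local ok, err = ngx.flush(true)
            if not ok then
                ngx.log(ngx.ERR, "flush failed: ", err)
                return
            end
            ngx.sleep(1) -- simulate producing data over time
        end
    }
}
```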

Test code:

local resty_random = require "resty.random"
local str = require "resty.string"
-- https://github.com/starius/lua-filesize
local filesize = require 'resty.core.filesize'
local times = tonumber(ngx.var.arg_n or 1000)
local t = {}

for i = 1, times do
    -- generate 16 bytes of pseudo-random data
    table.insert(t, "pseudo-random: " .. str.to_hex(resty_random.bytes(16)))
end

local body = table.concat(t, "\n") .. "\n"

ngx.update_time()
local begin = ngx.now()

ngx.print(body)

ngx.update_time()
local seconds = ngx.now() - begin

ngx.log(ngx.ERR, "size: ", filesize(#body), " ,elapsed: ", seconds, "s")

-- the difference between ngx.print and ngx.say
-- ngx.print("a") -- no trailing newline
-- ngx.say("a")  -- appends a trailing newline


bingoohuang commented 2 years ago
local P = {}

local resty_random = require "resty.random"
local str = require "resty.string"
-- https://github.com/starius/lua-filesize
local filesize = require 'resty.core.filesize'
local cjson = require "cjson.safe"
cjson.encode_sparse_array(true)

local function now()
    ngx.update_time()
    return ngx.now()
end

function P.test()
    local times = tonumber(ngx.var.arg_n or 1000)
    local t = {}

    for i = 1, times do
        -- generate 160 bytes of pseudo-random data
        table.insert(t, [["pseudo-random]] .. tostring(i) .. [[":"]] ..
                         str.to_hex(resty_random.bytes(160)) .. [["]])
    end

    local body = [[{"status": 200, "value":{]] .. table.concat(t, ",") .. [[}}]]

    local t1 = now()
    local body_json = cjson.decode(body)
    local t2 = now()
    ngx.print(body)
    local t3 = now()

    ngx.log(ngx.ERR, "size: ", filesize(#body), " ,elapsed: ", (t2 - t1), "/",
            (t3 - t2), ", status: ", body_json and body_json.status or "nil")
end

-- the difference between ngx.print and ngx.say
-- ngx.print("a") -- no trailing newline
-- ngx.say("a")  -- appends a trailing newline

return P

gurl k.co/cost n==99999 -n100 -c10

2022/04/13 17:49:56 [error] 1135154#1135154: *735 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.2810001373291/0.0039999485015869, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
2022/04/13 17:49:57 [error] 1135154#1135154: *737 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.28500008583069/0.005000114440918, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
2022/04/13 17:49:58 [error] 1135154#1135154: *738 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.28399991989136/0.005000114440918, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
2022/04/13 17:49:58 [error] 1135154#1135154: *739 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.28299999237061/0.0049998760223389, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
2022/04/13 17:49:59 [error] 1135154#1135154: *740 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.27999997138977/0.0039999485015869, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
2022/04/13 17:50:00 [error] 1135154#1135154: *741 [lua] test1.lua:33: test(): size: 32.8 MB ,elapsed: 0.28600001335144/0.0049998760223389, status: 200, client: 60.247.93.190, server: , request: "GET /cost?n=99999 HTTP/1.1", host: "k.co"
bingoohuang commented 2 years ago

Building a relocatable, statically linked OpenResty

Key points:

  1. --prefix=. is set to the current (relative) path
  2. The dependencies — luajit, pcre, zlib, openssl — are all built from source
OPENRESTY_VER=1.21.4.1
cd openresty-$OPENRESTY_VER
cd bundle/LuaJIT-* && make install -j PREFIX=`pwd` && LUAROOT=`pwd` && rm -rf lib/*.so* && cd ../..

./configure -j$(grep -c ^processor /proc/cpuinfo) \
    --prefix=. \
    --http-client-body-temp-path=tmp/client_body \
    --http-proxy-temp-path=tmp/proxy \
    --http-fastcgi-temp-path=tmp/fastcgi \
    --http-uwsgi-temp-path=tmp/uwsgi \
    --http-scgi-temp-path=tmp/scgi \
    --with-luajit=$LUAROOT \
    --with-pcre-jit \
    --with-pcre=../pcre-8.45 \
    --with-zlib=../zlib-1.2.12 \
    --with-openssl=../openssl-1.1.1o

make install -j
(mkdir -p target/openresty-$OPENRESTY_VER && cd target/openresty-$OPENRESTY_VER/ && mv ../../{nginx,lualib} .)

Verify that the nginx binary has zlib, ssl, and pcre linked in statically, with no dynamic dependencies on them

$ cd target/openresty-1.21.4.1
$ nm nginx/sbin/nginx | egrep "zlib_inited|ssl_derive|pcre_exec"
00000000005f8b63 T pcre_exec
000000000062e630 T ssl_derive
0000000000b925e4 b zlib_inited
$ ldd nginx/sbin/nginx
    linux-vdso.so.1 (0x00007ffc70be6000)
    /$LIB/libonion.so => /lib64/libonion.so (0x00007f1452565000)
    libdl.so.2 => /lib64/libdl.so.2 (0x00007f145223e000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f145201e000)
    libcrypt.so.1 => /lib64/libcrypt.so.1 (0x00007f1451df5000)
    libm.so.6 => /lib64/libm.so.6 (0x00007f1451a73000)
    libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f145185b000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f1451496000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f1452442000)
$ nginx/sbin/nginx -V
nginx version: openresty/1.21.4.1
built by gcc 8.5.0 20210514 (Red Hat 8.5.0-4) (GCC)
built with OpenSSL 1.1.1o  3 May 2022
TLS SNI support enabled
configure arguments: --prefix=./nginx --with-cc-opt=-O2 --add-module=../ngx_devel_kit-0.3.1 --add-module=../echo-nginx-module-0.62 --add-module=../xss-nginx-module-0.06 --add-module=../ngx_coolkit-0.2 --add-module=../set-misc-nginx-module-0.33 --add-module=../form-input-nginx-module-0.12 --add-module=../encrypted-session-nginx-module-0.09 --add-module=../srcache-nginx-module-0.32 --add-module=../ngx_lua-0.10.21 --add-module=../ngx_lua_upstream-0.07 --add-module=../headers-more-nginx-module-0.33 --add-module=../array-var-nginx-module-0.05 --add-module=../memc-nginx-module-0.19 --add-module=../redis2-nginx-module-0.15 --add-module=../redis-nginx-module-0.3.9 --add-module=../rds-json-nginx-module-0.15 --add-module=../rds-csv-nginx-module-0.09 --add-module=../ngx_stream_lua-0.0.11 --with-ld-opt=-Wl,-rpath,/home/d5k/static-nginx/openresty-1.21.4.1/bundle/LuaJIT-2.1-20220411/lib --http-client-body-temp-path=tmp/client_body --http-proxy-temp-path=tmp/proxy --http-fastcgi-temp-path=tmp/fastcgi --http-uwsgi-temp-path=tmp/uwsgi --http-scgi-temp-path=tmp/scgi --with-pcre-jit --with-pcre=/home/d5k/static-nginx/openresty-1.21.4.1/../pcre-8.45 --with-zlib=/home/d5k/static-nginx/openresty-1.21.4.1/../zlib-1.2.12 --with-openssl=/home/d5k/static-nginx/openresty-1.21.4.1/../openssl-1.1.1o --with-openssl-opt=-g --with-pcre-opt=-g --with-zlib-opt=-g --with-stream --with-stream_ssl_module --with-stream_ssl_preread_module --with-http_ssl_module

Verify the binary still runs after the directory is moved

Add a location:

    location /lua {
        default_type text/html;
        content_by_lua '
            ngx.say("<p>hello, world</p>")
        ';
    }

Move the directory, manually create the temp directories, add the location above (and change the listening port to 1235), then start nginx:

$ mv target/openresty-1.21.4.1/ ~/x1
$ cd ~/x1/openresty-1.21.4.1
$ mkdir -p ./nginx/tmp/client_body
$ nginx/sbin/nginx

Test with a request

$ gurl :1231/lua -phb
HTTP/1.1 200 OK
Date: Tue, 02 Aug 2022 07:52:17 GMT
Content-Type: text/html
Connection: keep-alive
Server: openresty/1.21.4.1
Transfer-Encoding: chunked

<p>hello, world</p>

Stop the service, move to yet another directory, and verify again: even after switching directories, nginx still starts normally.

$ nginx/sbin/nginx -s stop
$ mkdir ~/x2
$ mv * ~/x2
$ cd ~/x2
$ nginx/sbin/nginx
$ gurl :1231/lua -phb
HTTP/1.1 200 OK
Server: openresty/1.21.4.1
Date: Tue, 02 Aug 2022 07:53:36 GMT
Content-Type: text/html
Connection: keep-alive
Transfer-Encoding: chunked

<p>hello, world</p>

Related material

bingoohuang commented 2 years ago

NGINX configuration — the simplest way to configure a high-performance, secure, and stable NGINX server.