Closed by sluramod 1 month ago
Kong 3.6.1 ships an old version of the module in which I think such a bug can occur. There are issues with local Bazel builds of Kong 3.6.1, so I have not reproduced it there, but I have tried this filter and these body payloads with Kong 3.7.1 and am not experiencing any issue so far; perhaps try 3.7.1, which has a recent version of the module.
I've made some progress with 3.7.1. It now hangs in a more interesting way.
2024/07/18 04:11:40 [debug] 1341#0: *3217 [proxy-wasm] resetting filter chain: pwctx->exec_index 0 to 0 (pwctx: 000055C695420080)
2024-07-18T04:11:40.037260289Z 2024/07/18 04:11:40 [debug] 1341#0: *3217 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_response_headers" step in "header_filter" phase
2024-07-18T04:11:40.037282164Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_headers: 1 << while reading response header from upstream, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037288160Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_headers: 1 >> while reading response header from upstream, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037292684Z 2024/07/18 04:11:40 [debug] 1341#0: *3217 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_response_body" step in "body_filter" phase
2024-07-18T04:11:40.037295747Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_body: 1, end_of_stream: false << while sending to client, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037299438Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_body: 1, end_of_stream: false >> while sending to client, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037429528Z 2024/07/18 04:11:40 [debug] 1341#0: *3217 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_response_body" step in "body_filter" phase
2024-07-18T04:11:40.037449347Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_body: 1, end_of_stream: true << while sending to client, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037458824Z 2024/07/18 04:11:40 [info] 1341#0: *3217 [proxy-wasm]["log_filter" #1] on_http_response_body: 1, end_of_stream: true >> while sending to client, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037863663Z 172.28.0.4 - - [18/Jul/2024:04:11:40 +0000] "POST /openai/v1/chat/completions HTTP/1.1" 200 643 "-" "OpenAI/v1 PythonBindings/0.28.1" kong_request_id: "9136aa3acf281a6be388b63df0588422"
2024-07-18T04:11:40.037904155Z 2024/07/18 04:11:40 [debug] 1341#0: *3217 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_log" step in "log" phase
...
2024/07/18 04:11:46 [debug] 1344#0: *10313 [proxy-wasm]["log_filter" #1] filter 1/1 finalizing context
2024-07-18T04:11:46.523332712Z 2024/07/18 04:11:46 [debug] 1344#0: *10313 [proxy-wasm] "log_filter" filter freeing context #1 (1/1)
2024-07-18T04:11:46.524585892Z 2024/07/18 04:11:46 [info] 1344#0: *10313 client 172.28.0.4 closed keepalive connection
2024-07-18T04:11:47.571339041Z 2024/07/18 04:11:47 [debug] 1333#0: *10317 [proxy-wasm] "log_filter" filter reusing instance (ictx: 000055C696DB3680, store: 000055C6928974B0)
2024-07-18T04:11:47.571400906Z 2024/07/18 04:11:47 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] create_http_context_2: '2', client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:47.571417493Z 2024/07/18 04:11:47 [debug] 1333#0: *10317 [proxy-wasm]["log_filter" #2] filter 1/1 resuming "on_request_headers" step in "rewrite" phase
2024-07-18T04:11:47.571420356Z 2024/07/18 04:11:47 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] on_http_request_headers: 2 << , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:47.571424191Z 2024/07/18 04:11:47 [debug] 1333#0: *10317 [proxy-wasm]["log_filter" #2] filter 1/1 resuming "on_request_body" step in "content" phase
2024-07-18T04:11:47.571427036Z 2024/07/18 04:11:47 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] on_http_request_body: 2, end_of_stream: true <<< , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:47.571430574Z 2024/07/18 04:11:47 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] on_http_request_body: 2, end_of_stream: true >>> , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:47.571432842Z 2024/07/18 04:11:47 [debug] 1333#0: *10317 [lua] init.lua:1394: balancer(): setting address (try 1): 104.18.7.192:443
2024-07-18T04:11:47.571434754Z 2024/07/18 04:11:47 [debug] 1333#0: *10317 [lua] init.lua:1427: balancer(): enabled connection keepalive (pool=104.18.7.192|443|api.openai.com, pool_size=512, idle_timeout=60, max_requests=10000)
2024-07-18T04:11:48.262311346Z 2024/07/18 04:11:48 [debug] 1333#0: *10317 [proxy-wasm] resetting filter chain: pwctx->exec_index 0 to 0 (pwctx: 000055C6975D4660)
...
2024/07/18 04:11:51 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] on_log: 2 << while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.7.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:51.408863129Z 2024/07/18 04:11:51 [info] 1333#0: *10317 [proxy-wasm]["log_filter" #2] on_log: 2 >> while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.7.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "aec4d97f1c2faf28d6011f5e0c7bb767"
2024-07-18T04:11:51.408873778Z 2024/07/18 04:11:51 [debug] 1333#0: *10317 [proxy-wasm]["log_filter" #2] filter 1/1 resuming "on_done" step in "done" phase
2024-07-18T04:11:51.408877611Z 2024/07/18 04:11:51 [debug] 1333#0: *10317 [proxy-wasm]["log_filter" #2] filter 1/1 finalizing context
2024-07-18T04:11:51.408880992Z 2024/07/18 04:11:51 [debug] 1333#0: *10317 [proxy-wasm] "log_filter" filter freeing context #2 (1/1)
2024-07-18T04:11:51.445411155Z 2024/07/18 04:11:51 [info] 1333#0: *10317 client 172.28.0.4 closed keepalive connection
2024-07-18T04:11:59.484522173Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [proxy-wasm] "log_filter" filter reusing instance (ictx: 000055C696DB3680, store: 000055C6928974B0)
2024-07-18T04:11:59.484554146Z 2024/07/18 04:11:59 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] create_http_context_2: '1', client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:11:59.484558720Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_request_headers" step in "rewrite" phase
2024-07-18T04:11:59.484561424Z 2024/07/18 04:11:59 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] on_http_request_headers: 1 << , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:11:59.494048529Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_request_body" step in "content" phase
2024-07-18T04:11:59.494082687Z 2024/07/18 04:11:59 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] on_http_request_body: 1, end_of_stream: true <<< , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:11:59.494089117Z 2024/07/18 04:11:59 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] on_http_request_body: 1, end_of_stream: true >>> , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:11:59.494106412Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [lua] init.lua:1394: balancer(): setting address (try 1): 104.18.6.192:443
2024-07-18T04:11:59.494109383Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [lua] init.lua:1427: balancer(): enabled connection keepalive (pool=104.18.6.192|443|api.openai.com, pool_size=512, idle_timeout=60, max_requests=10000)
2024-07-18T04:11:59.936487796Z 2024/07/18 04:11:59 [debug] 1337#0: *10319 [proxy-wasm] resetting filter chain: pwctx->exec_index 0 to 0 (pwctx: 000055C69619E920)
...
2024/07/18 04:12:06 [debug] 1337#0: *10319 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_log" step in "log" phase
2024-07-18T04:12:06.245789662Z 2024/07/18 04:12:06 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] on_log: 1 << while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:12:06.245811697Z 2024/07/18 04:12:06 [info] 1337#0: *10319 [proxy-wasm]["log_filter" #1] on_log: 1 >> while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "8b829c26dadbf872dd6b052d85b6df67"
2024-07-18T04:12:06.245816854Z 2024/07/18 04:12:06 [debug] 1337#0: *10319 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_done" step in "done" phase
2024-07-18T04:12:06.245820449Z 2024/07/18 04:12:06 [debug] 1337#0: *10319 [proxy-wasm]["log_filter" #1] filter 1/1 finalizing context
2024-07-18T04:12:06.245824024Z 2024/07/18 04:12:06 [debug] 1337#0: *10319 [proxy-wasm] "log_filter" filter freeing context #1 (1/1)
2024-07-18T04:12:06.257153615Z 2024/07/18 04:12:06 [info] 1337#0: *10319 client 172.28.0.4 closed keepalive connection
2024-07-18T04:13:04.405819958Z 2024/07/18 04:13:04 [debug] 1333#0: *10323 [proxy-wasm] "log_filter" filter reusing instance (ictx: 000055C696DB3680, store: 000055C6928974B0)
2024-07-18T04:13:04.405883482Z 2024/07/18 04:13:04 [info] 1333#0: *10323 [proxy-wasm]["log_filter" #3] create_http_context_2: '3', client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "93b2942d6ed2ac3dc6392d1489dc5b0c"
2024-07-18T04:13:04.405893973Z 2024/07/18 04:13:04 [debug] 1333#0: *10323 [proxy-wasm]["log_filter" #3] filter 1/1 resuming "on_request_headers" step in "rewrite" phase
2024-07-18T04:13:04.405897683Z 2024/07/18 04:13:04 [info] 1333#0: *10323 [proxy-wasm]["log_filter" #3] on_http_request_headers: 3 << , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "93b2942d6ed2ac3dc6392d1489dc5b0c"
2024-07-18T04:13:04.407065904Z 2024/07/18 04:13:04 [debug] 1333#0: *10323 [proxy-wasm]["log_filter" #3] filter 1/1 resuming "on_request_body" step in "content" phase
2024-07-18T04:13:04.407087118Z 2024/07/18 04:13:04 [info] 1333#0: *10323 [proxy-wasm]["log_filter" #3] on_http_request_body: 3, end_of_stream: true <<< , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "93b2942d6ed2ac3dc6392d1489dc5b0c"
2024-07-18T04:13:04.407091919Z 2024/07/18 04:13:04 [info] 1333#0: *10323 [proxy-wasm]["log_filter" #3] on_http_request_body: 3, end_of_stream: true >>> , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "93b2942d6ed2ac3dc6392d1489dc5b0c"
Then nothing related to wasm appears beyond the last line for a while, until it resumes on context #1 (context #3 still hangs):
2024-07-18T04:21:38.109153473Z 2024/07/18 04:21:38 [debug] 1337#0: *647 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069442387429013, runable_jobs_avg: 1.8559322033898, alive_threads_avg: 275.07796610169
2024-07-18T04:21:38.253142821Z 2024/07/18 04:21:38 [debug] 1336#0: *1291 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0065527453434291, runable_jobs_avg: 1.735593220339, alive_threads_avg: 275.07796610169
2024-07-18T04:21:38.472438985Z 2024/07/18 04:21:38 [debug] 1346#0: *3 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0064558592558772, runable_jobs_avg: 1.7152542372881, alive_threads_avg: 275.07796610169
2024-07-18T04:21:38.558231920Z 2024/07/18 04:21:38 [debug] 1344#0: *1933 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0078774787173895, runable_jobs_avg: 2.0757575757576, alive_threads_avg: 274.94949494949
2024-07-18T04:21:38.575622120Z 2024/07/18 04:21:38 [debug] 1341#0: *3218 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0068205853310109, runable_jobs_avg: 1.8237288135593, alive_threads_avg: 275.07796610169
2024-07-18T04:21:38.595310419Z 2024/07/18 04:21:38 [debug] 1335#0: *2575 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0068449955395552, runable_jobs_avg: 1.8274111675127, alive_threads_avg: 275.04568527919
2024-07-18T04:21:38.901230404Z 2024/07/18 04:21:38 [debug] 1333#0: *4509 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0082622850492273, runable_jobs_avg: 2.1925675675676, alive_threads_avg: 275.01351351351
2024-07-18T04:21:38.998211332Z 2024/07/18 04:21:38 [debug] 1342#0: *5152 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0073559626727898, runable_jobs_avg: 1.9610829103215, alive_threads_avg: 275.04568527919
2024-07-18T04:21:39.039230953Z 2024/07/18 04:21:39 [debug] 1332#0: *6446 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069264156765128, runable_jobs_avg: 1.8508474576271, alive_threads_avg: 275.07796610169
2024-07-18T04:21:39.100532217Z 2024/07/18 04:21:39 [debug] 1339#0: *5794 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0068122863383369, runable_jobs_avg: 1.8220338983051, alive_threads_avg: 275.07796610169
2024-07-18T04:21:39.153580692Z 2024/07/18 04:21:39 [debug] 1345#0: *3865 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0067141950492858, runable_jobs_avg: 1.7932203389831, alive_threads_avg: 275.07796610169
2024-07-18T04:21:39.316983112Z 2024/07/18 04:21:39 [debug] 1343#0: *7097 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069908744975868, runable_jobs_avg: 1.8641765704584, alive_threads_avg: 275.1103565365
2024-07-18T04:21:39.423484769Z 2024/07/18 04:21:39 [debug] 1338#0: *7092 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069724047950485, runable_jobs_avg: 1.8593220338983, alive_threads_avg: 275.07796610169
2024-07-18T04:21:39.843343287Z 2024/07/18 04:21:39 [debug] 1334#0: *8378 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0068271042358349, runable_jobs_avg: 1.8254237288136, alive_threads_avg: 275.07796610169
2024-07-18T04:21:39.849188066Z 2024/07/18 04:21:39 [debug] 1347#0: *9022 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0069341689155306, runable_jobs_avg: 1.8494077834179, alive_threads_avg: 275.04568527919
2024-07-18T04:21:40.936324624Z 2024/07/18 04:21:40 [debug] 1340#0: *9666 [lua] super.lua:161: scaling_log(): [timer-ng] load_avg: 0.0077561348723302, runable_jobs_avg: 2.0420875420875, alive_threads_avg: 274.94949494949
2024-07-18T04:23:08.761358130Z 2024/07/18 04:23:08 [debug] 1343#0: *10341 [proxy-wasm] "log_filter" filter reusing instance (ictx: 000055C697715E50, store: 000055C6928974B0)
2024-07-18T04:23:08.761389964Z 2024/07/18 04:23:08 [info] 1343#0: *10341 [proxy-wasm]["log_filter" #1] create_http_context_2: '1', client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "bb517f237a4cfd25b3649704c720653e"
2024-07-18T04:23:08.761395346Z 2024/07/18 04:23:08 [debug] 1343#0: *10341 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_request_headers" step in "rewrite" phase
2024-07-18T04:23:08.761398343Z 2024/07/18 04:23:08 [info] 1343#0: *10341 [proxy-wasm]["log_filter" #1] on_http_request_headers: 1 << , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "bb517f237a4cfd25b3649704c720653e"
2024-07-18T04:23:08.876612333Z 2024/07/18 04:23:08 [debug] 1343#0: *10341 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_request_body" step in "content" phase
...
Then it eventually hangs on context #2:
2024/07/18 04:23:31 [info] 1347#0: *10363 [proxy-wasm]["log_filter" #1] on_log: 1 << while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "72ee7c2888c28b78b047f359377b5f79"
2024-07-18T04:23:31.467059789Z 2024/07/18 04:23:31 [info] 1347#0: *10363 [proxy-wasm]["log_filter" #1] on_log: 1 >> while logging request, client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", upstream: "https://104.18.6.192:443/v1/chat/completions", host: "kong-wasm:8000", request_id: "72ee7c2888c28b78b047f359377b5f79"
2024-07-18T04:23:31.467063487Z 2024/07/18 04:23:31 [debug] 1347#0: *10363 [proxy-wasm]["log_filter" #1] filter 1/1 resuming "on_done" step in "done" phase
2024-07-18T04:23:31.467066370Z 2024/07/18 04:23:31 [debug] 1347#0: *10363 [proxy-wasm]["log_filter" #1] filter 1/1 finalizing context
2024-07-18T04:23:31.467069181Z 2024/07/18 04:23:31 [debug] 1347#0: *10363 [proxy-wasm] "log_filter" filter freeing context #1 (1/1)
2024-07-18T04:23:31.467081658Z 172.28.0.4 - - [18/Jul/2024:04:23:31 +0000] "POST /openai/v1/chat/completions HTTP/1.1" 200 118564 "-" "OpenAI/v1 PythonBindings/0.28.1" kong_request_id: "72ee7c2888c28b78b047f359377b5f79"
2024-07-18T04:23:31.477568781Z 2024/07/18 04:23:31 [info] 1347#0: *10363 client 172.28.0.4 closed keepalive connection
2024-07-18T04:23:31.619426844Z 2024/07/18 04:23:31 [debug] 1341#0: *10367 [proxy-wasm] "log_filter" filter reusing instance (ictx: 000055C697715E50, store: 000055C6928974B0)
2024-07-18T04:23:31.619455832Z 2024/07/18 04:23:31 [info] 1341#0: *10367 [proxy-wasm]["log_filter" #2] create_http_context_2: '2', client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "e12ef249f49fef61b924ac637d7a905a"
2024-07-18T04:23:31.619460842Z 2024/07/18 04:23:31 [debug] 1341#0: *10367 [proxy-wasm]["log_filter" #2] filter 1/1 resuming "on_request_headers" step in "rewrite" phase
2024-07-18T04:23:31.619464145Z 2024/07/18 04:23:31 [info] 1341#0: *10367 [proxy-wasm]["log_filter" #2] on_http_request_headers: 2 << , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "e12ef249f49fef61b924ac637d7a905a"
2024-07-18T04:23:31.621797678Z 2024/07/18 04:23:31 [debug] 1341#0: *10367 [proxy-wasm]["log_filter" #2] filter 1/1 resuming "on_request_body" step in "content" phase
2024-07-18T04:23:31.621825808Z 2024/07/18 04:23:31 [info] 1341#0: *10367 [proxy-wasm]["log_filter" #2] on_http_request_body: 2, end_of_stream: true <<< , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "e12ef249f49fef61b924ac637d7a905a"
2024-07-18T04:23:31.621831175Z 2024/07/18 04:23:31 [info] 1341#0: *10367 [proxy-wasm]["log_filter" #2] on_http_request_body: 2, end_of_stream: true >>> , client: 172.28.0.4, server: kong, request: "POST /openai/v1/chat/completions HTTP/1.1", host: "kong-wasm:8000", request_id: "e12ef249f49fef61b924ac637d7a905a"
Btw, I'm using this project on GitHub for testing: reworkd/AgentGPT.
Could we see the full logs? It is worth noting that each worker has its own context_id counter, so context #1 in pid 1341 is not the same as context #1 in pid 1347. Mostly I wonder what happens on connection *10323 after the last callback invocation, so the full logs may be insightful.
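Given the per-worker counters, one way to triage the full logs mechanically is to key contexts by (worker pid, connection, context id) and flag any that are created but never freed. A rough sketch; the regexes below are guesses based on the log shapes quoted in this thread, not an official log format:

```python
import re
from collections import OrderedDict

# Patterns matching the "create_http_context" and "freeing context" lines
# as they appear in the excerpts above (assumed shape, not a Kong contract).
CTX_NEW = re.compile(r'(\d+)#0: \*(\d+) \[proxy-wasm\]\["[^"]+" #(\d+)\] create_http_context')
CTX_FREE = re.compile(r'(\d+)#0: \*(\d+) \[proxy-wasm\] "[^"]+" filter freeing context #(\d+)')

def hanging_contexts(lines):
    """Return (pid, connection, ctx_id) tuples created but never freed,
    keyed per worker so ctx #1 in pid 1341 != ctx #1 in pid 1347."""
    open_ctxs = OrderedDict()
    for line in lines:
        m = CTX_NEW.search(line)
        if m:
            open_ctxs[m.groups()] = True
            continue
        m = CTX_FREE.search(line)
        if m:
            open_ctxs.pop(m.groups(), None)
    return list(open_ctxs)
```

Run over the traces above, this would leave the context on connection *10323 as the only unfreed entry.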
Unfortunately, today 3.7.1 is behaving exactly like 3.6.1 did before. I've attached a few sample runs: proxy_wasm_traces.txt
Are there no other logs after the last line of this file? What if you send another request while it hangs? Do more logs appear?
Nothing after that. FYI, in the attached file you can see several separate traces; they are separated by multiple newlines. I restarted Kong to get each trace. Also FYI, Kong is running in Docker.
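Since the attachment concatenates several runs separated by blank lines, a small helper (my naming, purely illustrative) can split it back into per-restart traces:

```python
import re

def split_traces(text):
    """Split a concatenated trace file on runs of blank lines,
    yielding one string per Kong restart."""
    parts = [p.strip("\n") for p in re.split(r"\n\s*\n", text)]
    return [p for p in parts if p.strip()]
```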
So far I have not been able to reproduce this in either the Kong 3.6.1 or 3.7.1 images. I've been running the above filter in those Docker images, driving it with wrk and request bodies, and have not encountered any issue. Since I don't have an OpenAI key (no phone number to give them), I tested with another upstream (https://httpbin.org). Could you try with another upstream as well and see if the problem persists?
My goal was to test with a real application connected to the OpenAI API; I don't have a test suite ready to verify with httpbin.
However, I did deploy exactly the same plugin to an Envoy Docker container, configured the app to proxy requests through the Envoy container instead of the Kong container, and made no other changes. The exact same plugin works in Envoy without hanging. Then I stopped the Envoy container, started the Kong container, and Kong with the wasm plugin stopped responding after a few request/responses.
I will try with another upstream later.
Perhaps the OpenAI service definition in Kong requires some special configuration? I shared the config I'm using above.
Nothing seems wrong with the configuration. Could we also see the request the client is making? Headers and body, so we can try to spot anything that could trigger this.
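For reference, the traffic in the logs (OpenAI/v1 PythonBindings/0.28.1 posting to /openai/v1/chat/completions through the gateway) can be approximated with a minimal client sketch. The model, body fields, and Authorization value are assumptions; the reporter's actual request isn't shown in this thread:

```python
import http.client
import json

def chat_completion_payload(prompt, stream=True):
    """Body in the shape an OpenAI 0.28-style client sends; the model
    and field values here are assumptions, not the reporter's request."""
    return json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    })

def send_through_gateway(prompt, host="kong-wasm", port=8000):
    """POST through the gateway the way the logs show (host/port taken
    from the log lines; the Authorization value is a placeholder)."""
    conn = http.client.HTTPConnection(host, port)
    conn.request(
        "POST", "/openai/v1/chat/completions",
        body=chat_completion_payload(prompt),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer <OPENAI_API_KEY>",  # placeholder
        },
    )
    return conn.getresponse().status
```

Note that with stream=True the upstream sends a chunked response, which is what drives the repeated on_http_response_body callbacks visible in the traces.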
Closing: unable to reproduce, and lack of activity.
I've built a very simple plugin for Kong 3.6.1:
This plugin hangs after a new instance of this context is created. Relevant logs:
After the last on_http_request_body callback, the plugin 'hangs' and receives no more callbacks.

Kong config file:
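The reporter's config file itself isn't reproduced in this chunk. For context, enabling wasm filters in Kong generally involves settings along these lines in kong.conf; the path and values here are illustrative, not the reporter's:

```
wasm = on
wasm_filters_path = /usr/local/kong/wasm
```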