Kong / charts

Helm chart for Kong

How to increase kong's timeout time #1024

Closed: jy-kkkim closed this issue 3 months ago

jy-kkkim commented 4 months ago

I want to make sure that when I call my service through Kong, I don't get an error if the response takes more than 60 seconds. However, it seems that Kong's default setting is a 60-second timeout. I configured Kong with the Helm chart and modified the upstream_keepalive_idle_timeout item under env, but the timeout does not increase.

I am attaching my Kong Helm chart values below. Please help me...

```yaml
admin:
  annotations:
    konghq.com/protocol: https
  enabled: true
  http:
    enabled: true
    servicePort: 8001
    containerPort: 8001
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      konghq.com/protocols: https
      konghq.com/strip-path: "true"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/app-root: /
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/permanent-redirect-code: "301"
    enabled: false
    hostname: kong.127-0-0-1.nip.io
    path: /api
    tls: quickstart-kong-admin-cert
  tls:
    containerPort: 8444
    enabled: false
    parameters:
    - http2
    servicePort: 8444
  type: NodePort
affinity: {}
certificates:
  enabled: false
  issuer: quickstart-kong-selfsigned-issuer
  cluster:
    enabled: true
  admin:
    enabled: true
    commonName: kong.127-0-0-1.nip.io
  portal:
    enabled: true
    commonName: developer.127-0-0-1.nip.io
  proxy:
    enabled: true
    commonName: 127-0-0-1.nip.io
    dnsNames:
    - '*.127-0-0-1.nip.io'
cluster:
  enabled: false
  labels:
    konghq.com/service: cluster
  tls:
    containerPort: 8005
    enabled: true
    servicePort: 8005
  type: ClusterIP
clustertelemetry:
  enabled: false
  tls:
    containerPort: 8006
    enabled: true
    servicePort: 8006
    type: ClusterIP
deployment:
  kong:
    daemonset: false
    enabled: true
enterprise:
  enabled: false
  license_secret: kong-enterprise-license
  portal:
    enabled: true
  rbac:
    admin_api_auth: basic-auth
    admin_gui_auth_conf_secret: kong-config-secret
    enabled: true
    session_conf_secret: kong-config-secret
  smtp:
    enabled: false
  vitals:
    enabled: true
env:
  database: "postgres"
  # the chart uses the traditional router (for Kong 3.x+) because the ingress
  # controller generates traditional routes. if you do not use the controller,
  # you may set this to "traditional_compatible" or "expression" to use the new
  # DSL-based router
  router_flavor: "traditional"
  nginx_worker_processes: "2"
  proxy_access_log: /dev/stdout
  admin_access_log: /dev/stdout
  admin_gui_access_log: /dev/stdout
  portal_api_access_log: /dev/stdout
  proxy_error_log: /dev/stderr
  admin_error_log: /dev/stderr
  admin_gui_error_log: /dev/stderr
  portal_api_error_log: /dev/stderr
  prefix: /kong_prefix/
  upstream_keepalive_idle_timeout: 300
image:
  repository: kong/kong-gateway
  tag: "3.3"
ingressController:
  enabled: true
  env:
    kong_admin_tls_skip_verify: true
  image:
    repository: docker.io/kong/kubernetes-ingress-controller
    tag: "2.7"
  ingressClass: default
  installCRDs: false
manager:
  enabled: false
  http:
    containerPort: 8002
    enabled: true
    servicePort: 8002
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    enabled: false
    hostname: kong.127-0-0-1.nip.io
    path: /
    tls: quickstart-kong-admin-cert
  tls:
    containerPort: 8445
    enabled: true
    parameters:
    - http2
    servicePort: 8445
  type: NodePort
migrations:
  enabled: true
  postUpgrade: true
  preUpgrade: true
namespace: kong
podAnnotations:
  kuma.io/gateway: enabled
portal:
  enabled: false
  http:
    containerPort: 8003
    enabled: false
    servicePort: 8003
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      konghq.com/protocols: https
      konghq.com/strip-path: "false"
      kubernetes.io/ingress.class: default
    enabled: false
    hostname: developer.127-0-0-1.nip.io
    path: /
    tls: quickstart-kong-portal-cert
  tls:
    containerPort: 8446
    enabled: false
    parameters:
    - http2
    servicePort: 8446
  type: NodePort
portalapi:
  enabled: false
  http:
    enabled: true
    servicePort: 8004
    containerPort: 8004
  ingress:
    annotations:
      konghq.com/https-redirect-status-code: "301"
      konghq.com/protocols: https
      konghq.com/strip-path: "true"
      kubernetes.io/ingress.class: default
      nginx.ingress.kubernetes.io/app-root: /
    enabled: false
    hostname: developer.127-0-0-1.nip.io
    path: /api
    tls: quickstart-kong-portal-cert
  tls:
    containerPort: 8447
    enabled: false
    parameters:
    - http2
    servicePort: 8447
  type: NodePort
postgresql:
  enabled: true
  auth:
    database: kong
    username: kong
proxy:
  annotations:
    prometheus.io/port: "9542"
    prometheus.io/scrape: "true"
  enabled: true
  http:
    containerPort: 8080
    enabled: true
    hostPort: 80
  ingress:
    enabled: false
  labels:
    enable-metrics: true
  tls:
    containerPort: 8443
    enabled: true
    hostPort: 443
  type: LoadBalancer

replicaCount: 1
secretVolumes: []
status:
  enabled: true
  http:
    containerPort: 8100
    enabled: true
  tls:
    containerPort: 8543
    enabled: false
nodeSelector:
  core-type: cpu
```
rainest commented 3 months ago

You'll want to apply https://docs.konghq.com/kubernetes-ingress-controller/latest/reference/annotations/#konghqcomreadtimeout to the Service in question.
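
For example, here is a minimal sketch of that annotation applied to the Kubernetes Service that Kong routes to (the Service name, namespace, ports, and timeout values are placeholders for your actual backend). The values are in milliseconds, and Kong's default for each timeout is 60000 (60 seconds), which is the limit you're hitting:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-backend            # hypothetical name of the Service Kong routes to
  namespace: default          # hypothetical namespace
  annotations:
    # Timeouts are in milliseconds; Kong's default for each is 60000 (60 s).
    konghq.com/read-timeout: "300000"     # wait up to 5 minutes for the upstream response
    konghq.com/write-timeout: "300000"    # allow up to 5 minutes for sending data
    konghq.com/connect-timeout: "60000"   # keep the default connect timeout
spec:
  selector:
    app: my-backend
  ports:
  - name: http
    port: 80
    targetPort: 8080
```

With the ingress controller from your values above, these annotations are translated into the read_timeout, write_timeout, and connect_timeout fields of the Kong service that fronts your backend.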

The keepalive timeout instead controls how long the proxy keeps an idle connection to an upstream service open. Kong normally leaves connections open because most environments send frequent requests to the same service, and reusing an open connection after a request completes improves performance, since Kong doesn't need to redo the TCP and TLS handshakes.
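
For contrast, a minimal sketch of where that setting lives: upstream_keepalive_idle_timeout is a gateway-wide Kong configuration value set under env in the chart values (as in the values you attached), and it only controls how long an idle pooled connection is kept around, not how long Kong waits for a response:

```yaml
# Chart values excerpt (not a complete values.yaml).
env:
  # Seconds an idle, already-completed upstream connection stays in the pool
  # for reuse; this does not extend the per-request response wait.
  upstream_keepalive_idle_timeout: 300
```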

I'll close this out, but if you have any further questions please respond back and we can reopen it.