Kong / kubernetes-ingress-controller

:gorilla: Kong for Kubernetes: The official Ingress Controller for Kubernetes.
https://docs.konghq.com/kubernetes-ingress-controller/
Apache License 2.0

Basic authentication doesn't work on KONG in kubernetes cluster #808

Closed SiarheiBortnik closed 4 years ago

SiarheiBortnik commented 4 years ago

Summary

We run Kong in our Kubernetes cluster and deploy it with Helm. We need basic authentication on our Prometheus Ingress resource. The Kong controller is deployed in the dev2 namespace and Prometheus in the monitoring namespace.

Kong Ingress controller version: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1

Kong or Kong Enterprise version: kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition:2.1.0.0-beta1-alpine

Kubernetes version: 1.15.6

Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-13T17:40:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11", GitCommit:"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede", GitTreeState:"clean", BuildDate:"2020-03-13T17:40:34Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

Environment

What happened

We have this Ingress resource:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    konghq.com/plugins: basic-auth
    kubernetes.io/ingress.class: kong-dev2
  generation: 1
  labels:
    app: prometheus-operator-prometheus
    app.kubernetes.io/managed-by: Helm
    chart: prometheus-operator-7.4.0
    heritage: Helm
    release: prometheus-operator
  name: prometheus-operator-prometheus
  namespace: monitoring
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: prometheus-operator-prometheus
          servicePort: 9090
        path: /prometheus/
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-secret
status:
  loadBalancer:
    ingress:
    - ip: 0.0.0.0

And the resources needed for basic authentication:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: basic-auth
  namespace: monitoring
consumerRef: basic-auth
config:
  hide_credentials: true
plugin: basic-auth
" | kubectl create -f -

echo "
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: basic-auth
  namespace: monitoring
username: user
credentials:
- prometheus-secret
" | kubectl create -f -

echo '
apiVersion: v1
kind: Secret
metadata:
  name: prometheus-secret
  namespace: monitoring
stringData:
  kongCredType: basic-auth
  username: user
data:
  password: pass' | kubectl create -f -
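
One detail worth double-checking in the Secret above: values under `data:` are treated as base64 and decoded on use, while only `stringData:` accepts plaintext. The string `pass` happens to be decodable base64, so Kubernetes accepts it, but the stored password then becomes the decoded bytes rather than the literal string `pass`, which could explain a credential mismatch. A minimal sketch of the encoding, plus a `--from-literal` variant (which encodes for you); names and namespace are taken from the manifests above:

```shell
# In a Kubernetes Secret, values under `data:` are base64-decoded on use,
# while `stringData:` takes plaintext. The plaintext `pass` encodes to:
printf 'pass' | base64    # prints: cGFzcw==

# `kubectl create secret ... --from-literal` does the encoding for you
# (shown as a comment; it needs a live cluster):
#   kubectl create secret generic prometheus-secret -n monitoring \
#     --from-literal=kongCredType=basic-auth \
#     --from-literal=username=user \
#     --from-literal=password=pass
```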

I also tried the old way, without the Secret and without the credentials: section in the KongConsumer, using:

echo "
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: basic-auth
  namespace: dev2
consumerRef: basic-auth
type: basic-auth
config:
  username: user
  password: pass
" | kubectl create -f -

but the result was the same. After logging in with username "user" and password "pass", I get the error {"message":"Invalid authentication credentials"}.
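
A side note on the deprecated KongCredential variant above: the controller generally resolves consumerRef within the credential's own namespace, and here the credential sits in dev2 while the KongConsumer sits in monitoring. A sketch with the namespaces aligned, purely illustrative rather than a confirmed fix:

```shell
# Hypothetical variant: KongCredential colocated with the KongConsumer
# it references (namespace `monitoring`, as in the manifests above).
# Needs a live cluster with the Kong CRDs installed.
echo "
apiVersion: configuration.konghq.com/v1
kind: KongCredential
metadata:
  name: basic-auth
  namespace: monitoring
consumerRef: basic-auth
type: basic-auth
config:
  username: user
  password: pass
" | kubectl create -f -
```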

Expected behavior

Successful login with username "user" and password "pass".

hbagdi commented 4 years ago

Here are the commands I execute and things work fine:

$ kubectl apply -f https://bit.ly/k8s-httpbin                                                                                                          

$ echo '
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/strip-path: "true"
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: httpbin
          servicePort: 80
' | kubectl apply -f -

➜  kic git:(master) http :8000
HTTP/1.1 404 Not Found
Connection: keep-alive
Content-Length: 48
Content-Type: application/json; charset=utf-8
Date: Wed, 26 Aug 2020 17:25:52 GMT
Server: kong/2.0.4
X-Kong-Response-Latency: 1

{
    "message": "no Route matched with those values"
}

➜  kic git:(master) export PROXY_IP=localhost:8000                                                                                                                       
➜  kic git:(master) curl -i $PROXY_IP/foo/status/200                                                                                                                     
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Wed, 26 Aug 2020 17:26:10 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 5
X-Kong-Proxy-Latency: 26
Via: kong/2.0.4

➜  kic git:(master) echo "
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-auth
plugin: basic-auth
" | kubectl apply -f -

kongplugin.configuration.konghq.com/httpbin-auth created

➜  kic git:(master) echo '
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo
  annotations:
    konghq.com/strip-path: "true"
    konghq.com/plugins: httpbin-auth
spec:
  rules:
  - http:
      paths:
      - path: /foo
        backend:
          serviceName: httpbin
          servicePort: 80
' | kubectl apply -f -
ingress.extensions/demo configured
➜  kic git:(master) curl -i $PROXY_IP/foo/status/200
HTTP/1.1 401 Unauthorized
Date: Wed, 26 Aug 2020 17:26:42 GMT
Content-Type: application/json; charset=utf-8
Connection: keep-alive
WWW-Authenticate: Basic realm="kong"
Content-Length: 26
X-Kong-Response-Latency: 0
Server: kong/2.0.4

{"message":"Unauthorized"}

➜  kic git:(master) echo "apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: harry
username: harry" | kubectl apply -f -

kongconsumer.configuration.konghq.com/harry created

➜  kic git:(master) kubectl create secret generic harry-basicauth  \                                                                                                  
  --from-literal=kongCredType=basic-auth  \
  --from-literal=username=harry \
  --from-literal=password=foo
secret/harry-basicauth created

➜  kic git:(master) echo "apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: harry
username: harry
credentials:
- harry-basicauth" | kubectl apply -f -

kongconsumer.configuration.konghq.com/harry configured

➜  kic git:(master) curl -i -u 'harry:foo' $PROXY_IP/foo/status/200
HTTP/1.1 200 OK
Content-Type: text/html; charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: gunicorn/19.9.0
Date: Wed, 26 Aug 2020 17:28:36 GMT
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
X-Kong-Upstream-Latency: 4
X-Kong-Proxy-Latency: 1
Via: kong/2.0.4  
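
As a way to rule out client-side issues: `curl -u` simply sends an `Authorization: Basic <base64("user:password")>` header (RFC 7617), so the request can be reproduced with an explicit header. A small sketch using the credentials from this session:

```shell
# Basic auth token is base64 of "user:password" (RFC 7617).
token="$(printf 'harry:foo' | base64)"
echo "$token"    # prints: aGFycnk6Zm9v

# Equivalent requests (PROXY_IP as exported earlier in this session;
# shown as comments since they need a running proxy):
#   curl -i -u 'harry:foo' "$PROXY_IP/foo/status/200"
#   curl -i -H "Authorization: Basic $token" "$PROXY_IP/foo/status/200"
```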

SiarheiBortnik commented 4 years ago

We updated Kong with a new image; currently we use kong-docker-kong-enterprise-edition-docker.bintray.io/kong-enterprise-edition:2.1.3.0-alpine. I tried to apply the same setup to a different resource, with everything in one namespace:

echo '
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: httpbin-auth
  namespace: kong
plugin: basic-auth
' | kubectl create -f -

Then I added the annotation to my Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    konghq.com/plugins: httpbin-auth
    kubernetes.io/ingress.class: kong-kong
    meta.helm.sh/release-name: vault-kong
    meta.helm.sh/release-namespace: kong
  labels:
....
echo '
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: user
  namespace: kong
username: user
' | kubectl create -f -

kubectl create secret generic user-basicauth -n kong --from-literal=kongCredType=basic-auth --from-literal=username=user --from-literal=password=foo

echo '
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: user
  namespace: kong
username: user
credentials:
- user-basicauth' | kubectl apply -f -

And on checking:

curl -i -u 'user:foo' https://my-ingress-url-path/
HTTP/2 401
date: Thu, 27 Aug 2020 08:02:46 GMT
content-type: application/json; charset=utf-8
content-length: 48
vary: Origin
access-control-allow-origin: *
x-kong-response-latency: 1
server: kong/2.1.3.0-enterprise-edition

hbagdi commented 4 years ago

Do you see anything odd in the controller logs?

Please ensure that you are creating the Ingress and KongPlugin resource in the same namespace.
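
A quick way to check that colocation, sketched below; the namespace `kong` and the secret name match the manifests earlier in the thread, and the commands need a live cluster with the Kong CRDs installed:

```shell
# Confirm the Ingress, KongPlugin, KongConsumer and the credential Secret
# all live in the namespace the controller is watching.
kubectl get ingress,kongplugin,kongconsumer -n kong

# Decode what Kong will actually receive from the Secret:
kubectl get secret user-basicauth -n kong \
  -o jsonpath='{.data.kongCredType}' | base64 -d   # expect: basic-auth
```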

SiarheiBortnik commented 4 years ago

These are the proxy logs from when I try to log in:

2020/08/28 06:40:58 [notice] 25#0: *333852 [lua] cache.lua:418: rediscover(): [openid-connect] openid connect rediscovery was done recently (30 seconds until next rediscovery), client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
2020/08/28 06:40:58 [notice] 25#0: *333852 [lua] cache.lua:379: purge(): [DB cache] purging (local) cache, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
2020/08/28 06:40:58 [notice] 25#0: *333852 [lua] cache.lua:379: purge(): [DB cache] purging (local) cache, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
127.0.0.1 - - [28/Aug/2020:06:40:58 +0000] "POST /config?check_hash=1 HTTP/2.0" 201 218400 "-" "Go-http-client/2.0"
2020/08/28 06:41:00 [error] 25#0: *334114 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/vitals/init.lua:917: attempt to call a nil value stack traceback: coroutine 0: /usr/local/share/lua/5.1/kong/vitals/init.lua: in function 'flush_vitals_cache' /usr/local/share/lua/5.1/kong/vitals/init.lua:995: in function 'flush_counters' /usr/local/share/lua/5.1/kong/vitals/init.lua:390: in function </usr/local/share/lua/5.1/kong/vitals/init.lua:361>, context: ngx.timer
10.240.3.83 - - [28/Aug/2020:06:41:05 +0000] "GET /v1/sys/health?standbycode=200&sealedcode=200&uninitcode=200&drsecondarycode=200&performancestandbycode=200 HTTP/2.0" 401 26 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui/vault/auth?with=token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.3.83 - - [28/Aug/2020:06:41:05 +0000] "GET /v1/sys/seal-status HTTP/2.0" 401 26 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui/vault/auth?with=token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
2020/08/28 06:41:10 [error] 25#0: *335040 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/vitals/init.lua:917: attempt to call a nil value stack traceback: coroutine 0: /usr/local/share/lua/5.1/kong/vitals/init.lua: in function 'flush_vitals_cache' /usr/local/share/lua/5.1/kong/vitals/init.lua:995: in function 'flush_counters' /usr/local/share/lua/5.1/kong/vitals/init.lua:390: in function </usr/local/share/lua/5.1/kong/vitals/init.lua:361>, context: ngx.timer
10.240.3.83 - user [28/Aug/2020:06:41:11 +0000] "GET /v1/sys/seal-status HTTP/2.0" 401 48 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui/vault/auth?with=token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.3.83 - user [28/Aug/2020:06:41:11 +0000] "GET /v1/sys/health?standbycode=200&sealedcode=200&uninitcode=200&drsecondarycode=200&performancestandbycode=200 HTTP/2.0" 401 48 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui/vault/auth?with=token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.3.83 - - [28/Aug/2020:06:41:14 +0000] "GET /ui/vault/auth?with=token HTTP/2.0" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.3.83 - user [28/Aug/2020:06:41:19 +0000] "GET /ui/vault/auth?with=token HTTP/2.0" 401 48 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.3.83 - - [28/Aug/2020:06:41:19 +0000] "GET /favicon.ico HTTP/2.0" 401 26 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui/vault/auth?with=token" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
2020/08/28 06:41:20 [error] 25#0: *335970 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/vitals/init.lua:917: attempt to call a nil value stack traceback: coroutine 0: /usr/local/share/lua/5.1/kong/vitals/init.lua: in function 'flush_vitals_cache' /usr/local/share/lua/5.1/kong/vitals/init.lua:995: in function 'flush_counters' /usr/local/share/lua/5.1/kong/vitals/init.lua:390: in function </usr/local/share/lua/5.1/kong/vitals/init.lua:361>, context: ngx.timer

These are the ingress-controller logs; the last message shows the time when I configured the annotation on the Ingress:

I0828 06:33:58.710195 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:39:01.903400 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:39:17.911470 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:39:21.255444 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:39:24.614348 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:39:27.929064 1 kong.go:68] no configuration change, skipping sync to Kong
I0828 06:40:58.330968 1 kong.go:81] successfully synced configuration to Kong

All resources (KongPlugin, KongConsumer and the Secret) are in the same namespace.

hbagdi commented 4 years ago

Can you remove the KongConsumer and the Secret and start over?

SiarheiBortnik commented 4 years ago

I did the following steps:

  1. KongConsumer deleted
  2. Secret deleted
  3. Add annotation to ingress
  4. Tried to log in without entering password and login.

Proxy logs:

127.0.0.1 - - [29/Aug/2020:08:38:11 +0000] "POST /config?check_hash=1 HTTP/2.0" 201 218438 "-" "Go-http-client/2.0"
2020/08/29 09:18:17 [warn] 25#0: *332508 [kong] plugins.lua:126 DEPRECATED: /plugins/schema/:name endpoint is deprecated, please use /schemas/plugins/:name instead., client: 127.0.0.1, server: kong_admin, request: "GET /plugins/schema/basic-auth HTTP/2.0", host: "localhost:8444"
127.0.0.1 - - [29/Aug/2020:09:18:17 +0000] "GET /plugins/schema/basic-auth HTTP/2.0" 200 100 "-" "Go-http-client/2.0"
2020/08/29 09:18:18 [notice] 25#0: *332508 [lua] cache.lua:217: discover(): [openid-connect] loading configuration for https://kong-gsc-ingress.northeurope.cloudapp.azure.com/authentication/.well-known/openid-configuration using discovery, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
10.240.2.218 - - [29/Aug/2020:09:18:18 +0000] "GET /authentication/.well-known/openid-configuration HTTP/1.1" 200 2202 "-" "lua-resty-http/0.14 (Lua) ngx_lua/10015"
2020/08/29 09:18:18 [notice] 25#0: *332508 [lua] cache.lua:269: discover(): [openid-connect] loading jwks from https://kong-gsc-ingress.northeurope.cloudapp.azure.com/authentication/.well-known/openid-configuration/jwks, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
10.240.2.218 - - [29/Aug/2020:09:18:18 +0000] "GET /authentication/.well-known/openid-configuration/jwks HTTP/1.1" 200 453 "-" "lua-resty-http/0.14 (Lua) ngx_lua/10015"
2020/08/29 09:18:18 [notice] 25#0: *332508 [lua] cache.lua:418: rediscover(): [openid-connect] openid connect rediscovery was done recently (30 seconds until next rediscovery), client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
[the rediscovery notice above repeats many more times]
2020/08/29 09:18:18 [notice] 25#0: *332508 [lua] cache.lua:379: purge(): [DB cache] purging (local) cache, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
2020/08/29 09:18:18 [notice] 25#0: *332508 [lua] cache.lua:379: purge(): [DB cache] purging (local) cache, client: 127.0.0.1, server: kong_admin, request: "POST /config?check_hash=1 HTTP/2.0", host: "localhost:8444"
127.0.0.1 - - [29/Aug/2020:09:18:18 +0000] "POST /config?check_hash=1 HTTP/2.0" 201 218740 "-" "Go-http-client/2.0"
2020/08/29 09:18:20 [error] 25#0: *332722 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/vitals/init.lua:917: attempt to call a nil value stack traceback: coroutine 0: /usr/local/share/lua/5.1/kong/vitals/init.lua: in function 'flush_vitals_cache' /usr/local/share/lua/5.1/kong/vitals/init.lua:995: in function 'flush_counters' /usr/local/share/lua/5.1/kong/vitals/init.lua:390: in function </usr/local/share/lua/5.1/kong/vitals/init.lua:361>, context: ngx.timer
10.240.2.218 - - [29/Aug/2020:09:18:24 +0000] "GET / HTTP/2.0" 401 26 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
2020/08/29 09:18:28 [error] 25#0: *332966 [kong] access.lua:53 [basic-auth] header has unrecognized format, client: 10.240.2.218, server: kong, request: "GET / HTTP/2.0", host: "kong-gsc-ingress.northeurope.cloudapp.azure.com"
10.240.2.218 - - [29/Aug/2020:09:18:28 +0000] "GET / HTTP/2.0" 401 48 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
10.240.2.218 - - [29/Aug/2020:09:18:28 +0000] "GET /favicon.ico HTTP/2.0" 401 26 "https://kong-gsc-ingress.northeurope.cloudapp.azure.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.135 Safari/537.36"
2020/08/29 09:18:30 [error] 25#0: *333219 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/kong/vitals/init.lua:917: attempt to call a nil value stack traceback: coroutine 0: /usr/local/share/lua/5.1/kong/vitals/init.lua: in function 'flush_vitals_cache' /usr/local/share/lua/5.1/kong/vitals/init.lua:995: in function 'flush_counters' /usr/local/share/lua/5.1/kong/vitals/init.lua:390: in function </usr/local/share/lua/5.1/kong/vitals/init.lua:361>, context: ngx.timer

Ingress controller logs:

I0829 09:10:34.595512 1 kong.go:68] no configuration change, skipping sync to Kong
I0829 09:17:19.694723 1 kong.go:68] no configuration change, skipping sync to Kong
I0829 09:17:22.990475 1 kong.go:68] no configuration change, skipping sync to Kong
I0829 09:18:18.272631 1 kong.go:81] successfully synced configuration to Kong

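
Since the controller reports a successful sync, querying Kong's admin API (listening on localhost:8444 per the logs above) shows whether the consumer and its basic-auth credential actually reached Kong. A sketch; the pod name and the `proxy` container name are placeholders to substitute for your deployment:

```shell
# POD is a placeholder; substitute the real Kong pod name.
POD=kong-kong-xxxxxxxxxx-xxxxx

# List consumers known to Kong:
kubectl exec -n kong "$POD" -c proxy -- \
  curl -sk https://localhost:8444/consumers

# List basic-auth credentials attached to the consumer named `user`:
kubectl exec -n kong "$POD" -c proxy -- \
  curl -sk https://localhost:8444/consumers/user/basic-auth
```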
hbagdi commented 4 years ago

Would it be possible for you to start fresh in a local or dev environment and see if the issue persists? There is nothing here indicating a problem with the controller.

morals415 commented 4 years ago

@SiarheiBortnik before we go to the next steps, can you do as Harry suggests and start a fresh deployment from scratch to see whether the issue persists? If it does, I will work to get us on a call, but I would first like your next update here to show the results from a local or development environment. Thanks.

Jorgevillada commented 4 years ago

Same problem. Could this be https://github.com/Kong/kong/issues/4542#issuecomment-486828860 or https://github.com/Kong/kubernetes-ingress-controller/issues/865?

With 1.9.1 it works fine.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kong-kong
  namespace: gateway
  selfLink: /apis/apps/v1/namespaces/gateway/deployments/kong-kong
  uid: 8ea5bc8f-f1a5-45ed-a8bb-2a292d39c272
  resourceVersion: '258488'
  generation: 4
  creationTimestamp: '2020-09-24T01:23:12Z'
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: '2'
    env: prod
    helm.sh/chart: kong-1.9.1
  annotations:
    deployment.kubernetes.io/revision: '4'
    kuma.io/gateway: enabled
    meta.helm.sh/release-name: kong
    meta.helm.sh/release-namespace: gateway
    traffic.sidecar.istio.io/includeInboundPorts: ''
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: kong
      app.kubernetes.io/name: kong
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: app
        app.kubernetes.io/instance: kong
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kong
        app.kubernetes.io/version: '2'
        env: prod
        helm.sh/chart: kong-1.9.1
      annotations:
        prometheus.io/port: '9542'
        prometheus.io/scrape: 'true'
    spec:
      volumes:
        - name: kong-kong-prefix-dir
          emptyDir: {}
        - name: kong-kong-tmp
          emptyDir: {}
        - name: kong-kong-bash-wait-for-postgres
          configMap:
            name: kong-kong-bash-wait-for-postgres
            defaultMode: 493
        - name: custom-nginx-template-volume
          configMap:
            name: kong-kong-default-custom-server-blocks
            defaultMode: 420
      containers:
        - name: ingress-controller
          image: >-
            kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1
          args:
            - /kong-ingress-controller
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: CONTROLLER_ELECTION_ID
              value: kong-ingress-controller-leader-kong
            - name: CONTROLLER_INGRESS_CLASS
              value: kong
            - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
              value: 'true'
            - name: CONTROLLER_KONG_URL
              value: 'https://localhost:8444'
            - name: CONTROLLER_PUBLISH_SERVICE
              value: gateway/kong-kong-proxy
          resources: {}
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - name: proxy
          image: 'kong:2.1'
          ports:
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-tls
              containerPort: 8443
              protocol: TCP
            - name: metrics
              containerPort: 9542
              protocol: TCP
          env:
            - name: KONG_ADMIN_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_GUI_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_GUI_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_LISTEN
              value: '127.0.0.1:8444 http2 ssl'
            - name: KONG_CLUSTER_LISTEN
              value: 'off'
            - name: KONG_DATABASE
              value: 'off'
            - name: KONG_KIC
              value: 'on'
            - name: KONG_LUA_PACKAGE_PATH
              value: /opt/?.lua;/opt/?/init.lua;;
            - name: KONG_NGINX_HTTP_INCLUDE
              value: /kong/servers.conf
            - name: KONG_NGINX_WORKER_PROCESSES
              value: '1'
            - name: KONG_PLUGINS
              value: bundled
            - name: KONG_PORTAL_API_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PORTAL_API_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PORT_MAPS
              value: '80:8000, 443:8443'
            - name: KONG_PREFIX
              value: /kong_prefix/
            - name: KONG_PROXY_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PROXY_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PROXY_LISTEN
              value: '0.0.0.0:8000, 0.0.0.0:8443 http2 ssl'
            - name: KONG_STATUS_LISTEN
              value: '0.0.0.0:8100'
            - name: KONG_STREAM_LISTEN
              value: 'off'
            - name: KONG_NGINX_DAEMON
              value: 'off'
            - name: KONG_TRUSTED_IPS
              value: '0.0.0.0/0,::/0'
          resources: {}
          volumeMounts:
            - name: kong-kong-prefix-dir
              mountPath: /kong_prefix/
            - name: kong-kong-tmp
              mountPath: /tmp
            - name: custom-nginx-template-volume
              mountPath: /kong
          livenessProbe:
            httpGet:
              path: /status
              port: metrics
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /status
              port: metrics
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - /bin/sleep 15 && kong quit
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        part-of: system
      serviceAccountName: kong-kong
      serviceAccount: kong-kong
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 4
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-09-24T01:23:47Z'
      lastTransitionTime: '2020-09-24T01:23:47Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-09-24T01:37:16Z'
      lastTransitionTime: '2020-09-24T01:23:12Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "kong-kong-7b74c5c87c" has successfully progressed.

With chart 1.10.0 it does not:

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kong-kong
  namespace: gateway
  selfLink: /apis/apps/v1/namespaces/gateway/deployments/kong-kong
  uid: aaf30c74-4d73-4f39-94d9-ca0dd809f389
  resourceVersion: '239230'
  generation: 8
  creationTimestamp: '2020-09-23T21:39:16Z'
  labels:
    app.kubernetes.io/component: app
    app.kubernetes.io/instance: kong
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: kong
    app.kubernetes.io/version: '2'
    env: prod
    helm.sh/chart: kong-1.10.0
  annotations:
    deployment.kubernetes.io/revision: '4'
    kuma.io/gateway: enabled
    meta.helm.sh/release-name: kong
    meta.helm.sh/release-namespace: gateway
    traffic.sidecar.istio.io/includeInboundPorts: ''
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: app
      app.kubernetes.io/instance: kong
      app.kubernetes.io/name: kong
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: app
        app.kubernetes.io/instance: kong
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: kong
        app.kubernetes.io/version: '2'
        env: prod
        helm.sh/chart: kong-1.10.0
      annotations:
        prometheus.io/port: '9542'
        prometheus.io/scrape: 'true'
    spec:
      volumes:
        - name: kong-kong-prefix-dir
          emptyDir: {}
        - name: kong-kong-tmp
          emptyDir: {}
        - name: kong-kong-bash-wait-for-postgres
          configMap:
            name: kong-kong-bash-wait-for-postgres
            defaultMode: 493
        - name: custom-nginx-template-volume
          configMap:
            name: kong-kong-default-custom-server-blocks
            defaultMode: 420
      containers:
        - name: ingress-controller
          image: >-
            kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.10.0
          args:
            - /kong-ingress-controller
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
            - name: CONTROLLER_ELECTION_ID
              value: kong-ingress-controller-leader-kong
            - name: CONTROLLER_INGRESS_CLASS
              value: kong
            - name: CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY
              value: 'true'
            - name: CONTROLLER_KONG_URL
              value: 'https://localhost:8444'
            - name: CONTROLLER_PUBLISH_SERVICE
              value: gateway/kong-kong-proxy
          resources: {}
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
        - name: proxy
          image: 'kong:2.1'
          ports:
            - name: proxy
              containerPort: 8000
              protocol: TCP
            - name: proxy-tls
              containerPort: 8443
              protocol: TCP
            - name: metrics
              containerPort: 9542
              protocol: TCP
          env:
            - name: KONG_ADMIN_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_GUI_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_ADMIN_GUI_ERROR_LOG
              value: /dev/stderr
            - name: KONG_ADMIN_LISTEN
              value: '127.0.0.1:8444 http2 ssl'
            - name: KONG_CLUSTER_LISTEN
              value: 'off'
            - name: KONG_DATABASE
              value: 'off'
            - name: KONG_KIC
              value: 'on'
            - name: KONG_LUA_PACKAGE_PATH
              value: /opt/?.lua;/opt/?/init.lua;;
            - name: KONG_NGINX_HTTP_INCLUDE
              value: /kong/servers.conf
            - name: KONG_NGINX_WORKER_PROCESSES
              value: '1'
            - name: KONG_PLUGINS
              value: bundled
            - name: KONG_PORTAL_API_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PORTAL_API_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PORT_MAPS
              value: '80:8000, 443:8443'
            - name: KONG_PREFIX
              value: /kong_prefix/
            - name: KONG_PROXY_ACCESS_LOG
              value: /dev/stdout
            - name: KONG_PROXY_ERROR_LOG
              value: /dev/stderr
            - name: KONG_PROXY_LISTEN
              value: '0.0.0.0:8000, 0.0.0.0:8443 http2 ssl'
            - name: KONG_STATUS_LISTEN
              value: '0.0.0.0:8100'
            - name: KONG_STREAM_LISTEN
              value: 'off'
            - name: KONG_NGINX_DAEMON
              value: 'off'
            - name: KONG_TRUSTED_IPS
              value: '0.0.0.0/0,::/0'
          resources: {}
          volumeMounts:
            - name: kong-kong-prefix-dir
              mountPath: /kong_prefix/
            - name: kong-kong-tmp
              mountPath: /tmp
            - name: custom-nginx-template-volume
              mountPath: /kong
          livenessProbe:
            httpGet:
              path: /status
              port: metrics
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /status
              port: metrics
              scheme: HTTP
            initialDelaySeconds: 5
            timeoutSeconds: 5
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          lifecycle:
            preStop:
              exec:
                command:
                  - /bin/sh
                  - '-c'
                  - /bin/sleep 15 && kong quit
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      nodeSelector:
        part-of: system
      serviceAccountName: kong-kong
      serviceAccount: kong-kong
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
status:
  observedGeneration: 8
  replicas: 1
  updatedReplicas: 1
  readyReplicas: 1
  availableReplicas: 1
  conditions:
    - type: Progressing
      status: 'True'
      lastUpdateTime: '2020-09-23T22:10:51Z'
      lastTransitionTime: '2020-09-23T21:39:16Z'
      reason: NewReplicaSetAvailable
      message: ReplicaSet "kong-kong-79745d8fbc" has successfully progressed.
    - type: Available
      status: 'True'
      lastUpdateTime: '2020-09-24T00:01:49Z'
      lastTransitionTime: '2020-09-24T00:01:49Z'
      reason: MinimumReplicasAvailable
      message: Deployment has minimum availability.

helm repo add kong https://charts.konghq.com
helm install kong kong/kong --set ingressController.installCRDs=false --values https://bit.ly/2UAv0ZE --namespace=gateway
hbagdi commented 4 years ago

@Jorgevillada Can you please open a separate issue? There are too many details here and we will mix up two separate issues.

hbagdi commented 4 years ago

If you are using Controller 0.10, please make sure to add the annotation kubernetes.io/ingress.class: kong to your KongConsumer resource.

SiarheiBortnik commented 4 years ago

Hello guys! Sorry for the late reply; nevertheless, I checked what you asked. I completely removed all resources from the kong namespace and then deleted the namespace itself. Then I recreated the namespace and deployed only Kong plus a test Vault pod with an ingress resource. Unfortunately, the result is the same:

azureuser@k8s-master-13666935-0:~$ kubectl get secret -n kong user-basicauth -o yaml
apiVersion: v1
data:
  kongCredType: YmFzaWMtYXV0aA==
  password: Zm9v
  username: dXNlcg==
kind: Secret
metadata:
  creationTimestamp: "2020-10-06T13:40:48Z"
  name: user-basicauth
  namespace: kong
  resourceVersion: "49766699"
  selfLink: /api/v1/namespaces/kong/secrets/user-basicauth
  uid: d71fa556-1d12-4cf0-95eb-cd5cd96973b6
type: Opaque
______________________________________________________________________________________________________
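As a quick local sanity check, the base64 data fields in the Secret above decode to the expected credential values:

```shell
# Decode the Secret's data fields from the dump above.
echo 'YmFzaWMtYXV0aA==' | base64 -d; echo   # kongCredType -> basic-auth
echo 'dXNlcg==' | base64 -d; echo           # username     -> user
echo 'Zm9v' | base64 -d; echo               # password     -> foo
```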
azureuser@k8s-master-13666935-0:~$ kubectl get kongplugins.configuration.konghq.com -n kong basic-auth -o yaml
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  annotations:
    meta.helm.sh/release-name: kong-config-kong
    meta.helm.sh/release-namespace: kong
  creationTimestamp: "2020-10-06T12:58:36Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: Helm
  name: basic-auth
  namespace: kong
  resourceVersion: "49761862"
  selfLink: /apis/configuration.konghq.com/v1/namespaces/kong/kongplugins/basic-auth
  uid: f4e73c34-e39b-402f-b6d9-beca5a46b66f
plugin: basic-auth
______________________________________________________________________________________________________
azureuser@k8s-master-13666935-0:~$ kubectl get kongconsumers.configuration.konghq.com -n kong user  -o yaml
apiVersion: configuration.konghq.com/v1
credentials:
- user-basicauth
kind: KongConsumer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"configuration.konghq.com/v1","credentials":["user-basicauth"],"kind":"KongConsumer","metadata":{"annotations":{},"name":"user","namespace":"kong"},"username":"user"}
  creationTimestamp: "2020-10-06T13:37:24Z"
  generation: 2
  name: user
  namespace: kong
  resourceVersion: "49767134"
  selfLink: /apis/configuration.konghq.com/v1/namespaces/kong/kongconsumers/user
  uid: 420f5c62-bfdd-468b-944d-4b54faf54666
username: user
______________________________________________________________________________________________________
azureuser@k8s-master-13666935-0:~$ kubectl get ingress -n kong vault-kong  -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-prod
    konghq.com/plugins: basic-auth
    kubernetes.io/ingress.class: kong-kong
    meta.helm.sh/release-name: vault-kong
    meta.helm.sh/release-namespace: kong
  creationTimestamp: "2020-10-06T13:29:23Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: vault-kong
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: vault
    helm.sh/chart: vault-0.6.0
  name: vault-kong
  namespace: kong
  resourceVersion: "49762551"
  selfLink: /apis/extensions/v1beta1/namespaces/kong/ingresses/vault-kong
  uid: 7ce3da3f-502d-4238-bf62-f336ffd3de25
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: vault-kong
          servicePort: 8200
        path: /
  tls:
  - hosts:
    - example.com
    secretName: letsencrypt-kong-secret
status:
  loadBalancer:
    ingress:
    - ip: 1.1.1.1

And test connection:

azureuser@k8s-master-13666935-0:~$ curl -i -u 'user:foo' https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui
HTTP/2 401
date: Tue, 06 Oct 2020 13:43:48 GMT
content-type: application/json; charset=utf-8
content-length: 48
vary: Origin
access-control-allow-origin: *
x-kong-response-latency: 1
server: kong/2.1.3.0-enterprise-edition

{"message":"Invalid authentication credentials"}
hbagdi commented 4 years ago

This all seems fine. Can you port-forward to port 8444 on the proxy container of the ingress-kong deployment and then execute the following?

curl -I -v -k https://localhost:8444/basic-auths
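Spelled out, the port-forward plus query might look like this (a sketch; the deployment name kong-kong and the kong namespace are assumptions, adjust for your release):

```shell
# Assumption: the proxy Deployment is named kong-kong in the kong namespace.
kubectl port-forward -n kong deployment/kong-kong 8444:8444 &
sleep 2                                             # give the tunnel time to open
curl -I -v -k https://localhost:8444/basic-auths    # list stored basic-auth credentials
kill %1                                             # close the port-forward
```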

@mflendrich @rainest Do you guys see anything in here that I'm not catching?

SiarheiBortnik commented 4 years ago

I did it without port-forwarding, from the k8s master node (it has IP access to the pods):

azureuser@k8s-master-13666935-0:~$ kubectl get pod -n kong -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE                                NOMINATED NODE   READINESS GATES
kong-kong-kong-5b5f6c95dc-jm6zv   2/2     Running   2          18h   10.240.2.9     k8s-agentpool-13666935-vmss00000y   <none>           <none>
kong-kong-kong-5b5f6c95dc-lmfqx   2/2     Running   2          18h   10.240.3.209   k8s-agentpool-13666935-vmss00000t   <none>           <none>
kong-kong-kong-5b5f6c95dc-wr7sh   2/2     Running   2          18h   10.240.7.91    k8s-agentpool-13666935-vmss00001b   <none>           <none>
vault-kong-0                      1/1     Running   0          17h   10.240.8.71    k8s-agentpool-13666935-vmss00001e   <none>           <none>
azureuser@k8s-master-13666935-0:~$ curl -I -v -k https://10.240.2.9:8444/basic-auths
*   Trying 10.240.2.9...
* TCP_NODELAY set
* Connected to 10.240.2.9 (10.240.2.9) port 8444 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/certs/ca-certificates.crt
  CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Unknown (8):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Client hello (1):
* TLSv1.3 (OUT), TLS Unknown, Certificate Status (22):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=San Francisco; O=Kong; OU=IT Department; CN=localhost
*  start date: Oct  6 12:57:01 2020 GMT
*  expire date: Oct  1 12:57:01 2040 GMT
*  issuer: C=US; ST=California; L=San Francisco; O=Kong; OU=IT Department; CN=localhost
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* Using Stream ID: 1 (easy handle 0x55a4a2fd6580)
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
> HEAD /basic-auths HTTP/2
> Host: 10.240.2.9:8444
> User-Agent: curl/7.58.0
> Accept: */*
>
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS Unknown, Certificate Status (22):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
* TLSv1.3 (OUT), TLS Unknown, Unknown (23):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
* TLSv1.3 (IN), TLS Unknown, Unknown (23):
< HTTP/2 200
HTTP/2 200
< server: openresty
server: openresty
< date: Wed, 07 Oct 2020 07:13:34 GMT
date: Wed, 07 Oct 2020 07:13:34 GMT
< content-type: text/html; charset=UTF-8
content-type: text/html; charset=UTF-8
< access-control-allow-origin: *
access-control-allow-origin: *
< x-kong-admin-request-id: bKtQtJDoxhYjiXN5vtNRg6kg9CF6jQDV
x-kong-admin-request-id: bKtQtJDoxhYjiXN5vtNRg6kg9CF6jQDV
< vary: Origin
vary: Origin
< x-kong-admin-latency: 0
x-kong-admin-latency: 0

<
* Connection #0 to host 10.240.2.9 left intact

I noticed that the URL you advised ends in basic-auths, but in the plugin configuration we have basic-auth; is that OK? The reply from https://10.240.2.9:8444/basic-auth is also the same.

rainest commented 4 years ago

It's hard to sort through the wall of config, but it looks like the KongConsumer has no class, and you are using a custom class. On at least one of the Ingress resources I saw kubernetes.io/ingress.class: kong-kong, which isn't the default.

You likely want to add a `kubernetes.io/ingress.class: kong-kong` annotation to your KongConsumers (which will then load their associated credentials).
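In manifest form, that annotation would look like the following (a sketch reusing the KongConsumer names from this thread; the class value must match the controller's CONTROLLER_INGRESS_CLASS):

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: user
  namespace: kong
  annotations:
    # Must match the ingress class the controller watches
    kubernetes.io/ingress.class: kong-kong
username: user
credentials:
- user-basicauth
```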

SiarheiBortnik commented 4 years ago

Adding such an annotation to a KongConsumer looks unusual, but I did it as a test and it works now!

azureuser@k8s-master-13666935-0:~$ kubectl get kongconsumers.configuration.konghq.com -n kong user -o yaml
apiVersion: configuration.konghq.com/v1
credentials:
- user-basicauth
kind: KongConsumer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"configuration.konghq.com/v1","credentials":["user-basicauth"],"kind":"KongConsumer","metadata":{"annotations":{},"name":"user","namespace":"kong"},"username":"user"}
    kubernetes.io/ingress.class: kong-kong
  creationTimestamp: "2020-10-06T13:37:24Z"
  generation: 2
  name: user
  namespace: kong
  resourceVersion: "52513666"
  selfLink: /apis/configuration.konghq.com/v1/namespaces/kong/kongconsumers/user
  uid: 420f5c62-bfdd-468b-944d-4b54faf54666
username: user

And HTTP request

azureuser@k8s-master-13666935-0:~$ curl -i -u 'user:foo' https://kong-gsc-ingress.northeurope.cloudapp.azure.com/ui
HTTP/2 307
content-type: text/html; charset=utf-8
content-length: 40
cache-control: no-store
location: /ui/
date: Fri, 09 Oct 2020 12:50:18 GMT
vary: Origin
access-control-allow-origin: *
x-kong-upstream-latency: 16
x-kong-proxy-latency: 1
via: kong/2.1.3.0-enterprise-edition

<a href="/ui/">Temporary Redirect</a>.
JingJCV commented 1 year ago

If you are using Controller 0.10, please make sure to add the annotation kubernetes.io/ingress.class: kong to your KongConsumer resource.

This solved my problem. Thanks