Closed: pfcurtis closed this issue 1 year ago
This is strange; I do not have such an issue on Kubernetes.
What are you trying to do?
I can see that the default container user has id 1000, but /usr/local/kong is owned by user kong (uid 100, gid 65533).
I have set the default user in the Dockerfile to kong and made a new release. Can you please test?
I will test later today.
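For reference, a minimal way to check this outside the cluster, assuming Docker is available locally (the image name and tag below are placeholders for whichever build you actually deploy):

docker run --rm --entrypoint sh revomatico/docker-kong-oidc:2.1.0-1 \
  -c 'id; ls -ldn /usr/local/kong'
# Prints the effective uid/gid the container runs as, then the numeric owner
# and group of the Kong prefix directory; Kong can only prepare the prefix
# if that effective uid can write to it.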
I am receiving the same errors. Here is more information on the test environment: minikube 1.8.2, Kubernetes v1.16.3 (to match current prod cluster version).
I am using the "kong-ingress-dbless" YAML for deployment
kubectl logs -n kong ingress-kong-fd8c555fd-6ns95 -c proxy
Error: could not prepare Kong prefix at /usr/local/kong: Permission denied
Run with --v (verbose) or --vv (debug) for more details
kubectl describe pod/ingress-kong-fd8c555fd-6ns95 -n kong
Name: ingress-kong-fd8c555fd-6ns95
Namespace: kong
Priority: 0
Node: m01/192.168.64.3
Start Time: Tue, 28 Jul 2020 09:48:15 -0400
Labels: app=ingress-kong
pod-template-hash=fd8c555fd
Annotations: kuma.io/gateway: enabled
prometheus.io/port: 8100
prometheus.io/scrape: true
traffic.sidecar.istio.io/includeInboundPorts:
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/ingress-kong-fd8c555fd
Containers:
proxy:
Container ID: docker://30af9c3725f7c18f3caac203d9f2ee0cb7e467826e7557bb8f7536776841321d
Image: registry.terrapin.com/docker-kong-oidc:2.1.0-1
Image ID: docker-pullable://registry.terrapin.com/docker-kong-oidc@sha256:d58c211776ab2066963009385ed51fe2e38d0c4246e47ed93cce2e6362e49c85
Ports: 8000/TCP, 8443/TCP, 8100/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 28 Jul 2020 09:54:03 -0400
Finished: Tue, 28 Jul 2020 09:54:03 -0400
Ready: False
Restart Count: 6
Liveness: http-get http://:8100/status delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:8100/status delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
KONG_PROXY_LISTEN: 0.0.0.0:8000, 0.0.0.0:8443 ssl http2
KONG_ADMIN_LISTEN: 127.0.0.1:8444 ssl
KONG_STATUS_LISTEN: 0.0.0.0:8100
KONG_DATABASE: off
KONG_NGINX_WORKER_PROCESSES: 1
KONG_ADMIN_ACCESS_LOG: /dev/stdout
KONG_ADMIN_ERROR_LOG: /dev/stderr
KONG_PROXY_ERROR_LOG: /dev/stderr
KONG_X_SESSION_STORAGE: shm
KONG_PLUGINS: bundled,oidc
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-whfxt (ro)
ingress-controller:
Container ID: docker://c45da39b1f0a02413b6e6e4168eb7109e57c5ffffdc730f2ccad8244cb84c483
Image: kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1
Image ID: docker-pullable://kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller@sha256:4651b737a07303dc81a377a8d9679e160d8ba152042a67ccb9c89f305a3d0895
Port: 8080/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Tue, 28 Jul 2020 09:54:03 -0400
Finished: Tue, 28 Jul 2020 09:54:03 -0400
Ready: False
Restart Count: 6
Liveness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:10254/healthz delay=5s timeout=1s period=10s #success=1 #failure=3
Environment:
CONTROLLER_KONG_ADMIN_URL: https://127.0.0.1:8444
CONTROLLER_KONG_ADMIN_TLS_SKIP_VERIFY: true
CONTROLLER_PUBLISH_SERVICE: kong/kong-proxy
POD_NAME: ingress-kong-fd8c555fd-6ns95 (v1:metadata.name)
POD_NAMESPACE: kong (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kong-serviceaccount-token-whfxt (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kong-serviceaccount-token-whfxt:
Type: Secret (a volume populated by a Secret)
SecretName: kong-serviceaccount-token-whfxt
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kong/ingress-kong-fd8c555fd-6ns95 to m01
Normal Started 7m39s (x2 over 7m40s) kubelet, m01 Started container ingress-controller
Warning BackOff 7m32s (x4 over 7m39s) kubelet, m01 Back-off restarting failed container
Normal Pulled 7m18s (x3 over 7m41s) kubelet, m01 Container image "registry.terrapin.com/docker-kong-oidc:2.1.0-1" already present on machine
Normal Started 7m18s (x3 over 7m41s) kubelet, m01 Started container proxy
Normal Pulled 7m18s (x3 over 7m41s) kubelet, m01 Container image "kong-docker-kubernetes-ingress-controller.bintray.io/kong-ingress-controller:0.9.1" already present on machine
Normal Created 7m18s (x3 over 7m41s) kubelet, m01 Created container ingress-controller
Normal Created 7m18s (x3 over 7m41s) kubelet, m01 Created container proxy
Warning BackOff 2m30s (x37 over 7m39s) kubelet, m01 Back-off restarting failed container
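Given the "Permission denied" error above, one thing worth ruling out (a diagnostic sketch, not something taken from the stock manifests) is a pod- or container-level securityContext that overrides the image's default user:

kubectl get pod ingress-kong-fd8c555fd-6ns95 -n kong \
  -o jsonpath='{.spec.securityContext}{"\n"}{range .spec.containers[*]}{.name}{"\t"}{.securityContext}{"\n"}{end}'
# An explicit runAsUser/runAsGroup/fsGroup here would take precedence over the
# USER set in the Dockerfile and can reintroduce the uid mismatch on
# /usr/local/kong.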
I cannot reproduce this... for me it works just fine.
Can you please provide the YAMLs you are loading?
I see the image you are using is not from Docker Hub but from another registry (registry.terrapin.com/docker-kong-oidc:2.1.0-1). Are you in fact using a forked build?
My Kong, in production Kubernetes, starts like this:
2020/08/05 09:09:57 [info] 1#0: [lua] openssl.lua:5: using ffi, OpenSSL version linked: 1010107f
2020/08/05 09:09:57 [info] 1#0: [lua] pkey.lua:221: load_key(): jwk decode failed: error decoding JSON from JWK: Expected value but found invalid number at character 1
2020/08/05 09:09:57 [info] 1#0: [lua] pkey.lua:221: load_key(): jwk decode failed: error decoding JSON from JWK: Expected value but found invalid number at character 1
2020/08/05 09:09:57 [info] 1#0: [lua] pkey.lua:221: load_key(): jwk decode failed: error decoding JSON from JWK: Expected value but found invalid number at character 1
2020/08/05 09:09:57 [info] 1#0: [lua] pkey.lua:221: load_key(): jwk decode failed: error decoding JSON from JWK: Expected value but found invalid number at character 1
2020/08/05 09:09:57 [notice] 1#0: using the "epoll" event method
2020/08/05 09:09:57 [notice] 1#0: openresty/1.15.8.3
2020/08/05 09:09:57 [notice] 1#0: built by gcc 9.3.0 (Alpine 9.3.0)
2020/08/05 09:09:57 [notice] 1#0: OS: Linux 5.7.12-050712-generic
2020/08/05 09:09:57 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2020/08/05 09:09:57 [notice] 1#0: start worker processes
2020/08/05 09:09:57 [notice] 1#0: start worker process 22
2020/08/05 09:09:57 [notice] 22#0: *1 [lua] cache.lua:333: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2020/08/05 09:09:57 [notice] 22#0: *1 [lua] cache.lua:333: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
2020/08/05 09:09:57 [notice] 22#0: *1 [kong] init.lua:303 declarative config loaded from /kong_dbless/kong.yml, context: init_worker_by_lua*
2020/08/05 09:09:57 [info] 22#0: *1 [kong] handler.lua:53 [acme] acme renew timer started on worker 0, context: init_worker_by_lua*
But I do not use it as an ingress controller; for the time being I am keeping that disabled, so this feature is untested.
If you are using it as an ingress controller, then the dbless config file is not used anymore.
I'm getting the same error and I'm using dbless config. I'm also using it as an ingress. Does that mean dbless config is no longer supported in your Docker image?
I am using Kong exclusively with dbless config, in Kubernetes. I do not get such an error, and I am having a hard time figuring this out.
If you enable the ingress controller, the dbless declarative file is ignored.
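To make that distinction concrete (KONG_DATABASE and KONG_DECLARATIVE_CONFIG are standard Kong settings; the file path is the one from the startup log above):

# DB-less with a declarative file: Kong loads the file itself at startup
export KONG_DATABASE=off
export KONG_DECLARATIVE_CONFIG=/kong_dbless/kong.yml

# DB-less behind the ingress controller: leave KONG_DECLARATIVE_CONFIG unset;
# the controller pushes configuration through the admin API
# (127.0.0.1:8444 in the pod description above) instead of reading a file
export KONG_DATABASE=off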
Perhaps this is related to https://github.com/revomatico/docker-kong-oidc/pull/14?
With v2.0.5-3, the directory "/usr/local/kong" gives a permission denied error when the container is used in a Kubernetes cluster. Checking (and changing) the permissions of that directory resolved the problem with containers not starting in Kubernetes.
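If anyone needs a stopgap before picking up a release with the corrected ownership, a thin derived image is one way to apply the same change (image names and tags below are placeholders; the kong user is the one discussed above):

docker build -t docker-kong-oidc:2.0.5-3-fixed - <<'EOF'
FROM revomatico/docker-kong-oidc:2.0.5-3
# Switch to root only long enough to hand the Kong prefix to the runtime user
USER root
RUN chown -R kong /usr/local/kong
USER kong
EOF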