Kong / kong-plugin-acme

Let's Encrypt and ACMEv2 integration with Kong - this plugin has been moved into https://github.com/Kong/kong, please open issues and PRs in that repo
Apache License 2.0

Problems with certificate persistence #29

Closed alexandruhog closed 4 years ago

alexandruhog commented 4 years ago

Hello!

I am trying to couple Kong with Vault for persistent certificate storage. I managed to spin up the Vault and Kong services and obtained a certificate; so far, so good. But then I needed to restart the Kong service and my certificate was lost. Could someone tell me why this happened?

The stack file:

version: "3.8"

services:
  vault:
    image: vault 
    volumes:
      - vault-config-nfs:/vault/config
      - vault-file-nfs:/vault/file
  reverse-proxy:
    image: kong:latest
    volumes:
      - load-balancer-kong-nfs:/usr/local/kong/declarative
    ports:
      - 80:8000
      - 443:8443
      - 8001:8001
      - 8444:8444
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: /usr/local/kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_LUA_SSL_TRUSTED_CERTIFICATE: /etc/ssl/cert.pem
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  load-balancer-kong-nfs:
    driver: nfs
    driver_opts:
      share: nfs-ip/load-balancer/kong-config
  vault-config-nfs:
    driver: nfs
    driver_opts:
      share: nfs-ip/vault/config
  vault-file-nfs:
    driver: nfs
    driver_opts:
      share: nfs-ip/vault/file

The kong.yml file:

_format_version: "1.1"
services:
  - name: dummy-lb
    url: http://dummy_api-gateway:8000
    routes:
      - name: dummy-api-gateway
        hosts:
          - dns1.dummy.com
          - dns2.dummy.com
        preserve_host: true
        paths:
          - /
plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      domains:
        - dns1.dummy.com
        - dns2.dummy.com
      tos_accepted: true
      storage_config:
        vault:
          host: vault
          port: 8200
          kv_path: acme
          token: nil  
          timeout: 2000

The Vault config file:

{
    "backend": {
        "file": {
            "path": "/vault/file"
        }
    },
    "default_lease_ttl": "336h",
    "max_lease_ttl": "8760h",
    "disable_mlock": true
}

I got my cert using the command from the example (curl https://dns1.dummy.com -k), but after I restarted my Kong service, the certificate was gone. I thought I had to use Vault precisely because it persists the certificate.

Please help me! Thank you in advance!

fffonion commented 4 years ago

@alexandruhog could you check the vault kv path acme and see if there are keys under that path?
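
For anyone reproducing this, a sketch of that check (the container name and token are placeholders; the kv_path acme comes from the plugin config above):

# open a shell in the vault container and list what the plugin has written
docker exec -it <vault-container> sh
export VAULT_ADDR=http://127.0.0.1:8200
vault login <token>
vault kv list acme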

alexandruhog commented 4 years ago

Unfortunately I stopped that service too, because I hit the 5-certificates-per-week rate limit and needed to revert to my old config with a static cert. But as far as I can remember from the logs, yes, there was a key created with the "acme" prefix. And I do believe that's right, because the certificate also worked, until I restarted the kong service.

fffonion commented 4 years ago

@alexandruhog you could test using the staging environment API, which has higher rate limits, and switch to the prod API once everything is working. I'd suggest poking the vault kv API when kong has started and is unable to find the key, and also setting KONG_LOG_LEVEL to debug. It's unclear where to start debugging right now.
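
Concretely, both knobs appear later in this thread: the staging directory URL goes into the plugin config, and the log level into the container environment:

# kong.yml plugin config
api_uri: https://acme-staging-v02.api.letsencrypt.org/directory

# docker-compose environment
KONG_LOG_LEVEL: debug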

alexandruhog commented 4 years ago

Ok, I will do this and return with fresh news, thanks

alexandruhog commented 4 years ago

Hello again @fffonion. I swapped Vault for Redis, enabled the staging API for letsencrypt, and tried to generate a new certificate. This time, another error arose:

2020/06/16 09:17:21 [error] 23#0: *5180 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/resty/acme/client.lua:117: attempt to index field 'directory' (a nil value)
stack traceback:
coroutine 0:
/usr/local/share/lua/5.1/resty/acme/client.lua: in function 'init'
/usr/local/share/lua/5.1/kong/plugins/acme/client.lua:82: in function 'order'
/usr/local/share/lua/5.1/kong/plugins/acme/client.lua:214: in function 'update_certificate'
/usr/local/share/lua/5.1/kong/plugins/acme/handler.lua:102: in function </usr/local/share/lua/5.1/kong/plugins/acme/handler.lua:101>, context: ngx.timer, client: 10.0.0.2, server: 0.0.0.0:8443

stack file:

version: "3.8"
services:
  redis:
    image: redis 
    volumes:
      - redis-file-nfs:/data
    command: ["redis-server", "--appendonly", "yes"]
  reverse-proxy:
    image: kong:latest
    volumes:
      - load-balancer-kong-nfs:/usr/local/kong/declarative
    ports:
      - 80:8000
      - 443:8443
      - 8001:8001
      - 8444:8444
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: /usr/local/kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_LUA_SSL_TRUSTED_CERTIFICATE: /etc/ssl/cert.pem
    deploy:
      placement:
        constraints: [node.role == manager]

volumes:
  load-balancer-kong-nfs:
    driver: nfs
    driver_opts:
      share: nfs-ip/load-balancer/kong-config
  redis-file-nfs:
    driver: nfs
    driver_opts:
      share: nfs-ip/load-balancer/redis

kong.yml file:

_format_version: "1.1"
services:
  - name: dummy-lb
    url: http://dummy_api-gateway:8000
    routes:
      - name: dummy-api-gateway
        hosts:
          - dns1.dummy.com
          - dns2.dummy.com
        preserve_host: true
        paths:
          - /
plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      api_uri: https://acme-staging-v02.api.letsencrypt.org/directory
      domains:
        - dns1.dummy.com
        - dns2.dummy.com
      tos_accepted: true
      storage_config:
        redis:
          host: redis
          port: 6379
          database: 0
          auth: nil

Any ideas why?

UPDATE: I tested again, on my local machine, using the PRODUCTION letsencrypt API URL, and it worked. I believe the problem comes from this URI: https://acme-staging-v02.api.letsencrypt.org/directory. If so, what's the correct URL?

UPDATE 2: On local dev, I observed that every time I run the command echo q |openssl s_client -connect localhost -port 8443 -servername $NGROK_HOST 2>/dev/null |openssl x509 -text -noout, a new certificate is created, until the rate limit is reached. Why does Kong ACME create new certificates instead of reusing the ones it already created? I should mention that my storage option is redis. It is indeed persistent between container restarts, but I can't understand why this renewal keeps happening.
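
One way to tell re-issuance apart from reuse is to compare serial numbers and validity dates between runs; a sketch reusing the same probe:

echo q | openssl s_client -connect localhost:8443 -servername $NGROK_HOST 2>/dev/null | openssl x509 -noout -serial -dates
# a changing serial= line between runs means a fresh certificate was issued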

alexandruhog commented 4 years ago

@fffonion any updates?

fffonion commented 4 years ago

there are multiple problems; let's figure them out one at a time:

2020/06/16 09:17:21 [error] 23#0: *5180 lua entry thread aborted: runtime error: /usr/local/share/lua/5.1/resty/acme/client.lua:117: attempt to index field 'directory' (a nil value)
stack traceback:

for this issue, did you notice any earlier error logs, especially when the worker starts?

alexandruhog commented 4 years ago

@fffonion nope, none at all. But right now my biggest concern is the second update in my answer: when pairing with stateful redis, why does the plugin renew the certs (until the rate limit is hit) with every request?

fffonion commented 4 years ago

@alexandruhog No problem, let me run some tests around that.

alexandruhog commented 4 years ago

@fffonion Sure mate, you can maybe use my local testing files:

version: "3.8"

services:
  redis:
    image: redis
    volumes:
      - ./redis-data:/data
    command: ["redis-server", "--appendonly", "yes"]
  reverse-proxy:
    image: kong:latest
    volumes:
      - './kong:/usr/local/kong/declarative'
    ports:
      - 8000:8000
      - 8443:8443
      - 8001:8001
      - 8444:8444
    environment:
      KONG_DATABASE: 'off'
      KONG_DECLARATIVE_CONFIG: /usr/local/kong/declarative/kong.yml
      KONG_PROXY_ACCESS_LOG: /dev/stdout
      KONG_ADMIN_ACCESS_LOG: /dev/stdout
      KONG_PROXY_ERROR_LOG: /dev/stderr
      KONG_ADMIN_ERROR_LOG: /dev/stderr
      KONG_ADMIN_LISTEN: 0.0.0.0:8001, 0.0.0.0:8444 ssl
      KONG_LUA_SSL_TRUSTED_CERTIFICATE: /etc/ssl/cert.pem
      KONG__DEBUG_LEVEL: 'info'
    deploy:
      placement:
        constraints: [node.role == manager]

The kong.yml file:

_format_version: "1.1"

services:
  - name: mock-service
    url: http://mockbin.org
    routes:
      - name: mock-route
        hosts:
          - 639256d167d0.ngrok.io
        paths:
          - /

plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      domains:
        - 639256d167d0.ngrok.io
      tos_accepted: true
      storage_config:
        redis:
          host: redis
          port: 6379
          database: 0
          auth: 'null'

alexandruhog commented 4 years ago

@fffonion Any updates? I would really love to start using this plugin in production ^_^

fffonion commented 4 years ago

@alexandruhog Hi, sorry I forgot to post here. I tried the redis storage and it does seem to work on my side. Could you check the following for me:

alexandruhog commented 4 years ago

Hello dear @fffonion. The only thing that is output is this:

reverse-proxy_1  | 172.19.0.1 - - [19/Jun/2020:13:36:52 +0000] "q" 400 12 "-" "-"
reverse-proxy_1  | 172.19.0.1 - - [19/Jun/2020:13:36:56 +0000] "GET /.well-known/acme-challenge/b4wLHDSK5eDPXjZ1Xf9XGm6jnUCIkOz8L9OEKXxEKt4 HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 172.19.0.1 - - [19/Jun/2020:13:36:56 +0000] "GET /.well-known/acme-challenge/b4wLHDSK5eDPXjZ1Xf9XGm6jnUCIkOz8L9OEKXxEKt4 HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 172.19.0.1 - - [19/Jun/2020:13:36:56 +0000] "GET /.well-known/acme-challenge/b4wLHDSK5eDPXjZ1Xf9XGm6jnUCIkOz8L9OEKXxEKt4 HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 172.19.0.1 - - [19/Jun/2020:13:36:56 +0000] "GET /.well-known/acme-challenge/b4wLHDSK5eDPXjZ1Xf9XGm6jnUCIkOz8L9OEKXxEKt4 HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"

fffonion commented 4 years ago

looks like those are all access logs (written to stdout); maybe look at the error logs (sent to stderr)?

alexandruhog commented 4 years ago

@fffonion docker-compose outputs both STDOUT and STDERR. This is everything I've got. If you want, we can talk over Google Hangouts/Skype/Slack/whatever you want, to be more productive

fffonion commented 4 years ago

@alexandruhog Let's do it here so other people can read it in the future as well. Let's do this: add this to your environment:

KONG_LOG_LEVEL=debug

alexandruhog commented 4 years ago

yeah, it says this:

reverse-proxy_1  | 2020/06/19 13:46:28 [debug] 26#0: *1331 [lua] client.lua:436: order_certificate(): order is completed: https://acme-v02.api.letsencrypt.org/acme/order/89243971/3835920738

fffonion commented 4 years ago

okay, this is good. Let's move to the next step: when you observe that kong creates a new cert every time you do openssl s_client, does this line get produced every time? And let's also check the keys that are in redis.

alexandruhog commented 4 years ago

If I do not restart the container, no. But if I do a container restart, yes, it says this again:

reverse-proxy_1  | 2020/06/19 13:51:11 [debug] 22#0: *147 [lua] client.lua:436: order_certificate(): order is completed: https://acme-v02.api.letsencrypt.org/acme/order/89244370/3835966760

alexandruhog commented 4 years ago

And now the rate limit hit again :) I restarted the Kong container (without restarting the Redis one).

fffonion commented 4 years ago

I'm not testing in a container, but I did try restarting kong and didn't reproduce it. It shouldn't matter whether it's in docker or not, since it's stateless on the kong side. I'd still suspect the persistence got lost with the redis container. When you restart the kong container, could you connect to the redis container with redis-cli and run KEYS * to list all the keys inside?
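
With the compose files above, that check is a one-liner (the service name redis comes from the stack file):

docker-compose exec redis redis-cli KEYS '*'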

alexandruhog commented 4 years ago

Indeed, KEYS * gave me an empty array

fffonion commented 4 years ago

okay, so if you did see a valid cert being returned at one point, then it somehow got lost. I'd suggest running KEYS * before the kong restart and verifying whether keys are being created. If they are, maybe we should tweak the redis config. Since you are using appendonly mode, I'd check whether the appendonly.aof file is empty or in the wrong place.
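
A sketch of those checks against the compose setup (the /data path matches the volume mount above):

# is AOF actually enabled, and which directory does redis write to?
docker-compose exec redis redis-cli CONFIG GET appendonly
docker-compose exec redis redis-cli CONFIG GET dir
# then inspect the file itself
docker-compose exec redis ls -l /data/appendonly.aof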

alexandruhog commented 4 years ago

the appendonly.aof file is, indeed, empty every time

fffonion commented 4 years ago

we should probably check the redis container logs then

alexandruhog commented 4 years ago

oddly enough, there are no logs output by the redis container.

alexandruhog commented 4 years ago

I listed the databases and the keyspaces: 16 databases created, but no keyspace. I believe the connection between kong and redis is broken, but I can't figure out exactly what the culprit is. Could it be my config settings?

plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      domains:
        - 7a314d2cb9d6.ngrok.io
      tos_accepted: true
      storage_config:
        redis:
          host: redis
          port: 6379
          database: 0
          auth: 'null'

fffonion commented 4 years ago

the plugin does need to read from redis to get the certificate, so if you did see a valid cert being returned, then the connection between kong and redis should be fine.

alexandruhog commented 4 years ago

Yeah, every time I run that openssl s_client command, a valid cert is returned. However, there are zero keyspaces and the appendonly.aof file is empty. What could be the cause?

alexandruhog commented 4 years ago

redis's persistence also works. I manually set a key through redis-cli; it was written to the file and it persists through container removal and restart. So, again, I believe the culprit is the way Kong interacts with redis
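
For reference, that manual test looks roughly like this (key and value are arbitrary):

docker-compose exec redis redis-cli SET probe hello
docker-compose restart redis
docker-compose exec redis redis-cli GET probe   # returns "hello" if persistence works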

alexandruhog commented 4 years ago

Are you sure you don't want to talk to me directly over another medium? I can summarize afterwards what actions we took to fix this, so people will still find out. I still believe it's the more efficient way to communicate, though

fffonion commented 4 years ago

probably not at this time, still got some distractions IRL 😄

alexandruhog commented 4 years ago

Ok then, please tell me if I can provide you with more input, or is this enough for now?

alexandruhog commented 4 years ago

@fffonion hello mate, any updates?

fffonion commented 4 years ago

My suggestion for further debugging would be:

alexandruhog commented 4 years ago

Hello @fffonion. Well, the Redis server works, because I managed to get inside it and issue store, load, and delete commands. However, no terminal output is shown when the two containers (kong and redis) are brought up. It seems that the kong container does not communicate with the Redis container. I did manage, however, to ping the redis container from within the kong container, so there is no problem with discovery or internal networking.

Could you please set up a test environment where a Kong container manages to talk to a Redis container?

alexandruhog commented 4 years ago

@fffonion this is the full log:

❯ docker-compose up --build
WARNING: Some services (reverse-proxy) use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration - use `docker stack deploy` to deploy to a swarm.
WARNING: The Docker Engine you're using is running in swarm mode.

Compose does not use swarm mode to deploy services to multiple nodes in a swarm. All containers will be scheduled on the current node.

To deploy your application across the swarm, use `docker stack deploy`.

Creating network "local_default" with the default driver
Creating local_reverse-proxy_1 ... done
Creating local_redis_1         ... done
Attaching to local_redis_1, local_reverse-proxy_1
redis_1          | 1:C 25 Jun 2020 13:35:43.747 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1          | 1:C 25 Jun 2020 13:35:43.747 # Redis version=6.0.5, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1          | 1:C 25 Jun 2020 13:35:43.747 # Configuration loaded
redis_1          | 1:M 25 Jun 2020 13:35:43.753 * Running mode=standalone, port=6379.
redis_1          | 1:M 25 Jun 2020 13:35:43.753 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1          | 1:M 25 Jun 2020 13:35:43.753 # Server initialized
redis_1          | 1:M 25 Jun 2020 13:35:43.753 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1          | 1:M 25 Jun 2020 13:35:43.755 * Ready to accept connections
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] globalpatches.lua:10: installing the globalpatches
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] globalpatches.lua:269: randomseed(): random seed: 198891328201 for worker nb 0
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:449: init(): [dns-client] (re)configuring dns client
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:454: init(): [dns-client] staleTtl = 4
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:457: init(): [dns-client] validTtl = nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:461: init(): [dns-client] noSynchronisation = false
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:480: init(): [dns-client] query order = LAST, SRV, A, CNAME
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:520: init(): [dns-client] adding A-record from 'hosts' file: 7f0b9deef8b6 = 172.28.0.3
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [ff00::0]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:520: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [ff02::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [ff02::2]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:579: init(): [dns-client] nameserver 127.0.0.11
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:584: init(): [dns-client] attempts = 5
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:593: init(): [dns-client] timeout = 2000 ms
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:597: init(): [dns-client] ndots = 0
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:599: init(): [dns-client] search =
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:605: init(): [dns-client] badTtl = 1 s
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:607: init(): [dns-client] emptyTtl = 30 s
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:125: check_db_against_config(): Discovering used plugins
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:449: init(): [dns-client] (re)configuring dns client
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:454: init(): [dns-client] staleTtl = 4
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:457: init(): [dns-client] validTtl = nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:461: init(): [dns-client] noSynchronisation = false
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:480: init(): [dns-client] query order = LAST, SRV, A, CNAME
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:520: init(): [dns-client] adding A-record from 'hosts' file: 7f0b9deef8b6 = 172.28.0.3
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [ff00::0]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:520: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [ff02::1]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:535: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [ff02::2]
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:579: init(): [dns-client] nameserver 127.0.0.11
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:584: init(): [dns-client] attempts = 5
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:593: init(): [dns-client] timeout = 2000 ms
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:597: init(): [dns-client] ndots = 0
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:599: init(): [dns-client] search =
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:605: init(): [dns-client] badTtl = 1 s
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] client.lua:607: init(): [dns-client] emptyTtl = 30 s
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: correlation-id
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: pre-function
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: cors
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: ldap-auth
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: loggly
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: hmac-auth
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'hmac-auth.hmacauth_credentials'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: zipkin
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: request-size-limiting
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: azure-functions
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: request-transformer
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: oauth2
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_credentials'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_authorization_codes'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'oauth2.oauth2_tokens'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: response-transformer
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: ip-restriction
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: statsd
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: jwt
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'jwt.jwt_secrets'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: proxy-cache
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: basic-auth
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'basic-auth.basicauth_credentials'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: key-auth
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'key-auth.keyauth_credentials'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: http-log
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: datadog
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: tcp-log
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: rate-limiting
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: post-function
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: prometheus
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: acl
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'acl.acls'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: syslog
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: file-log
reverse-proxy_1  | 2020/06/25 13:35:44 [info] 1#0: [lua] openssl.lua:5: using ffi, OpenSSL version linked: 1010106f
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: acme
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'acme.acme_storage'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: udp-log
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: response-ratelimiting
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [kong] iam-ecs-credentials.lua:31 No ECS environment variables found for IAM
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: aws-lambda
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: session
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:209: loader_fn(): Loading custom plugin entity: 'session.sessions'
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: bot-detection
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 1#0: [lua] plugins.lua:247: load_plugin(): Loading plugin: request-termination
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: using the "epoll" event method
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: openresty/1.15.8.3
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: built by gcc 9.2.0 (Alpine 9.2.0)
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: OS: Linux 4.19.76-linuxkit
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: start worker processes
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: start worker process 22
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: start worker process 23
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: start worker process 24
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 1#0: start worker process 25
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] globalpatches.lua:269: randomseed(): random seed: 866410624421 for worker nb 0
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *2 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *2 [lua] globalpatches.lua:269: randomseed(): random seed: 158132238224 for worker nb 3
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=22, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *2 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=25, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *3 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *3 [lua] globalpatches.lua:269: randomseed(): random seed: 217158872331 for worker nb 1
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *4 [lua] globalpatches.lua:243: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *4 [lua] globalpatches.lua:269: randomseed(): random seed: 439320121114 for worker nb 2
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *3 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=23, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *4 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 22#0: *1 [lua] cache.lua:321: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=25, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=23, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_core_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 22#0: *1 [lua] cache.lua:321: purge(): [DB cache] purging (local) cache, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 24#0: *4 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.001 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 25#0: *2 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.001 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 23#0: *3 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.001 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 24#0: *4 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.002 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 25#0: *2 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.002 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [notice] 22#0: *1 [kong] init.lua:284 declarative config loaded from /usr/local/kong/declarative/kong.yml, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 23#0: *3 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.002 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 24#0: *4 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.004 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 25#0: *2 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.004 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 23#0: *3 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.004 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] counter.lua:50: new(): start timer for shdict kong on worker 0
reverse-proxy_1  | 2020/06/25 13:35:44 [info] 22#0: *1 [kong] handler.lua:53 [acme] acme renew timer started on worker 0, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [warn] 24#0: *4 [lua] globalpatches.lua:52: sleep(): executing a blocking 'sleep' (0.008 seconds), context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *2 [lua] counter.lua:50: new(): start timer for shdict kong on worker 3
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *1 [lua] counter.lua:50: new(): start timer for shdict prometheus_metrics on worker 0
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *3 [lua] counter.lua:50: new(): start timer for shdict kong on worker 1
reverse-proxy_1  | 2020/06/25 13:35:44 [info] 25#0: *2 [kong] handler.lua:53 [acme] acme renew timer started on worker 3, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *2 [lua] counter.lua:50: new(): start timer for shdict prometheus_metrics on worker 3
reverse-proxy_1  | 2020/06/25 13:35:44 [info] 23#0: *3 [kong] handler.lua:53 [acme] acme renew timer started on worker 1, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *3 [lua] counter.lua:50: new(): start timer for shdict prometheus_metrics on worker 1
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *4 [lua] counter.lua:50: new(): start timer for shdict kong on worker 2
reverse-proxy_1  | 2020/06/25 13:35:44 [info] 24#0: *4 [kong] handler.lua:53 [acme] acme renew timer started on worker 2, context: init_worker_by_lua*
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *4 [lua] counter.lua:50: new(): start timer for shdict prometheus_metrics on worker 2
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 22#0: *5 [lua] balancer.lua:776: init(): initialized 0 balancer(s), 0 error(s)
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 25#0: *6 [lua] balancer.lua:776: init(): initialized 0 balancer(s), 0 error(s)
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 23#0: *7 [lua] balancer.lua:776: init(): initialized 0 balancer(s), 0 error(s)
reverse-proxy_1  | 2020/06/25 13:35:44 [debug] 24#0: *8 [lua] balancer.lua:776: init(): initialized 0 balancer(s), 0 error(s)
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 24#0: *9 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_core_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 24#0: *9 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 23#0: *11 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 23#0: *11 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_core_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 23#0: *11 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 25#0: *13 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=23, data=nil
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 25#0: *13 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=resty-worker-events, event=started, pid=24, data=nil
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 25#0: *13 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_core_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:35:45 [debug] 25#0: *13 [lua] events.lua:211: do_event_json(): worker-events: handling event; source=mlcache, event=mlcache:purge:kong_db_cache, pid=22, data=
reverse-proxy_1  | 2020/06/25 13:38:18 [debug] 23#0: *2458 [lua] certificate.lua:29: log(): [ssl] no SNI provided by client, serving default SSL certificate
reverse-proxy_1  | 2020/06/25 13:38:18 [debug] 23#0: *2459 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:18 [debug] 23#0: *2459 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:18 [debug] 23#0: *2457 [lua] init.lua:822: balancer(): setting address (try 1): 3.12.133.32:80
reverse-proxy_1  | 172.28.0.1 - - [25/Jun/2020:13:38:19 +0000] "GET / HTTP/2.0" 200 10695 "-" "curl/7.58.0"
reverse-proxy_1  | 2020/06/25 13:38:19 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/new-acct","jwk":{"n":"q05RDbEYjHdHdjQluCrJBwrh8DiQiFAFVolimF-0q1fOUwicKvur33ghvye3kUXKtyxGfhsPNYTDMlNYlF9sJZp2c9a22VIEDvkthnuQl24QGem7TFeHCEiUB6vK6zIYBpxL59XtG-_Z0wujCruDlYgfb1N2GpeKYmWSWfplZ8FtuE9yaWiFHn9EgH8KeO4BBojQU52aACcKkZk5Ag0IKYRkgErMIpXm7NLs3w8d6A8wogLbZaE9kWucjgXkz4ME6KOcuJPepg_RHDk_ZmaqlZmklYgg5ZtPx3G7zx-Q9nkoWNHoZfZeBKTXkm3j6eKW2-LTjwJ_sf5dfVMpwgkfprFtap07xG9DJXiiyrMiwclwCyA3FNu206py3AGRlRnyCnO3zfRFmjIIJJWekCIogan5i0Xz3VVQqnlqUS2UTP26j9ZLU1ZXLcnIo8rI4dIpYoFu4aGrVqpjQzaebD-Px9oJS9nZdQ6Apd8uJO-EvINNa9uGVJWktMnX9eqFRYMEggZHIBPhft9sGjnj97qE7MEaeafT-Y4Ma_64ssKKCMWO7X0bI7vWu5GUKXXA8Yye6biQfHbHiy5JSwGRBGpIBcF7Ev47bV_U7P8jobYNXvo3tp1L8Rjcd3QZxIy23MUqrUi1Yr1lK-OLYi2v512mLKnrmsr0sPjPxmmPLzw53xU","kty":"RSA","e":"AQAB"},"alg":"RS256","nonce":"0102nNq2xMMPQcnPo8-hXFmt0KnozTMKPMDSokh9xRlKhlE"},"payload":{"termsOfServiceAgreed":true,"contact":["mailto:mail@dummy.com"]}}
reverse-proxy_1  | 2020/06/25 13:38:19 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/new-acct response: {
reverse-proxy_1  |   "key": {
reverse-proxy_1  |     "kty": "RSA",
reverse-proxy_1  |     "n": "q05RDbEYjHdHdjQluCrJBwrh8DiQiFAFVolimF-0q1fOUwicKvur33ghvye3kUXKtyxGfhsPNYTDMlNYlF9sJZp2c9a22VIEDvkthnuQl24QGem7TFeHCEiUB6vK6zIYBpxL59XtG-_Z0wujCruDlYgfb1N2GpeKYmWSWfplZ8FtuE9yaWiFHn9EgH8KeO4BBojQU52aACcKkZk5Ag0IKYRkgErMIpXm7NLs3w8d6A8wogLbZaE9kWucjgXkz4ME6KOcuJPepg_RHDk_ZmaqlZmklYgg5ZtPx3G7zx-Q9nkoWNHoZfZeBKTXkm3j6eKW2-LTjwJ_sf5dfVMpwgkfprFtap07xG9DJXiiyrMiwclwCyA3FNu206py3AGRlRnyCnO3zfRFmjIIJJWekCIogan5i0Xz3VVQqnlqUS2UTP26j9ZLU1ZXLcnIo8rI4dIpYoFu4aGrVqpjQzaebD-Px9oJS9nZdQ6Apd8uJO-EvINNa9uGVJWktMnX9eqFRYMEggZHIBPhft9sGjnj97qE7MEaeafT-Y4Ma_64ssKKCMWO7X0bI7vWu5GUKXXA8Yye6biQfHbHiy5JSwGRBGpIBcF7Ev47bV_U7P8jobYNXvo3tp1L8Rjcd3QZxIy23MUqrUi1Yr1lK-OLYi2v512mLKnrmsr0sPjPxmmPLzw53xU",
reverse-proxy_1  |     "e": "AQAB"
reverse-proxy_1  |   },
reverse-proxy_1  |   "contact": [
reverse-proxy_1  |     "mailto:mail@dummy.com"
reverse-proxy_1  |   ],
reverse-proxy_1  |   "initialIp": "82.137.16.16",
reverse-proxy_1  |   "createdAt": "2020-06-25T13:38:20.707725949Z",
reverse-proxy_1  |   "status": "valid"
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:20 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/new-order","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"0102YBsCJCfL38CTawgA43FYfgJl8O59-yiHUGOyVS32v_0"},"payload":{"identifiers":[{"value":"13aafdb933cb.ngrok.io","type":"dns"}]}}
reverse-proxy_1  | 2020/06/25 13:38:20 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/new-order response: {
reverse-proxy_1  |   "status": "pending",
reverse-proxy_1  |   "expires": "2020-07-02T13:38:21.631549666Z",
reverse-proxy_1  |   "identifiers": [
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "dns",
reverse-proxy_1  |       "value": "13aafdb933cb.ngrok.io"
reverse-proxy_1  |     }
reverse-proxy_1  |   ],
reverse-proxy_1  |   "authorizations": [
reverse-proxy_1  |     "https://acme-v02.api.letsencrypt.org/acme/authz-v3/5470452047"
reverse-proxy_1  |   ],
reverse-proxy_1  |   "finalize": "https://acme-v02.api.letsencrypt.org/acme/finalize/89732664/3920796694"
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:20 [debug] 23#0: *2459 [lua] client.lua:333: order_certificate(): new order: {"identifiers":[{"value":"13aafdb933cb.ngrok.io","type":"dns"}],"expires":"2020-07-02T13:38:21.631549666Z","finalize":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/finalize\/89732664\/3920796694","status":"pending","authorizations":["https:\/\/acme-v02.api.letsencrypt.org\/acme\/authz-v3\/5470452047"]}
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/authz-v3\/5470452047","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"0101jOxDPxgU35UfTDivx1SZpyX9eUvCdXAy3U8EhCSyzYk"}}
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/authz-v3/5470452047 response: {
reverse-proxy_1  |   "identifier": {
reverse-proxy_1  |     "type": "dns",
reverse-proxy_1  |     "value": "13aafdb933cb.ngrok.io"
reverse-proxy_1  |   },
reverse-proxy_1  |   "status": "pending",
reverse-proxy_1  |   "expires": "2020-07-02T13:38:21Z",
reverse-proxy_1  |   "challenges": [
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "http-01",
reverse-proxy_1  |       "status": "pending",
reverse-proxy_1  |       "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/5470452047/pi5qCg",
reverse-proxy_1  |       "token": "fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM"
reverse-proxy_1  |     },
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "dns-01",
reverse-proxy_1  |       "status": "pending",
reverse-proxy_1  |       "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/5470452047/-sicGg",
reverse-proxy_1  |       "token": "fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM"
reverse-proxy_1  |     },
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "tls-alpn-01",
reverse-proxy_1  |       "status": "pending",
reverse-proxy_1  |       "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/5470452047/sBxk5Q",
reverse-proxy_1  |       "token": "fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM"
reverse-proxy_1  |     }
reverse-proxy_1  |   ]
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 23#0: *2459 [lua] client.lua:365: order_certificate(): register challenge http-01: fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/chall-v3\/5470452047\/pi5qCg","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"0102q48UBYksWLsBoQN6WaA6rTw_vYryk9b9Pho_hDQraek"},"payload":{}}
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/chall-v3/5470452047/pi5qCg response: {
reverse-proxy_1  |   "type": "http-01",
reverse-proxy_1  |   "status": "pending",
reverse-proxy_1  |   "url": "https://acme-v02.api.letsencrypt.org/acme/chall-v3/5470452047/pi5qCg",
reverse-proxy_1  |   "token": "fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM"
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 24#0: *2518 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 24#0: *2518 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 24#0: *2518 [lua] http-01.lua:44: serve_challenge(): token is fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM
reverse-proxy_1  | 172.28.0.1 - - [25/Jun/2020:13:38:21 +0000] "GET /.well-known/acme-challenge/fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 25#0: *2519 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 25#0: *2519 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:21 [debug] 25#0: *2519 [lua] http-01.lua:44: serve_challenge(): token is fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM
reverse-proxy_1  | 172.28.0.1 - - [25/Jun/2020:13:38:21 +0000] "GET /.well-known/acme-challenge/fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2532 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2532 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2532 [lua] http-01.lua:44: serve_challenge(): token is fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM
reverse-proxy_1  | 172.28.0.1 - - [25/Jun/2020:13:38:22 +0000] "GET /.well-known/acme-challenge/fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2533 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2533 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 22#0: *2533 [lua] http-01.lua:44: serve_challenge(): token is fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM
reverse-proxy_1  | 172.28.0.1 - - [25/Jun/2020:13:38:22 +0000] "GET /.well-known/acme-challenge/fAZfHm5rzuSf6WD8yaWzrZazkSP8iKDO0nWK44OxfVM HTTP/1.1" 200 99 "-" "Mozilla/5.0 (compatible; Let's Encrypt validation server; +https://www.letsencrypt.org)"
reverse-proxy_1  | 2020/06/25 13:38:22 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/order\/89732664\/3920796694","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"0102Cvkz4_peJcCvUB19x46MOWs80f8AfwtitRXcJgpeeng"}}
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/order/89732664/3920796694 response: {
reverse-proxy_1  |   "status": "ready",
reverse-proxy_1  |   "expires": "2020-07-02T13:38:21Z",
reverse-proxy_1  |   "identifiers": [
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "dns",
reverse-proxy_1  |       "value": "13aafdb933cb.ngrok.io"
reverse-proxy_1  |     }
reverse-proxy_1  |   ],
reverse-proxy_1  |   "authorizations": [
reverse-proxy_1  |     "https://acme-v02.api.letsencrypt.org/acme/authz-v3/5470452047"
reverse-proxy_1  |   ],
reverse-proxy_1  |   "finalize": "https://acme-v02.api.letsencrypt.org/acme/finalize/89732664/3920796694"
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] client.lua:387: order_certificate(): check challenge: {"identifiers":[{"value":"13aafdb933cb.ngrok.io","type":"dns"}],"expires":"2020-07-02T13:38:21Z","finalize":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/finalize\/89732664\/3920796694","status":"ready","authorizations":["https:\/\/acme-v02.api.letsencrypt.org\/acme\/authz-v3\/5470452047"]}
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] pkey.lua:157: load_pkey(): load key using fmt: *, type: *
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] pkey.lua:177: load_pkey(): loaded pkey using PEM_read_bio_PrivateKey
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/finalize\/89732664\/3920796694","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"01017x31x-5pIHfVm4UiGk7W5kJCHAEjEBq715WJ6cysXSs"},"payload":{"csr":"MIIEZTCCAk0CAQAwIDEeMBwGA1UEAwwVMTNhYWZkYjkzM2NiLm5ncm9rLmlvMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAyRW_0AKQL0L4-KqSesUOyZTamJ5xfJ552ty7ir5KWszk42AE5r66AQVZCHeHUCm3EWQPfuWGRvrWyOToc6vLjwTjBWmtCUAbxjwbgjDgkeVrqNzD5fdc9tOPhkJkdmqmyi0fSDFswC8ZE5y0cHLonRsredKFKFpuwN45gEtP0rucwm_8GcF_6rxjHDXphXVFiUr6EIvXTC5DkWQcBRfPjZUFTDyxSj6AKD6crURL1sv9t8Tu6BnTqpL7iJXgdCr8GJshuC5nGf8NwddEFjXjSEYUDUuMsUgFS8zECjw6JeCEk6IMWDqBl3miiQam6bcr8fkOixs5cd7oi-_zTOwd-vvY1sKH_5w8juVYqi504bGvtB_sV1CjzcYYcndddBzQDrEB8F69vxYRqvOYTM6S3_DO-Qs7yLFs8qqplZyXuAHFHKycJjL0l_azYDcFScLXuSxxBoxFCMGuDk2zJ1Y4wBCEpsSQIslXjd-JWagXfgw-5kN_NaOb8joiuTeRYhiKzcnbOtFA6y6tNiKY_7OnxB9CrhnzQQlf0eVf5GpGVrvyOLV5CrLaKeHKP8NUtV6WuWXwf826dfvS3oWSysFO-qsVKnIu8lB4cC9_7pt6wcnQlYxI_WkI9Inx89XuotTCHN0ER9EwuitiOUL9RTfi1znQVOCwx49GQKd9-ZKudxMCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4ICAQApeLNMLugMXlQma5f6LhxR6ceYC6pzWidQp6BvFNqWF4saUPfy8B39f4C0udA2Zetgm2xeie47jGUdeQJSQm-o1NCV_vX8WvE-zXsHOjDk0ojG_OvgDOJ15TwiH3RlnQcQ_7kDJzzJiejs8qITnaeth2eojWKpm20o-MtKMIAYuv2R7ROubWyxekXfC7W_q1hJ6Rjy-cVkVcLUXPlisbHbz5lmmY3hUzhtgLQCjNd8W5seeNVLphcoIt_kTSBBPJ9QCNJyhNtxhRvp9702jluk46zW3ANDDji5c-PyvIBzEZRAavAlPckB-uIdTQkTYnAjaOaXP5T07JY7oORiHR-VYcuem_Iq_HaEJNypdr3n853ePFbPfix2MPBXFHYpQX_-AMv-Bkf_gKVs3WcD_mRKOlgNPkHtHXtxY8H-SVVOvxFrTctyhKqu-xlIbJwFQhUS0IrSxjj8yfePcu2cG3_dAdw49LfFrUSG_Xcf3mItgVpMb-Hflc0j_rNbREkc7uXn-XJ4qWyiMV7SfHYSs4nbOIqLV5FybvlXKmPVXsn5V4XD9Y_MHKQMIQlEMR4QWJo50XID56asZxAWR495J4mVdliYzitJXVdTga017wcRQJilVczokkvfwuqN8HDsX3gsUKeLT6RLVRKsw16tzIsP85TIacjN9Rx4FlDUlchx4g"}}
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/finalize/89732664/3920796694 response: {
reverse-proxy_1  |   "status": "valid",
reverse-proxy_1  |   "expires": "2020-07-02T13:38:21Z",
reverse-proxy_1  |   "identifiers": [
reverse-proxy_1  |     {
reverse-proxy_1  |       "type": "dns",
reverse-proxy_1  |       "value": "13aafdb933cb.ngrok.io"
reverse-proxy_1  |     }
reverse-proxy_1  |   ],
reverse-proxy_1  |   "authorizations": [
reverse-proxy_1  |     "https://acme-v02.api.letsencrypt.org/acme/authz-v3/5470452047"
reverse-proxy_1  |   ],
reverse-proxy_1  |   "finalize": "https://acme-v02.api.letsencrypt.org/acme/finalize/89732664/3920796694",
reverse-proxy_1  |   "certificate": "https://acme-v02.api.letsencrypt.org/acme/cert/04f9e2c0c235a81dae24565bbc3b342aa44b"
reverse-proxy_1  | }
reverse-proxy_1  | 2020/06/25 13:38:23 [debug] 23#0: *2459 [lua] client.lua:169: jws(): jws payload: {"protected":{"url":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/cert\/04f9e2c0c235a81dae24565bbc3b342aa44b","kid":"https:\/\/acme-v02.api.letsencrypt.org\/acme\/acct\/89732664","alg":"RS256","nonce":"0101e3UUxOo4H89FjJJ-Ph8l5khVHmENFM7uWx_eGdk84Fk"}}
reverse-proxy_1  | 2020/06/25 13:38:24 [debug] 23#0: *2459 [lua] client.lua:215: post(): acme request: https://acme-v02.api.letsencrypt.org/acme/cert/04f9e2c0c235a81dae24565bbc3b342aa44b response: -----BEGIN CERTIFICATE-----
reverse-proxy_1  | -----END CERTIFICATE-----
reverse-proxy_1  |
reverse-proxy_1  | -----BEGIN CERTIFICATE-----
reverse-proxy_1  | -----END CERTIFICA
reverse-proxy_1  | 2020/06/25 13:38:24 [debug] 23#0: *2459 [lua] client.lua:436: order_certificate(): order is completed: https://acme-v02.api.letsencrypt.org/acme/order/89732664/3920796694

alexandruhog commented 4 years ago

@fffonion and lastly, I ran the MONITOR command, but in vain: nothing showed up when generating a new certificate. Again, as far as I can tell, the kong plugin does not manage to communicate properly with the redis database. Either there are errors that are not shown (which would be strange, given that everything is logged), or there is a corner case that was overlooked.
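
For anyone following along, that probe is roughly the following, left running while a certificate is requested; every command the plugin sends to redis would be echoed there:

docker-compose exec redis redis-cli MONITOR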

fffonion commented 4 years ago

ah, now I see why... I didn't notice it either: you also need to specify the storage type in the config, otherwise it defaults to shm:

plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      domains:
        - 7a314d2cb9d6.ngrok.io
      tos_accepted: true
      storage: redis
      storage_config:
        redis:
          host: redis
          port: 6379
          database: 0
          auth: 'null'
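
With storage: redis set, the earlier check should finally show entries once a certificate has been issued:

docker-compose exec redis redis-cli KEYS '*'   # expect non-empty output after issuance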

alexandruhog commented 4 years ago

@fffonion u got to be kidding me lol =))

alexandruhog commented 4 years ago

@fffonion ok, this time around I got this:

reverse-proxy_1  | 2020/06/25 15:30:02 [warn] 23#0: *474 [kong] handler.lua:95 [acme] can't load cert and key from storage: failed to get from node cache: authentication failed ERR AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?, context: ssl_certificate_by_lua*, client: 172.30.0.1, server: 0.0.0.0:8443

with this config file:

services:
  - name: mock-service
    url: http://mockbin.org
    routes:
      - name: mock-route
        hosts:
          - e9c92b926ab3.ngrok.io
        paths:
          - /

plugins:
  - name: acme
    config:
      account_email: mail@dummy.com
      domains:
        - e9c92b926ab3.ngrok.io
      tos_accepted: true
      storage: redis
      storage_config:
        redis:
          host: redis
          port: 6379
          database: 0
          auth: "null"

fffonion commented 4 years ago

I guess you will need to make auth null rather than the string "null", or just remove it:

storage_config:
  redis:
    host: redis
    port: 6379
    database: 0

alexandruhog commented 4 years ago

Finally, I can confirm it works :) Thank you!