Ha, funny enough, I had just prepared a note to run our migration tests against a 0.13.1 -> 0.14.0rc1 upgrade... Thanks for reporting!
Could you try using a different Cassandra load balancing policy maybe, at least just for the migration step? Worth a try.
Will do, hammering away tonight to see how far I can get with this :). It could very well be that I am populating extra fields that lua-cassandra may not play nicely with, considering I only include one C* contact point yet name a policy that is based on the full multi-DC cluster. I was glancing at your code trying to see if the problem was something in this section:
local function next_peer(state, i)
  i = i + 1
  if state.local_tried < #state.local_peers then
    state.local_tried = state.local_tried + 1
    state.local_idx = state.local_idx + 1
    return i, state.local_peers[(state.local_idx % #state.local_peers) + 1]
  elseif state.remote_tried < #state.remote_peers then
    state.remote_tried = state.remote_tried + 1
    state.remote_idx = state.remote_idx + 1
    return i, state.remote_peers[(state.remote_idx % #state.remote_peers) + 1]
  end
end

function _M:iter()
  self.local_tried = 0
  self.remote_tried = 0
  self.local_idx = (self.start_local_idx % #self.local_peers) + 1
  self.remote_idx = (self.start_remote_idx % #self.remote_peers) + 1
  self.start_remote_idx = self.start_remote_idx + 1
  self.start_local_idx = self.start_local_idx + 1
  return next_peer, self, 0
end
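If I'm reading it right, iter() above relies on init() having already split the cluster's peers into local_peers/remote_peers by data center, something roughly like this (my sketch of the idea, not the actual lua-cassandra source):

-- rough sketch of what I assume init() has to do before iter() can work;
-- not the actual lua-cassandra source
function _M:init(peers)
  local local_peers, remote_peers = {}, {}
  for i = 1, #peers do
    -- peers whose data_center matches the configured local DC are local,
    -- everything else is remote
    if peers[i].data_center == self.local_dc then
      local_peers[#local_peers + 1] = peers[i]
    else
      remote_peers[#remote_peers + 1] = peers[i]
    end
  end
  self.local_peers = local_peers
  self.remote_peers = remote_peers
  self.start_local_idx = 0
  self.start_remote_idx = 0
end

That local/remote split is the part I was suspicious of, since I only pass one contact point but name a policy that assumes the whole multi-DC cluster.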
But on a more positive note, it does seem to have done some things, based on the logs:
2018/06/20 06:02:32 [info] migrating core for keyspace kong_dev
2018/06/20 06:02:32 [info] core migrated up to: 2018-03-22-141700_create_new_ssl_tables
2018/06/20 06:02:32 [info] core migrated up to: 2018-03-26-234600_copy_records_to_new_ssl_tables
2018/06/20 06:02:33 [info] core migrated up to: 2018-03-27-002500_drop_old_ssl_tables
2018/06/20 06:02:33 [info] core migrated up to: 2018-03-16-160000_index_consumers
2018/06/20 06:02:33 [info] core migrated up to: 2018-05-17-173100_hash_on_cookie
2018/06/20 06:02:33 [info] migrating jwt for keyspace kong_dev
So maybe it at least did the things I need to get the schema onto the 0.14.0rc1 grind.
Edit - well, actually it seems to have also stopped on the jwt changes (maybe the added expires piece?). Will try your suggestion though.
Went with a basic RoundRobin attempt here now, still #rekt heh. Will drop some logs though:
2018/06/20 06:34:46 [verbose] Kong: 0.14.0rc1
2018/06/20 06:34:46 [debug] ngx_lua: 10013
2018/06/20 06:34:46 [debug] nginx: 1013006
2018/06/20 06:34:46 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/20 06:34:46 [verbose] no config file found at /etc/kong/kong.conf
2018/06/20 06:34:46 [verbose] no config file found at /etc/kong.conf
2018/06/20 06:34:46 [verbose] no config file, skipping loading
2018/06/20 06:34:46 [debug] reading environment variables
2018/06/20 06:34:46 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/20 06:34:46 [debug] KONG_PG_USER ENV found with ""
2018/06/20 06:34:46 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/20 06:34:46 [debug] KONG_PG_HOST ENV found with ""
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/20 06:34:46 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/20 06:34:46 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with ""
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "LOCAL_QUORUM"
2018/06/20 06:34:46 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/20 06:34:46 [debug] KONG_PG_SSL ENV found with "off"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "SimpleStrategy"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "60000"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "RoundRobin"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "10000"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with ""
2018/06/20 06:34:46 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/20 06:34:46 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/20 06:34:46 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/20 06:34:46 [debug] KONG_PG_PORT ENV found with ""
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/20 06:34:46 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev"
2018/06/20 06:34:46 [debug] admin_access_log = "logs/admin_access.log"
2018/06/20 06:34:46 [debug] admin_error_log = "logs/error.log"
2018/06/20 06:34:46 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/20 06:34:46 [debug] anonymous_reports = true
2018/06/20 06:34:46 [debug] cassandra_consistency = "LOCAL_QUORUM"
2018/06/20 06:34:46 [debug] cassandra_contact_points = {"server00471"}
2018/06/20 06:34:46 [debug] cassandra_data_centers = {}
2018/06/20 06:34:46 [debug] cassandra_keyspace = "kong_dev"
2018/06/20 06:34:46 [debug] cassandra_lb_policy = "RoundRobin"
2018/06/20 06:34:46 [debug] cassandra_password = "******"
2018/06/20 06:34:46 [debug] cassandra_port = 9042
2018/06/20 06:34:46 [debug] cassandra_repl_factor = 2
2018/06/20 06:34:46 [debug] cassandra_repl_strategy = "SimpleStrategy"
2018/06/20 06:34:46 [debug] cassandra_schema_consensus_timeout = 60000
2018/06/20 06:34:46 [debug] cassandra_ssl = true
2018/06/20 06:34:46 [debug] cassandra_ssl_verify = true
2018/06/20 06:34:46 [debug] cassandra_timeout = 10000
2018/06/20 06:34:46 [debug] cassandra_username = "*****"
2018/06/20 06:34:46 [debug] client_body_buffer_size = "8k"
2018/06/20 06:34:46 [debug] client_max_body_size = "0"
2018/06/20 06:34:46 [debug] client_ssl = false
2018/06/20 06:34:46 [debug] custom_plugins = {}
2018/06/20 06:34:46 [debug] database = "cassandra"
2018/06/20 06:34:46 [debug] db_cache_ttl = 0
2018/06/20 06:34:46 [debug] db_update_frequency = 5
2018/06/20 06:34:46 [debug] db_update_propagation = 0
2018/06/20 06:34:46 [debug] dns_error_ttl = 1
2018/06/20 06:34:46 [debug] dns_hostsfile = "/etc/hosts"
2018/06/20 06:34:46 [debug] dns_no_sync = false
2018/06/20 06:34:46 [debug] dns_not_found_ttl = 30
2018/06/20 06:34:46 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/20 06:34:46 [debug] dns_resolver = {}
2018/06/20 06:34:46 [debug] dns_stale_ttl = 4
2018/06/20 06:34:46 [debug] error_default_type = "text/plain"
2018/06/20 06:34:46 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/20 06:34:46 [debug] log_level = "notice"
2018/06/20 06:34:46 [debug] lua_package_cpath = ""
2018/06/20 06:34:46 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/20 06:34:46 [debug] lua_socket_pool_size = 30
2018/06/20 06:34:46 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/20 06:34:46 [debug] lua_ssl_verify_depth = 3
2018/06/20 06:34:46 [debug] mem_cache_size = "128m"
2018/06/20 06:34:46 [debug] nginx_admin_directives = {}
2018/06/20 06:34:46 [debug] nginx_daemon = "off"
2018/06/20 06:34:46 [debug] nginx_http_directives = {}
2018/06/20 06:34:46 [debug] nginx_optimizations = true
2018/06/20 06:34:46 [debug] nginx_proxy_directives = {}
2018/06/20 06:34:46 [debug] nginx_user = "nobody nobody"
2018/06/20 06:34:46 [debug] nginx_worker_processes = "auto"
2018/06/20 06:34:46 [debug] pg_ssl = false
2018/06/20 06:34:46 [debug] pg_ssl_verify = false
2018/06/20 06:34:46 [debug] plugins = {"bundled"}
2018/06/20 06:34:46 [debug] prefix = "/usr/local/kong/"
2018/06/20 06:34:46 [debug] proxy_access_log = "logs/access.log"
2018/06/20 06:34:46 [debug] proxy_error_log = "logs/error.log"
2018/06/20 06:34:46 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/20 06:34:46 [debug] real_ip_header = "X-Real-IP"
2018/06/20 06:34:46 [debug] real_ip_recursive = "off"
2018/06/20 06:34:46 [debug] ssl_cipher_suite = "modern"
2018/06/20 06:34:46 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/20 06:34:46 [debug] trusted_ips = {}
2018/06/20 06:34:46 [debug] upstream_keepalive = 60
2018/06/20 06:34:46 [verbose] prefix in use: /usr/local/kong
2018/06/20 06:34:46 [verbose] running datastore migrations
2018/06/20 06:34:46 [info] migrating jwt for keyspace kong_dev
Error:
/usr/local/share/lua/5.1/resty/cassandra/policies/lb/rr.lua:38: attempt to get length of field 'peers' (a nil value)
stack traceback:
/usr/local/share/lua/5.1/resty/cassandra/policies/lb/rr.lua:38: in function 'iter'
/usr/local/share/lua/5.1/resty/cassandra/cluster.lua:422: in function 'next_coordinator'
...share/lua/5.1/kong/db/strategies/cassandra/connector.lua:180: in function 'query'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:83: in function 'is_partitioned'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:122: in function 'build_queries'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:197: in function 'get_query'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:587: in function 'select'
/usr/local/share/lua/5.1/kong/db/dao/init.lua:236: in function 'select'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:409: in function 'check_foreign_key_in_new_db'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:438: in function 'check_foreign_constraints'
...
/usr/local/share/lua/5.1/kong/dao/factory.lua:417: in function 'migrate'
/usr/local/share/lua/5.1/kong/dao/factory.lua:535: in function 'run_migrations'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:35: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:87>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:44>
/usr/local/bin/kong:7: in function 'file_gen'
init_worker_by_lua:48: in function <init_worker_by_lua:46>
[C]: in function 'xpcall'
init_worker_by_lua:55: in function <init_worker_by_lua:53>
It switched to just the rr policy codebase at least, going by the lua file in the error. Interestingly enough, our Dev Kong nodes hitting this cluster on 0.13.1 still seem functional (for now).
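For what it's worth, the failure is easy to picture as a standalone snippet (this is my guess at the rough shape of rr.lua, not the real source): iter() dereferences a peers field that only init() fills in, so if iter() runs on a policy instance that was never initialized with the cluster's peers you get exactly this error:

-- standalone sketch, assuming rr.lua looks like the dc-aware code above
-- but with a single 'peers' list
local rr = { start_idx = 0 }

function rr:init(peers)
  self.peers = peers               -- the cluster is supposed to call this first
end

local function next_peer(state, i)
  i = i + 1
  if i > #state.peers then
    return nil
  end
  state.idx = state.idx + 1
  return i, state.peers[(state.idx % #state.peers) + 1]
end

function rr:iter()
  -- if init() never ran, self.peers is nil and this line raises
  -- "attempt to get length of field 'peers' (a nil value)"
  self.idx = self.start_idx % #self.peers
  return next_peer, self, 0
end

rr:iter()  -- blows up, because rr:init() was never called

So my read (could be wrong) is that the migration path hitting connector.lua:180 is querying through a cluster/policy instance that never got fed its peers, rather than a host that cannot be reached.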
Just to update: once the migrations are in place and working for upgrades from 0.13.x to the release candidate, I'll update my "docker-kong" to the latest checksum, give the migrations another go, and start validating all the newer admin API functionality that has not worked in prior versions.
Hi @jeremyjpj0916,
I have researched this a bit. To me it feels more like a connectivity or compatibility issue with the Cassandra connection, or perhaps something to do with:
cassandra_ssl = true
cassandra_ssl_verify = true
Can you connect to that Cassandra cluster from the machine that you have spun up?
@bungle I am running a 6 node cluster, 3 in each DC. I will do some tests of starting Kong 0.13.1 pointed at just 1 of the 3 nodes in the DC I am attempting migrations from, to validate that every host can indeed connect and run Kong 0.13.1, and I will post debug logs here showing connectivity to each individual Cassandra host mentioned. Another way I think I can prove it is not connectivity is to run the migrations Job I have created against a brand new keyspace and see if that works on the first try (which ultimately won't work for us because we have proxies to carry over, but it at least proves the 0.14.x migrations can stand up the proper schema from scratch).
Edit - Another reason I do not believe connectivity is the problem: following the logs from my first run against the same host, you can see where it had some partial successes.
First, evidence of being able to launch Kong 0.13.1 against the C* host in question (server00471) that the migration was run from. For this test I isolated my Kong start to point to just that host and set the log level to debug for the nginx startup:
2018/06/22 15:40:34 [debug] 14#0: [lua] globalpatches.lua:9: installing the globalpatches
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:431: init(): [dns-client] (re)configuring dns client
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:435: init(): [dns-client] staleTtl = 4
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:438: init(): [dns-client] noSynchronisation = false
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:457: init(): [dns-client] query order = LAST, SRV, A, CNAME
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: kong-308-46jz2 = 10.129.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:545: init(): [dns-client] nameserver 10.106.188.195
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:550: init(): [dns-client] attempts = 5
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:559: init(): [dns-client] timeout = 2000 ms
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:563: init(): [dns-client] ndots = 5
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:565: init(): [dns-client] search = kong-gateway-dev.svc.cluster.local, svc.cluster.local, cluster.local, company.com
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:571: init(): [dns-client] badTtl = 30 s
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:573: init(): [dns-client] emptyTtl = 1 s
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:431: init(): [dns-client] (re)configuring dns client
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:435: init(): [dns-client] staleTtl = 4
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:438: init(): [dns-client] noSynchronisation = false
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:457: init(): [dns-client] query order = LAST, SRV, A, CNAME
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: kong-308-46jz2 = 10.129.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-mcastprefix = [fe00::0]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localnet = [fe00::0]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:489: init(): [dns-client] adding A-record from 'hosts' file: localhost = 127.0.0.1
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: localhost = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-localhost = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-loopback = [::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allnodes = [fe00::1]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:504: init(): [dns-client] adding AAAA-record from 'hosts' file: ip6-allrouters = [fe00::2]
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:545: init(): [dns-client] nameserver 10.106.188.195
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:550: init(): [dns-client] attempts = 5
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:559: init(): [dns-client] timeout = 2000 ms
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:563: init(): [dns-client] ndots = 5
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:565: init(): [dns-client] search = kong-gateway-dev.svc.cluster.local, svc.cluster.local, cluster.local, company.com
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:571: init(): [dns-client] badTtl = 30 s
2018/06/22 15:40:34 [debug] 14#0: [lua] client.lua:573: init(): [dns-client] emptyTtl = 1 s
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:81: load_plugins(): Discovering used plugins
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM plugins on host 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-spec-expose
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: correlation-id
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-splunk-log
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: cors
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-oidc-multi-idp
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: loggly
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: hmac-auth
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-upstream-jwt
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: bot-detection
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: request-transformer
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: oauth2
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: response-transformer
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: ip-restriction
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: statsd
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: jwt
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: basic-auth
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: key-auth
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: http-log
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: datadog
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: tcp-log
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-oidc-auth
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: kong-path-based-routing
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: rate-limiting
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: acl
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: runscope
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: syslog
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: response-ratelimiting
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: udp-log
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: file-log
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: aws-lambda
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: request-size-limiting
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: ldap-auth
2018/06/22 15:40:34 [debug] 14#0: [lua] init.lua:113: load_plugins(): Loading plugin: request-termination
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at 10.87.**.***
2018/06/22 15:40:34 [debug] 14#0: [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM apis on host 10.87.**.***
2018/06/22 15:40:34 [notice] 14#0: using the "epoll" event method
2018/06/22 15:40:34 [notice] 14#0: openresty/1.13.6.1
2018/06/22 15:40:34 [notice] 14#0: built by gcc 6.3.0 (Alpine 6.3.0)
2018/06/22 15:40:34 [notice] 14#0: OS: Linux 3.10.0-693.el7.x86_64
2018/06/22 15:40:34 [notice] 14#0: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2018/06/22 15:40:34 [notice] 14#0: start worker processes
2018/06/22 15:40:34 [notice] 14#0: start worker process 27
2018/06/22 15:40:34 [notice] 14#0: start worker process 28
2018/06/22 15:40:34 [notice] 14#0: start worker process 29
2018/06/22 15:40:34 [debug] 27#0: *1 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 27#0: *1 [lua] globalpatches.lua:249: randomseed(): random seed: 230591651615 for worker nb 0
2018/06/22 15:40:34 [notice] 14#0: start worker process 30
2018/06/22 15:40:34 [debug] 28#0: *2 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 28#0: *2 [lua] globalpatches.lua:249: randomseed(): random seed: 372371117197 for worker nb 1
2018/06/22 15:40:34 [notice] 14#0: start worker process 31
2018/06/22 15:40:34 [debug] 29#0: *3 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 29#0: *3 [lua] globalpatches.lua:249: randomseed(): random seed: 822271152362 for worker nb 2
2018/06/22 15:40:34 [notice] 14#0: start worker process 32
2018/06/22 15:40:34 [debug] 30#0: *4 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 30#0: *4 [lua] globalpatches.lua:249: randomseed(): random seed: 379722921376 for worker nb 3
2018/06/22 15:40:34 [notice] 14#0: start worker process 33
2018/06/22 15:40:34 [debug] 31#0: *5 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 31#0: *5 [lua] globalpatches.lua:249: randomseed(): random seed: 226230215822 for worker nb 4
2018/06/22 15:40:34 [debug] 32#0: *6 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 32#0: *6 [lua] globalpatches.lua:249: randomseed(): random seed: 102123761715 for worker nb 5
2018/06/22 15:40:34 [notice] 14#0: start worker process 34
2018/06/22 15:40:34 [debug] 33#0: *7 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 33#0: *7 [lua] globalpatches.lua:249: randomseed(): random seed: 100238116718 for worker nb 6
2018/06/22 15:40:34 [debug] 34#0: *8 [lua] globalpatches.lua:223: randomseed(): seeding PRNG from OpenSSL RAND_bytes()
2018/06/22 15:40:34 [debug] 34#0: *8 [lua] globalpatches.lua:249: randomseed(): random seed: 891949216513 for worker nb 7
2018/06/22 15:40:34 [debug] 27#0: *1 [lua] base_plugin.lua:12: init_worker(): executing plugin "bot-detection": init_worker
... And a million other init_worker plugin initializations.... etc.
2018/06/22 15:40:40 [debug] 28#0: *12 [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:40 [debug] 28#0: *12 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM cluster_events
WHERE channel IN ?
AND at > ?
AND at <= ?
on host server00471
2018/06/22 15:40:40 [debug] 27#0: *9 [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00471
2018/06/22 15:40:40 [debug] 27#0: *9 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM upstreams on host server00471
2018/06/22 15:40:40 [debug] 27#0: *9 [lua] balancer.lua:700: init(): initialized 0 balancer(s), 0 error(s)
The pod is still running just fine in nginx debug mode against this node and able to take proxy traffic. For my next test I will see what running Kong migrations against this node on a fresh, non-existent keyspace yields.
Edit - as promised, here are the other 2 hosts isolated individually, each with Kong connecting to them just fine as well.
Other host 2:
2018/06/22 15:53:05 [debug] 28#0: *12 [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00470
2018/06/22 15:53:05 [debug] 28#0: *12 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM cluster_events
WHERE channel IN ?
AND at > ?
AND at <= ?
on host server00470
2018/06/22 15:53:05 [debug] 29#0: *9 [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00470
2018/06/22 15:53:05 [debug] 29#0: *9 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM upstreams on host server00470
2018/06/22 15:53:05 [debug] 29#0: *9 [lua] balancer.lua:700: init(): initialized 0 balancer(s), 0 error(s)
2018/06/22 15:53:05 [debug] 28#0: *10 [lua] balancer.lua:700: init(): initialized 0 balancer(s), 0 error(s)
2018/06/22 15:53:05 [debug] 30#0: *22 [lua] balancer.lua:700: init(): initialized 0 balancer(s), 0 error(s)
Other host 3:
2018/06/22 16:00:30 [debug] 27#0: *11 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM cluster_events
WHERE channel IN ?
AND at > ?
AND at <= ?
on host 10.87.**.***
2018/06/22 16:00:30 [debug] 27#0: *9 [lua] cluster.lua:428: next_coordinator(): [lua-cassandra] load balancing policy chose host at server00467
2018/06/22 16:00:30 [debug] 27#0: *9 [lua] cluster.lua:615: prepare(): [lua-cassandra] preparing SELECT * FROM upstreams on host server00467
2018/06/22 16:00:30 [debug] 27#0: *9 [lua] balancer.lua:700: init(): initialized 0 balancer(s), 0 error(s)
So all 3 local DC hosts are working fine. I think this is verifiable proof of connectivity.
So, in your honor @bungle, I decided to name a new keyspace after you. My next test: can 0.14.0rc1 properly handle migrations into a non-existent keyspace on this same host? Looks like the release candidate does in fact stand all of that up correctly. So it still seems to me the problem lies in migrating a 0.13.x keyspace with actual data. Maybe I can export the C* keyspace from our dev env (it's not much data, but I need to review it for private company references and such), and then you would have an exact replica to play with.
2018/06/22 16:28:20 [verbose] Kong: 0.14.0rc1
2018/06/22 16:28:20 [debug] ngx_lua: 10013
2018/06/22 16:28:20 [debug] nginx: 1013006
2018/06/22 16:28:20 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/22 16:28:20 [verbose] no config file found at /etc/kong/kong.conf
2018/06/22 16:28:20 [verbose] no config file found at /etc/kong.conf
2018/06/22 16:28:20 [verbose] no config file, skipping loading
2018/06/22 16:28:20 [debug] reading environment variables
2018/06/22 16:28:20 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/22 16:28:20 [debug] KONG_PG_USER ENV found with ""
2018/06/22 16:28:20 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/22 16:28:20 [debug] KONG_PG_HOST ENV found with ""
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/22 16:28:20 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/22 16:28:20 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC1"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "ONE"
2018/06/22 16:28:20 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/22 16:28:20 [debug] KONG_PG_SSL ENV found with "off"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "SimpleStrategy"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "120000"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "RoundRobin"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "10000"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC1:2,DC2:2"
2018/06/22 16:28:20 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/22 16:28:20 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/22 16:28:20 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/22 16:28:20 [debug] KONG_PG_PORT ENV found with ""
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/22 16:28:20 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "bungle_keyspace"
2018/06/22 16:28:20 [debug] admin_access_log = "logs/admin_access.log"
2018/06/22 16:28:20 [debug] admin_error_log = "logs/error.log"
2018/06/22 16:28:20 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/22 16:28:20 [debug] anonymous_reports = true
2018/06/22 16:28:20 [debug] cassandra_consistency = "ONE"
2018/06/22 16:28:20 [debug] cassandra_contact_points = {"server00471"}
2018/06/22 16:28:20 [debug] cassandra_data_centers = {"DC1:2","DC2:2"}
2018/06/22 16:28:20 [debug] cassandra_keyspace = "bungle_keyspace"
2018/06/22 16:28:20 [debug] cassandra_lb_policy = "RoundRobin"
2018/06/22 16:28:20 [debug] cassandra_local_datacenter = "DC1"
2018/06/22 16:28:20 [debug] cassandra_password = "******"
2018/06/22 16:28:20 [debug] cassandra_port = 9042
2018/06/22 16:28:20 [debug] cassandra_repl_factor = 2
2018/06/22 16:28:20 [debug] cassandra_repl_strategy = "SimpleStrategy"
2018/06/22 16:28:20 [debug] cassandra_schema_consensus_timeout = 120000
2018/06/22 16:28:20 [debug] cassandra_ssl = true
2018/06/22 16:28:20 [debug] cassandra_ssl_verify = true
2018/06/22 16:28:20 [debug] cassandra_timeout = 10000
2018/06/22 16:28:20 [debug] cassandra_username = "*****"
2018/06/22 16:28:20 [debug] client_body_buffer_size = "8k"
2018/06/22 16:28:20 [debug] client_max_body_size = "0"
2018/06/22 16:28:20 [debug] client_ssl = false
2018/06/22 16:28:20 [debug] custom_plugins = {}
2018/06/22 16:28:20 [debug] database = "cassandra"
2018/06/22 16:28:20 [debug] db_cache_ttl = 0
2018/06/22 16:28:20 [debug] db_update_frequency = 5
2018/06/22 16:28:20 [debug] db_update_propagation = 0
2018/06/22 16:28:20 [debug] dns_error_ttl = 1
2018/06/22 16:28:20 [debug] dns_hostsfile = "/etc/hosts"
2018/06/22 16:28:20 [debug] dns_no_sync = false
2018/06/22 16:28:20 [debug] dns_not_found_ttl = 30
2018/06/22 16:28:20 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/22 16:28:20 [debug] dns_resolver = {}
2018/06/22 16:28:20 [debug] dns_stale_ttl = 4
2018/06/22 16:28:20 [debug] error_default_type = "text/plain"
2018/06/22 16:28:20 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/22 16:28:20 [debug] log_level = "notice"
2018/06/22 16:28:20 [debug] lua_package_cpath = ""
2018/06/22 16:28:20 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/22 16:28:20 [debug] lua_socket_pool_size = 30
2018/06/22 16:28:20 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/22 16:28:20 [debug] lua_ssl_verify_depth = 3
2018/06/22 16:28:20 [debug] mem_cache_size = "128m"
2018/06/22 16:28:20 [debug] nginx_admin_directives = {}
2018/06/22 16:28:20 [debug] nginx_daemon = "off"
2018/06/22 16:28:20 [debug] nginx_http_directives = {}
2018/06/22 16:28:20 [debug] nginx_optimizations = true
2018/06/22 16:28:20 [debug] nginx_proxy_directives = {}
2018/06/22 16:28:20 [debug] nginx_user = "nobody nobody"
2018/06/22 16:28:20 [debug] nginx_worker_processes = "auto"
2018/06/22 16:28:20 [debug] pg_ssl = false
2018/06/22 16:28:20 [debug] pg_ssl_verify = false
2018/06/22 16:28:20 [debug] plugins = {"bundled"}
2018/06/22 16:28:20 [debug] prefix = "/usr/local/kong/"
2018/06/22 16:28:20 [debug] proxy_access_log = "logs/access.log"
2018/06/22 16:28:20 [debug] proxy_error_log = "logs/error.log"
2018/06/22 16:28:20 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/22 16:28:20 [debug] real_ip_header = "X-Real-IP"
2018/06/22 16:28:20 [debug] real_ip_recursive = "off"
2018/06/22 16:28:20 [debug] ssl_cipher_suite = "modern"
2018/06/22 16:28:20 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/22 16:28:20 [debug] trusted_ips = {}
2018/06/22 16:28:20 [debug] upstream_keepalive = 60
2018/06/22 16:28:20 [verbose] prefix in use: /usr/local/kong
2018/06/22 16:28:20 [verbose] running datastore migrations
2018/06/22 16:28:20 [info] migrating core for keyspace bungle_keyspace
2018/06/22 16:28:20 [info] could not switch to bungle_keyspace keyspace ([Invalid] Keyspace 'bungle_keyspace' does not exist), attempting to create it
2018/06/22 16:28:20 [info] successfully created bungle_keyspace keyspace
2018/06/22 16:28:21 [info] core migrated up to: 2015-01-12-175310_skeleton
2018/06/22 16:28:22 [info] core migrated up to: 2015-01-12-175310_init_schema
2018/06/22 16:28:22 [info] core migrated up to: 2015-11-23-817313_nodes
2018/06/22 16:28:24 [info] core migrated up to: 2016-02-25-160900_remove_null_consumer_id
2018/06/22 16:28:24 [info] core migrated up to: 2016-02-29-121813_remove_ttls
2018/06/22 16:28:25 [info] core migrated up to: 2016-09-05-212515_retries_step_1
2018/06/22 16:28:26 [info] core migrated up to: 2016-09-05-212515_retries_step_2
2018/06/22 16:28:27 [info] core migrated up to: 2016-09-16-141423_upstreams
2018/06/22 16:28:27 [info] core migrated up to: 2016-12-14-172100_move_ssl_certs_to_core
2018/06/22 16:28:27 [info] core migrated up to: 2016-11-11-151900_new_apis_router_1
2018/06/22 16:28:28 [info] core migrated up to: 2016-11-11-151900_new_apis_router_2
2018/06/22 16:28:29 [info] core migrated up to: 2016-11-11-151900_new_apis_router_3
2018/06/22 16:28:29 [info] core migrated up to: 2017-01-24-132600_upstream_timeouts
2018/06/22 16:28:30 [info] core migrated up to: 2017-01-24-132600_upstream_timeouts_2
2018/06/22 16:28:31 [info] core migrated up to: 2017-03-27-132300_anonymous
2018/06/22 16:28:31 [info] core migrated up to: 2017-04-04-145100_cluster_events
2018/06/22 16:28:31 [info] core migrated up to: 2017-05-19-173100_remove_nodes_table
2018/06/22 16:28:31 [info] core migrated up to: 2017-07-28-225000_balancer_orderlist_remove
2018/06/22 16:28:31 [info] core migrated up to: 2017-11-07-192000_upstream_healthchecks
2018/06/22 16:28:32 [info] core migrated up to: 2017-10-27-134100_consistent_hashing_1
2018/06/22 16:28:34 [info] core migrated up to: 2017-11-07-192100_upstream_healthchecks_2
2018/06/22 16:28:34 [info] core migrated up to: 2017-10-27-134100_consistent_hashing_2
2018/06/22 16:28:35 [info] core migrated up to: 2017-09-14-140200_routes_and_services
2018/06/22 16:28:35 [info] core migrated up to: 2017-10-25-180700_plugins_routes_and_services
2018/06/22 16:28:35 [info] core migrated up to: 2018-02-23-142400_targets_add_index
2018/06/22 16:28:35 [info] core migrated up to: 2018-03-22-141700_create_new_ssl_tables
2018/06/22 16:28:35 [info] core migrated up to: 2018-03-26-234600_copy_records_to_new_ssl_tables
2018/06/22 16:28:36 [info] core migrated up to: 2018-03-27-002500_drop_old_ssl_tables
2018/06/22 16:28:36 [info] core migrated up to: 2018-03-16-160000_index_consumers
2018/06/22 16:28:36 [info] core migrated up to: 2018-05-17-173100_hash_on_cookie
2018/06/22 16:28:36 [info] migrating response-transformer for keyspace bungle_keyspace
2018/06/22 16:28:38 [info] response-transformer migrated up to: 2016-03-10-160000_resp_trans_schema_changes
2018/06/22 16:28:38 [info] migrating ip-restriction for keyspace bungle_keyspace
2018/06/22 16:28:38 [info] ip-restriction migrated up to: 2016-05-24-remove-cache
2018/06/22 16:28:38 [info] migrating statsd for keyspace bungle_keyspace
2018/06/22 16:28:38 [info] statsd migrated up to: 2017-06-09-160000_statsd_schema_changes
2018/06/22 16:28:38 [info] migrating jwt for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] jwt migrated up to: 2015-06-09-jwt-auth
2018/06/22 16:28:39 [info] jwt migrated up to: 2016-03-07-jwt-alg
2018/06/22 16:28:39 [info] jwt migrated up to: 2017-07-31-120200_jwt-auth_preflight_default
2018/06/22 16:28:39 [info] jwt migrated up to: 2017-10-25-211200_jwt_cookie_names_default
2018/06/22 16:28:39 [info] jwt migrated up to: 2018-03-15-150000_jwt_maximum_expiration
2018/06/22 16:28:39 [info] migrating cors for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] cors migrated up to: 2017-03-14_multiple_orgins
2018/06/22 16:28:39 [info] migrating basic-auth for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] basic-auth migrated up to: 2015-08-03-132400_init_basicauth
2018/06/22 16:28:39 [info] migrating key-auth for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] key-auth migrated up to: 2015-07-31-172400_init_keyauth
2018/06/22 16:28:39 [info] key-auth migrated up to: 2017-07-31-120200_key-auth_preflight_default
2018/06/22 16:28:39 [info] migrating ldap-auth for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] ldap-auth migrated up to: 2017-10-23-150900_header_type_default
2018/06/22 16:28:39 [info] migrating hmac-auth for keyspace bungle_keyspace
2018/06/22 16:28:39 [info] hmac-auth migrated up to: 2015-09-16-132400_init_hmacauth
2018/06/22 16:28:41 [info] hmac-auth migrated up to: 2017-06-21-132400_init_hmacauth
2018/06/22 16:28:41 [info] migrating datadog for keyspace bungle_keyspace
2018/06/22 16:28:42 [info] datadog migrated up to: 2017-06-09-160000_datadog_schema_changes
2018/06/22 16:28:42 [info] migrating tcp-log for keyspace bungle_keyspace
2018/06/22 16:28:42 [info] tcp-log migrated up to: 2017-12-13-120000_tcp-log_tls
2018/06/22 16:28:42 [info] migrating acl for keyspace bungle_keyspace
2018/06/22 16:28:42 [info] acl migrated up to: 2015-08-25-841841_init_acl
2018/06/22 16:28:42 [info] migrating response-ratelimiting for keyspace bungle_keyspace
2018/06/22 16:28:42 [info] response-ratelimiting migrated up to: 2015-08-21_init_response-rate-limiting
2018/06/22 16:28:43 [info] response-ratelimiting migrated up to: 2016-08-04-321512_response-rate-limiting_policies
2018/06/22 16:28:44 [info] response-ratelimiting migrated up to: 2017-12-19-120000_add_route_and_service_id_to_response_ratelimiting
2018/06/22 16:28:44 [info] migrating request-transformer for keyspace bungle_keyspace
2018/06/22 16:28:45 [info] request-transformer migrated up to: 2016-03-10-160000_req_trans_schema_changes
2018/06/22 16:28:45 [info] migrating rate-limiting for keyspace bungle_keyspace
2018/06/22 16:28:46 [info] rate-limiting migrated up to: 2015-08-03-132400_init_ratelimiting
2018/06/22 16:28:47 [info] rate-limiting migrated up to: 2016-07-25-471385_ratelimiting_policies
2018/06/22 16:28:48 [info] rate-limiting migrated up to: 2017-11-30-120000_add_route_and_service_id
2018/06/22 16:28:48 [info] migrating oauth2 for keyspace bungle_keyspace
2018/06/22 16:28:48 [info] oauth2 migrated up to: 2015-08-03-132400_init_oauth2
2018/06/22 16:28:48 [info] oauth2 migrated up to: 2015-08-24-215800_cascade_delete_index
2018/06/22 16:28:49 [info] oauth2 migrated up to: 2016-02-29-435612_remove_ttl
2018/06/22 16:28:51 [info] oauth2 migrated up to: 2016-04-14-283949_serialize_redirect_uri
2018/06/22 16:28:52 [info] oauth2 migrated up to: 2016-07-15-oauth2_code_credential_id
2018/06/22 16:28:52 [info] oauth2 migrated up to: 2016-09-19-oauth2_code_index
2018/06/22 16:28:52 [info] oauth2 migrated up to: 2016-09-19-oauth2_api_id
2018/06/22 16:28:53 [info] oauth2 migrated up to: 2016-12-15-set_global_credentials
2018/06/22 16:28:53 [info] oauth2 migrated up to: 2017-10-19-set_auth_header_name_default
2018/06/22 16:28:53 [info] oauth2 migrated up to: 2017-10-11-oauth2_new_refresh_token_ttl_config_value
2018/06/22 16:28:54 [info] oauth2 migrated up to: 2018-01-09-oauth2_c_add_service_id
2018/06/22 16:28:54 [info] 66 migrations ran
2018/06/22 16:28:54 [info] waiting for Cassandra schema consensus (120000ms timeout)...
2018/06/22 16:28:55 [info] Cassandra schema consensus: reached
2018/06/22 16:28:55 [verbose] migrations up to date
And lastly, I gave one more go at migrating against my existing 0.13.1 keyspace on the host to make sure it's still failing. Can confirm: still fails.
2018/06/22 16:53:34 [verbose] Kong: 0.14.0rc1
2018/06/22 16:53:34 [debug] ngx_lua: 10013
2018/06/22 16:53:34 [debug] nginx: 1013006
2018/06/22 16:53:34 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/22 16:53:34 [verbose] no config file found at /etc/kong/kong.conf
2018/06/22 16:53:34 [verbose] no config file found at /etc/kong.conf
2018/06/22 16:53:34 [verbose] no config file, skipping loading
2018/06/22 16:53:34 [debug] reading environment variables
2018/06/22 16:53:34 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/22 16:53:34 [debug] KONG_PG_USER ENV found with ""
2018/06/22 16:53:34 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/22 16:53:34 [debug] KONG_PG_HOST ENV found with ""
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/22 16:53:34 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/22 16:53:34 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC1"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "ONE"
2018/06/22 16:53:34 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/22 16:53:34 [debug] KONG_PG_SSL ENV found with "off"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "SimpleStrategy"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "120000"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "RoundRobin"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "10000"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC1:2,DC2:2"
2018/06/22 16:53:34 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/22 16:53:34 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/22 16:53:34 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/22 16:53:34 [debug] KONG_PG_PORT ENV found with ""
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/22 16:53:34 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev"
2018/06/22 16:53:34 [debug] admin_access_log = "logs/admin_access.log"
2018/06/22 16:53:34 [debug] admin_error_log = "logs/error.log"
2018/06/22 16:53:34 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/22 16:53:34 [debug] anonymous_reports = true
2018/06/22 16:53:34 [debug] cassandra_consistency = "ONE"
2018/06/22 16:53:34 [debug] cassandra_contact_points = {"server00471"}
2018/06/22 16:53:34 [debug] cassandra_data_centers = {"DC1:2","DC2:2"}
2018/06/22 16:53:34 [debug] cassandra_keyspace = "kong_dev"
2018/06/22 16:53:34 [debug] cassandra_lb_policy = "RoundRobin"
2018/06/22 16:53:34 [debug] cassandra_local_datacenter = "DC1"
2018/06/22 16:53:34 [debug] cassandra_password = "******"
2018/06/22 16:53:34 [debug] cassandra_port = 9042
2018/06/22 16:53:34 [debug] cassandra_repl_factor = 2
2018/06/22 16:53:34 [debug] cassandra_repl_strategy = "SimpleStrategy"
2018/06/22 16:53:34 [debug] cassandra_schema_consensus_timeout = 120000
2018/06/22 16:53:34 [debug] cassandra_ssl = true
2018/06/22 16:53:34 [debug] cassandra_ssl_verify = true
2018/06/22 16:53:34 [debug] cassandra_timeout = 10000
2018/06/22 16:53:34 [debug] cassandra_username = "*****"
2018/06/22 16:53:34 [debug] client_body_buffer_size = "8k"
2018/06/22 16:53:34 [debug] client_max_body_size = "0"
2018/06/22 16:53:34 [debug] client_ssl = false
2018/06/22 16:53:34 [debug] custom_plugins = {}
2018/06/22 16:53:34 [debug] database = "cassandra"
2018/06/22 16:53:34 [debug] db_cache_ttl = 0
2018/06/22 16:53:34 [debug] db_update_frequency = 5
2018/06/22 16:53:34 [debug] db_update_propagation = 0
2018/06/22 16:53:34 [debug] dns_error_ttl = 1
2018/06/22 16:53:34 [debug] dns_hostsfile = "/etc/hosts"
2018/06/22 16:53:34 [debug] dns_no_sync = false
2018/06/22 16:53:34 [debug] dns_not_found_ttl = 30
2018/06/22 16:53:34 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/22 16:53:34 [debug] dns_resolver = {}
2018/06/22 16:53:34 [debug] dns_stale_ttl = 4
2018/06/22 16:53:34 [debug] error_default_type = "text/plain"
2018/06/22 16:53:34 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/22 16:53:34 [debug] log_level = "notice"
2018/06/22 16:53:34 [debug] lua_package_cpath = ""
2018/06/22 16:53:34 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/22 16:53:34 [debug] lua_socket_pool_size = 30
2018/06/22 16:53:34 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/22 16:53:34 [debug] lua_ssl_verify_depth = 3
2018/06/22 16:53:34 [debug] mem_cache_size = "128m"
2018/06/22 16:53:34 [debug] nginx_admin_directives = {}
2018/06/22 16:53:34 [debug] nginx_daemon = "off"
2018/06/22 16:53:34 [debug] nginx_http_directives = {}
2018/06/22 16:53:34 [debug] nginx_optimizations = true
2018/06/22 16:53:34 [debug] nginx_proxy_directives = {}
2018/06/22 16:53:34 [debug] nginx_user = "nobody nobody"
2018/06/22 16:53:34 [debug] nginx_worker_processes = "auto"
2018/06/22 16:53:34 [debug] pg_ssl = false
2018/06/22 16:53:34 [debug] pg_ssl_verify = false
2018/06/22 16:53:34 [debug] plugins = {"bundled"}
2018/06/22 16:53:34 [debug] prefix = "/usr/local/kong/"
2018/06/22 16:53:34 [debug] proxy_access_log = "logs/access.log"
2018/06/22 16:53:34 [debug] proxy_error_log = "logs/error.log"
2018/06/22 16:53:34 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/22 16:53:34 [debug] real_ip_header = "X-Real-IP"
2018/06/22 16:53:34 [debug] real_ip_recursive = "off"
2018/06/22 16:53:34 [debug] ssl_cipher_suite = "modern"
2018/06/22 16:53:34 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/22 16:53:34 [debug] trusted_ips = {}
2018/06/22 16:53:34 [debug] upstream_keepalive = 60
2018/06/22 16:53:34 [verbose] prefix in use: /usr/local/kong
2018/06/22 16:53:34 [verbose] running datastore migrations
2018/06/22 16:53:35 [info] migrating jwt for keyspace kong_dev
Error:
/usr/local/share/lua/5.1/resty/cassandra/policies/lb/rr.lua:38: attempt to get length of field 'peers' (a nil value)
stack traceback:
/usr/local/share/lua/5.1/resty/cassandra/policies/lb/rr.lua:38: in function 'iter'
/usr/local/share/lua/5.1/resty/cassandra/cluster.lua:422: in function 'next_coordinator'
...share/lua/5.1/kong/db/strategies/cassandra/connector.lua:180: in function 'query'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:83: in function 'is_partitioned'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:122: in function 'build_queries'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:197: in function 'get_query'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:587: in function 'select'
/usr/local/share/lua/5.1/kong/db/dao/init.lua:236: in function 'select'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:409: in function 'check_foreign_key_in_new_db'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:438: in function 'check_foreign_constraints'
...
/usr/local/share/lua/5.1/kong/dao/factory.lua:417: in function 'migrate'
/usr/local/share/lua/5.1/kong/dao/factory.lua:535: in function 'run_migrations'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:35: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:87>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:44>
/usr/local/bin/kong:7: in function 'file_gen'
init_worker_by_lua:48: in function <init_worker_by_lua:46>
[C]: in function 'xpcall'
init_worker_by_lua:55: in function <init_worker_by_lua:53>
Info on the existing keyspace:
CREATE KEYSPACE kong_dev WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': '2', 'DC2': '2'} AND durable_writes = true;
Info on Cluster status too:
Datacenter: DC1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.***.**.*** 1.57 MiB 256 ? de44f0ea-9ac4-4bef-ac05-c92926e6fc89 RACK3
UN 10.***.**.*** 1.96 MiB 256 ? 42f16255-69c6-4aee-b39e-6f2c97513e56 RACK1
UN 10.***.**.*** 2 MiB 256 ? b62d5d30-0b88-4dd6-b1c1-4f800606a562 RACK4
Datacenter: DC2
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 10.***.**.*** 1.08 MiB 256 ? 121e6024-092f-426d-af0d-a4af99b6e16b RACK5
UN 10.***.**.*** 1.58 MiB 256 ? f340733a-8641-4ed7-b489-0b4174eba29b RACK6
UN 10.***.**.*** 1.99 MiB 256 ? 398341d8-b5f9-4004-b2ce-3a0322c30821 RACK2
Okay, so I stood up another temp keyspace with minimal data, created initially from the 0.13.1 Kong migrations, and I was still able to reproduce the error when trying to migrate it to 0.14.0rc1. Posting logs again below, and I will attach the schema .cql from before and after running the 0.14.0rc1 migration against it. The new keyspace name is kong_dev_new, just FYI. Logs here:
2018/06/25 04:30:17 [verbose] Kong: 0.14.0rc1
2018/06/25 04:30:17 [debug] ngx_lua: 10013
2018/06/25 04:30:17 [debug] nginx: 1013006
2018/06/25 04:30:17 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/25 04:30:17 [verbose] no config file found at /etc/kong/kong.conf
2018/06/25 04:30:17 [verbose] no config file found at /etc/kong.conf
2018/06/25 04:30:17 [verbose] no config file, skipping loading
2018/06/25 04:30:17 [debug] reading environment variables
2018/06/25 04:30:17 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_USERNAME ENV found with "******"
2018/06/25 04:30:17 [debug] KONG_PG_USER ENV found with ""
2018/06/25 04:30:17 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/25 04:30:17 [debug] KONG_PG_HOST ENV found with ""
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/25 04:30:17 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/25 04:30:17 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC2"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "LOCAL_QUORUM"
2018/06/25 04:30:17 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/25 04:30:17 [debug] KONG_PG_SSL ENV found with "off"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "NetworkTopologyStrategy"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "60000"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "DCAwareRoundRobin"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "10000"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC2:2,DC1:2"
2018/06/25 04:30:17 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/25 04:30:17 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/25 04:30:17 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/25 04:30:17 [debug] KONG_PG_PORT ENV found with ""
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/25 04:30:17 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev_new"
2018/06/25 04:30:17 [debug] admin_access_log = "logs/admin_access.log"
2018/06/25 04:30:17 [debug] admin_error_log = "logs/error.log"
2018/06/25 04:30:17 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/25 04:30:17 [debug] anonymous_reports = true
2018/06/25 04:30:17 [debug] cassandra_consistency = "LOCAL_QUORUM"
2018/06/25 04:30:17 [debug] cassandra_contact_points = {"server00471"}
2018/06/25 04:30:17 [debug] cassandra_data_centers = {"DC2:2","DC1:2"}
2018/06/25 04:30:17 [debug] cassandra_keyspace = "kong_dev_new"
2018/06/25 04:30:17 [debug] cassandra_lb_policy = "DCAwareRoundRobin"
2018/06/25 04:30:17 [debug] cassandra_local_datacenter = "DC2"
2018/06/25 04:30:17 [debug] cassandra_password = "******"
2018/06/25 04:30:17 [debug] cassandra_port = 9042
2018/06/25 04:30:17 [debug] cassandra_repl_factor = 2
2018/06/25 04:30:17 [debug] cassandra_repl_strategy = "NetworkTopologyStrategy"
2018/06/25 04:30:17 [debug] cassandra_schema_consensus_timeout = 60000
2018/06/25 04:30:17 [debug] cassandra_ssl = true
2018/06/25 04:30:17 [debug] cassandra_ssl_verify = true
2018/06/25 04:30:17 [debug] cassandra_timeout = 10000
2018/06/25 04:30:17 [debug] cassandra_username = "******"
2018/06/25 04:30:17 [debug] client_body_buffer_size = "8k"
2018/06/25 04:30:17 [debug] client_max_body_size = "0"
2018/06/25 04:30:17 [debug] client_ssl = false
2018/06/25 04:30:17 [debug] custom_plugins = {}
2018/06/25 04:30:17 [debug] database = "cassandra"
2018/06/25 04:30:17 [debug] db_cache_ttl = 0
2018/06/25 04:30:17 [debug] db_update_frequency = 5
2018/06/25 04:30:17 [debug] db_update_propagation = 0
2018/06/25 04:30:17 [debug] dns_error_ttl = 1
2018/06/25 04:30:17 [debug] dns_hostsfile = "/etc/hosts"
2018/06/25 04:30:17 [debug] dns_no_sync = false
2018/06/25 04:30:17 [debug] dns_not_found_ttl = 30
2018/06/25 04:30:17 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/25 04:30:17 [debug] dns_resolver = {}
2018/06/25 04:30:17 [debug] dns_stale_ttl = 4
2018/06/25 04:30:17 [debug] error_default_type = "text/plain"
2018/06/25 04:30:17 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/25 04:30:17 [debug] log_level = "notice"
2018/06/25 04:30:17 [debug] lua_package_cpath = ""
2018/06/25 04:30:17 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/25 04:30:17 [debug] lua_socket_pool_size = 30
2018/06/25 04:30:17 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/25 04:30:17 [debug] lua_ssl_verify_depth = 3
2018/06/25 04:30:17 [debug] mem_cache_size = "128m"
2018/06/25 04:30:17 [debug] nginx_admin_directives = {}
2018/06/25 04:30:17 [debug] nginx_daemon = "off"
2018/06/25 04:30:17 [debug] nginx_http_directives = {}
2018/06/25 04:30:17 [debug] nginx_optimizations = true
2018/06/25 04:30:17 [debug] nginx_proxy_directives = {}
2018/06/25 04:30:17 [debug] nginx_user = "nobody nobody"
2018/06/25 04:30:17 [debug] nginx_worker_processes = "auto"
2018/06/25 04:30:17 [debug] pg_ssl = false
2018/06/25 04:30:17 [debug] pg_ssl_verify = false
2018/06/25 04:30:17 [debug] plugins = {"bundled"}
2018/06/25 04:30:17 [debug] prefix = "/usr/local/kong/"
2018/06/25 04:30:17 [debug] proxy_access_log = "logs/access.log"
2018/06/25 04:30:17 [debug] proxy_error_log = "logs/error.log"
2018/06/25 04:30:17 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/25 04:30:17 [debug] real_ip_header = "X-Real-IP"
2018/06/25 04:30:17 [debug] real_ip_recursive = "off"
2018/06/25 04:30:17 [debug] ssl_cipher_suite = "modern"
2018/06/25 04:30:17 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/25 04:30:17 [debug] trusted_ips = {}
2018/06/25 04:30:17 [debug] upstream_keepalive = 60
Error:
...ocal/share/lua/5.1/resty/cassandra/policies/lb/dc_rr.lua:71: attempt to get length of field 'local_peers' (a nil value)
stack traceback:
...ocal/share/lua/5.1/resty/cassandra/policies/lb/dc_rr.lua:71: in function 'iter'
/usr/local/share/lua/5.1/resty/cassandra/cluster.lua:422: in function 'next_coordinator'
...share/lua/5.1/kong/db/strategies/cassandra/connector.lua:180: in function 'query'
2018/06/25 04:30:17 [verbose] prefix in use: /usr/local/kong
2018/06/25 04:30:17 [verbose] running datastore migrations
2018/06/25 04:30:18 [info] migrating core for keyspace kong_dev_new
2018/06/25 04:30:18 [info] core migrated up to: 2018-03-22-141700_create_new_ssl_tables
2018/06/25 04:30:18 [info] core migrated up to: 2018-03-26-234600_copy_records_to_new_ssl_tables
2018/06/25 04:30:19 [info] core migrated up to: 2018-03-27-002500_drop_old_ssl_tables
2018/06/25 04:30:19 [info] core migrated up to: 2018-03-16-160000_index_consumers
2018/06/25 04:30:19 [info] core migrated up to: 2018-05-17-173100_hash_on_cookie
2018/06/25 04:30:19 [info] migrating jwt for keyspace kong_dev_new
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:83: in function 'is_partitioned'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:122: in function 'build_queries'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:197: in function 'get_query'
...ocal/share/lua/5.1/kong/db/strategies/cassandra/init.lua:587: in function 'select'
/usr/local/share/lua/5.1/kong/db/dao/init.lua:236: in function 'select'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:409: in function 'check_foreign_key_in_new_db'
/usr/local/share/lua/5.1/kong/dao/db/cassandra.lua:438: in function 'check_foreign_constraints'
...
/usr/local/share/lua/5.1/kong/dao/factory.lua:417: in function 'migrate'
/usr/local/share/lua/5.1/kong/dao/factory.lua:535: in function 'run_migrations'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:35: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:87>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:44>
/usr/local/bin/kong:7: in function 'file_gen'
init_worker_by_lua:48: in function <init_worker_by_lua:46>
[C]: in function 'xpcall'
init_worker_by_lua:55: in function <init_worker_by_lua:53>
As promised, extra DD here. Within the ZIP you will find the SCHEMA of the newer minimal keyspace pre-0.14.0rc1 migrations, and then post-0.14.0rc1 migrations. Also attached is the table data as CSV; as you will see it's fairly minimal, with only some very basic tables populated with 10 entries or fewer, or empty in most cases. I have a hunch it's still somewhere in the jwt migration logic and the populated jwt-related tables. Don't worry about the oauth2/jwt secrets and such in here, as this dev env will never see the light of day (external exposure) lol. I suspect that if you import the schema and use the Cassandra "COPY FROM" to bring in the CSV data (make sure to include that jwt data) you will be able to replicate the error I have been seeing. Let me know if I can provide any other tools for the arsenal, but I think I exhausted it with this latest drop.
Well, it seems to fail at jwt time, but the initial error is failing to get peers (for the rr policy) or local_peers being nil (dc-aware rr). So I really can't say I have any real idea of the root cause, but I am confident it's not Kong -> C* connectivity (given the user-supplied parameters).
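To illustrate why an uninitialized connector would surface as exactly this nil, here is a rough sketch (my own reading of the policy shape shown earlier, not the verbatim lua-cassandra source) of the init step that has to run before the iter/next_peer code can work -- iter dereferences local_peers/remote_peers, and those only exist once init has been handed a refreshed peer list:
local _M = {} -- sketch only; assumes local_dc was stored by the policy constructor
local function split_peers(local_dc, peers)
  local local_peers, remote_peers = {}, {}
  for _, peer in ipairs(peers) do
    if peer.data_center == local_dc then
      local_peers[#local_peers + 1] = peer
    else
      remote_peers[#remote_peers + 1] = peer
    end
  end
  return local_peers, remote_peers
end
function _M:init(peers)
  self.local_peers, self.remote_peers = split_peers(self.local_dc, peers)
  self.start_local_idx = 1  -- simplified; the real policy rotates these
  self.start_remote_idx = 1
end
-- If init (and the peer refresh that feeds it) never runs, self.local_peers stays
-- nil and the "#state.local_peers" length lookup in iter fails exactly like the
-- traceback above.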
Thanks @jeremyjpj0916, I can confirm that we can reproduce the error! Really great! Thanks a lot. I will try to figure out the cause next.
@bungle Better to debug it now rather than later, when all the open source users flood in once 0.14.0 goes GA and everyone attempts migrations. Reproducing is a great first step though! If y'all were unable to do so, I would have started forking the cassandra-lua logic and adding tons of ngx print statements, and I probably would have been a bit out of my league at this stage in my competency lol, so really glad the pros can see it occurring now!
@jeremyjpj0916
I was able to find the issue. It seems we have some places in the code where our new model's connector is not initialized, and that causes problems, especially on those migrations that use the new model.
So changing this line: https://github.com/Kong/kong/blob/release/0.14.0/kong/cmd/migrations.lua#L32
local dao = assert(DAOFactory.new(conf, DB.new(conf)))
TO
local db = DB.new(conf)
assert(db:init_connector())
local dao = assert(DAOFactory.new(conf, db))
Should fix the problem for migrations at least. We are still investigating other places where we might have missed this.
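For extra context, here is a rough sketch of the kind of setup that connector initialization has to perform before next_coordinator can pick a host (assumed names and options, mirroring the env vars in the logs above -- not the actual Kong connector code):
local Cluster = require "resty.cassandra.cluster"
local dc_rr   = require "resty.cassandra.policies.lb.dc_rr"
local cluster = assert(Cluster.new {
  shm            = "kong_cassandra",       -- assumed shared dict name
  contact_points = { "server00471" },
  keyspace       = "kong_dev_new",
  ssl            = true,
  verify         = true,
  lb_policy      = dc_rr.new("DC2"),
})
-- refresh() is what discovers the peers and hands them to the lb policy; skip it
-- (or skip initializing the connector) and the DC-aware policy has no local_peers
-- to iterate, which is the nil error seen earlier.
assert(cluster:refresh())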
So thanks again a hundred times!
One last ask, could you also try that change in your environment (so that we can be sure that we caught the right issue here)?
@bungle Happy to test such logic. Could that hotfix build be pushed to the 0.14.0rc1 tarball on Bintray and docker-kong updated with the new checksum (https://github.com/Kong/docker-kong)? That would make it the easiest for me to validate, since my migrations logic is all based on the docker-kong implementation, and the added benefit is that I can continue testing 0.14.0rc1 on deployment as well with the incorporated hotfix changes. If not, I have workarounds I can strong-arm to deviate from how I run migrations/deploys today and apply all those changes, but the cloud platform we deploy on is not super flexible (we can't just ssh into a pod and make changes, we lose write access, so it takes some Dockerfile hax with our builds #nobueno).
Now that we have upgrade tests in place that reproduce this, and the fix was merged, I am confident enough to close this. Thanks again @jeremyjpj0916.
@jeremyjpj0916 I will reopen this, as we decided to make an rc2 release (should be out in a week). We'd still like to hear whether that migrates cleanly in your environment before releasing the actual 0.14.0.
@bungle Sounds like the best approach. Once rc2 is out in the wild I will also validate the major changes (like the enhanced logging, new jwt expiry, and admin API changes)!
@jeremyjpj0916 Awesome! We thought that not being able to migrate to 0.14.0rc1 would be a big blocker to test the actual 0.14 features, so... rc2 it is! Glad you will use it for testing.
should be out in a week
A lot sooner than this I hope! Maybe today or tomorrow.
@thibaultcha I am a big fan of #sameDayOps (where the days of provisioning/support taking weeks are gone. I don't care so much about the methodology to get there -- agile vs waterfall (although don't do this one lol), etc. -- just that motivation and talented engineering drive results). I like the idea of the rc's coming out early before the grand release as well; it lets bugs like this one get caught by users and saves you support question woes later in community or enterprise.
rc2 is now available for testing :) The images should be on Docker Hub pretty soon.
https://discuss.konghq.com/t/kong-ce-0-14-0rc2-available-for-testing/1339
@thibaultcha Great to hear. Will start testing again this evening and validate a good amount tomorrow too.
p.s. Was great to talk to you + Cooper, lets do it again sometime later down the road when I have more insights to share that could benefit Kong and the community. Have a good evening!
I thought the smoke had cleared and migrations would be a go, yet we ran into a new snag with the 0.14.0rc2 migrations. Let me drop logs and provide my first thoughts; I will also discuss with my partner tomorrow to see if we come up with any further ideas brainstorming together.
New 0.14.0rc2 log response:
2018/06/29 06:23:20 [verbose] Kong: 0.14.0rc2
2018/06/29 06:23:20 [debug] ngx_lua: 10013
2018/06/29 06:23:20 [debug] nginx: 1013006
2018/06/29 06:23:20 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/29 06:23:20 [verbose] no config file found at /etc/kong/kong.conf
2018/06/29 06:23:20 [verbose] no config file found at /etc/kong.conf
2018/06/29 06:23:20 [verbose] no config file, skipping loading
2018/06/29 06:23:20 [debug] reading environment variables
2018/06/29 06:23:20 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/29 06:23:20 [debug] KONG_PG_USER ENV found with ""
2018/06/29 06:23:20 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/29 06:23:20 [debug] KONG_PG_HOST ENV found with ""
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/29 06:23:20 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/29 06:23:20 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC2"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "ONE"
2018/06/29 06:23:20 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/29 06:23:20 [debug] KONG_PG_SSL ENV found with "off"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "NetworkTopologyStrategy"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "120000"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "DCAwareRoundRobin"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "20000"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC2:2,DC1:2"
2018/06/29 06:23:20 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/29 06:23:20 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/29 06:23:20 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/29 06:23:20 [debug] KONG_PG_PORT ENV found with ""
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/29 06:23:20 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev"
2018/06/29 06:23:20 [debug] admin_access_log = "logs/admin_access.log"
2018/06/29 06:23:20 [debug] admin_error_log = "logs/error.log"
2018/06/29 06:23:20 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/29 06:23:20 [debug] anonymous_reports = true
2018/06/29 06:23:20 [debug] cassandra_consistency = "ONE"
2018/06/29 06:23:20 [debug] cassandra_contact_points = {"server00471"}
2018/06/29 06:23:20 [debug] cassandra_data_centers = {"DC2:2","DC1:2"}
2018/06/29 06:23:20 [debug] cassandra_keyspace = "kong_dev"
2018/06/29 06:23:20 [debug] cassandra_lb_policy = "DCAwareRoundRobin"
2018/06/29 06:23:20 [debug] cassandra_local_datacenter = "DC2"
2018/06/29 06:23:20 [debug] cassandra_password = "******"
2018/06/29 06:23:20 [debug] cassandra_port = 9042
2018/06/29 06:23:20 [debug] cassandra_repl_factor = 2
2018/06/29 06:23:20 [debug] cassandra_repl_strategy = "NetworkTopologyStrategy"
2018/06/29 06:23:20 [debug] cassandra_schema_consensus_timeout = 120000
2018/06/29 06:23:20 [debug] cassandra_ssl = true
2018/06/29 06:23:20 [debug] cassandra_ssl_verify = true
2018/06/29 06:23:20 [debug] cassandra_timeout = 20000
2018/06/29 06:23:20 [debug] cassandra_username = "*****"
2018/06/29 06:23:20 [debug] client_body_buffer_size = "8k"
2018/06/29 06:23:20 [debug] client_max_body_size = "0"
2018/06/29 06:23:20 [debug] client_ssl = false
2018/06/29 06:23:20 [debug] custom_plugins = {}
2018/06/29 06:23:20 [debug] database = "cassandra"
2018/06/29 06:23:20 [debug] db_cache_ttl = 0
2018/06/29 06:23:20 [debug] db_update_frequency = 5
2018/06/29 06:23:20 [debug] db_update_propagation = 0
2018/06/29 06:23:20 [debug] dns_error_ttl = 1
2018/06/29 06:23:20 [debug] dns_hostsfile = "/etc/hosts"
2018/06/29 06:23:20 [debug] dns_no_sync = false
2018/06/29 06:23:20 [debug] dns_not_found_ttl = 30
2018/06/29 06:23:20 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/29 06:23:20 [debug] dns_resolver = {}
2018/06/29 06:23:20 [debug] dns_stale_ttl = 4
2018/06/29 06:23:20 [debug] error_default_type = "text/plain"
2018/06/29 06:23:20 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/29 06:23:20 [debug] log_level = "notice"
2018/06/29 06:23:20 [debug] lua_package_cpath = ""
2018/06/29 06:23:20 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/29 06:23:20 [debug] lua_socket_pool_size = 30
2018/06/29 06:23:20 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/29 06:23:20 [debug] lua_ssl_verify_depth = 3
2018/06/29 06:23:20 [debug] mem_cache_size = "128m"
2018/06/29 06:23:20 [debug] nginx_admin_directives = {}
2018/06/29 06:23:20 [debug] nginx_daemon = "off"
2018/06/29 06:23:20 [debug] nginx_http_directives = {}
2018/06/29 06:23:20 [debug] nginx_optimizations = true
2018/06/29 06:23:20 [debug] nginx_proxy_directives = {}
2018/06/29 06:23:20 [debug] nginx_user = "nobody nobody"
2018/06/29 06:23:20 [debug] nginx_worker_processes = "auto"
2018/06/29 06:23:20 [debug] pg_ssl = false
2018/06/29 06:23:20 [debug] pg_ssl_verify = false
2018/06/29 06:23:20 [debug] plugins = {"bundled"}
2018/06/29 06:23:20 [debug] prefix = "/usr/local/kong/"
2018/06/29 06:23:20 [debug] proxy_access_log = "logs/access.log"
2018/06/29 06:23:20 [debug] proxy_error_log = "logs/error.log"
2018/06/29 06:23:20 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/29 06:23:20 [debug] real_ip_header = "X-Real-IP"
2018/06/29 06:23:20 [debug] real_ip_recursive = "off"
2018/06/29 06:23:20 [debug] ssl_cipher_suite = "modern"
2018/06/29 06:23:20 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/29 06:23:20 [debug] trusted_ips = {}
2018/06/29 06:23:20 [debug] upstream_keepalive = 60
Error:
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:33: all hosts tried for query failed. server00471: SSL handshake: 19: self signed certificate in certificate chain
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/cmd/migrations.lua:33: in function 'cmd_exec'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:87>
[C]: in function 'xpcall'
/usr/local/share/lua/5.1/kong/cmd/init.lua:87: in function </usr/local/share/lua/5.1/kong/cmd/init.lua:44>
/usr/local/bin/kong:7: in function 'file_gen'
init_worker_by_lua:48: in function <init_worker_by_lua:46>
[C]: in function 'xpcall'
init_worker_by_lua:55: in function <init_worker_by_lua:53>
2018/06/29 06:23:20 [verbose] prefix in use: /usr/local/kong
So it was complaining about the fact that our certs are internally generated/self-signed? (Our company is its own internal CA and generates them as such.) Our /usr/local/kong/ssl/kongcert.pem contains the Root CA as well as 2 intermediaries, hence another reason we set lua_ssl_verify_depth = 3. This has never given us trouble during migrations in the past.
kongcert.pem:
subject=CN=CompanyInternalIssuingCA2, O=Company, L=City, S=State, C=US
issuer=CN=Company Internal Policy CA, O=Company, L=City, S=State, C=US
-----BEGIN CERTIFICATE-----
... stuff here...
-----END CERTIFICATE-----
subject=CN=Company Internal Policy CA, O=Company, L=City, S=State, C=US
issuer=CN=Company Root CA, O=Company, L=City, S=State, C=US
-----BEGIN CERTIFICATE-----
... stuff here...
-----END CERTIFICATE-----
subject=CN=Company Root CA, O=Company, L=City, S=State, C=US
issuer=CN=Company Root CA, O=Company, L=City, S=State, C=US
-----BEGIN CERTIFICATE-----
... stuff here...
-----END CERTIFICATE-----
So then I thought, well, could we have messed up something else or changed any cert files? I looked at the dates on all our cloud "secrets" containing this PEM, and on the C* servers themselves, and no certs had changed. So, just to make sure I was not going insane, I decided to run migrations using the 0.13.1 image against the same node to confirm there were no connection problems there:
2018/06/29 06:42:15 [verbose] Kong: 0.13.1
2018/06/29 06:42:15 [debug] ngx_lua: 10011
2018/06/29 06:42:15 [debug] nginx: 1013006
2018/06/29 06:42:15 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/29 06:42:15 [verbose] no config file found at /etc/kong/kong.conf
2018/06/29 06:42:15 [verbose] no config file found at /etc/kong.conf
2018/06/29 06:42:15 [verbose] no config file, skipping loading
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/29 06:42:15 [debug] KONG_PG_USER ENV found with ""
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/29 06:42:15 [debug] KONG_PG_HOST ENV found with ""
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/29 06:42:15 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/29 06:42:15 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC2"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "ONE"
2018/06/29 06:42:15 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/29 06:42:15 [debug] KONG_PG_SSL ENV found with "off"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "NetworkTopologyStrategy"
2018/06/29 06:42:15 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/29 06:42:15 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "120000"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "20000"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/29 06:42:15 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/29 06:42:15 [debug] KONG_PG_PORT ENV found with ""
2018/06/29 06:42:15 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/29 06:42:15 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC2:2,DC1:2"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev"
2018/06/29 06:42:15 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "RoundRobin"
2018/06/29 06:42:15 [debug] admin_access_log = "logs/admin_access.log"
2018/06/29 06:42:15 [debug] admin_error_log = "logs/error.log"
2018/06/29 06:42:15 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/29 06:42:15 [debug] anonymous_reports = true
2018/06/29 06:42:15 [debug] cassandra_consistency = "ONE"
2018/06/29 06:42:15 [debug] cassandra_contact_points = {"server00471"}
2018/06/29 06:42:15 [debug] cassandra_data_centers = {"DC2:2","DC1:2"}
2018/06/29 06:42:15 [debug] cassandra_keyspace = "kong_dev"
2018/06/29 06:42:15 [debug] cassandra_lb_policy = "RoundRobin"
2018/06/29 06:42:15 [debug] cassandra_local_datacenter = "DC2"
2018/06/29 06:42:15 [debug] cassandra_password = "******"
2018/06/29 06:42:15 [debug] cassandra_port = 9042
2018/06/29 06:42:15 [debug] cassandra_repl_factor = 2
2018/06/29 06:42:15 [debug] cassandra_repl_strategy = "NetworkTopologyStrategy"
2018/06/29 06:42:15 [debug] cassandra_schema_consensus_timeout = 120000
2018/06/29 06:42:15 [debug] cassandra_ssl = true
2018/06/29 06:42:15 [debug] cassandra_ssl_verify = true
2018/06/29 06:42:15 [debug] cassandra_timeout = 20000
2018/06/29 06:42:15 [debug] cassandra_username = "*****"
2018/06/29 06:42:15 [debug] client_body_buffer_size = "8k"
2018/06/29 06:42:15 [debug] client_max_body_size = "0"
2018/06/29 06:42:15 [debug] client_ssl = false
2018/06/29 06:42:15 [debug] custom_plugins = {}
2018/06/29 06:42:15 [debug] database = "cassandra"
2018/06/29 06:42:15 [debug] db_cache_ttl = 3600
2018/06/29 06:42:15 [debug] db_update_frequency = 5
2018/06/29 06:42:15 [debug] db_update_propagation = 0
2018/06/29 06:42:15 [debug] dns_error_ttl = 1
2018/06/29 06:42:15 [debug] dns_hostsfile = "/etc/hosts"
2018/06/29 06:42:15 [debug] dns_no_sync = false
2018/06/29 06:42:15 [debug] dns_not_found_ttl = 30
2018/06/29 06:42:15 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/29 06:42:15 [debug] dns_resolver = {}
2018/06/29 06:42:15 [debug] dns_stale_ttl = 4
2018/06/29 06:42:15 [debug] error_default_type = "text/plain"
2018/06/29 06:42:15 [debug] latency_tokens = true
2018/06/29 06:42:15 [debug] log_level = "notice"
2018/06/29 06:42:15 [debug] lua_package_cpath = ""
2018/06/29 06:42:15 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/29 06:42:15 [debug] lua_socket_pool_size = 30
2018/06/29 06:42:15 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/29 06:42:15 [debug] lua_ssl_verify_depth = 3
2018/06/29 06:42:15 [debug] mem_cache_size = "128m"
2018/06/29 06:42:15 [debug] nginx_daemon = "off"
2018/06/29 06:42:15 [debug] nginx_optimizations = true
2018/06/29 06:42:15 [debug] nginx_user = "nobody nobody"
2018/06/29 06:42:15 [debug] nginx_worker_processes = "auto"
2018/06/29 06:42:15 [debug] pg_ssl = false
2018/06/29 06:42:15 [debug] pg_ssl_verify = false
2018/06/29 06:42:15 [debug] prefix = "/usr/local/kong/"
2018/06/29 06:42:15 [debug] proxy_access_log = "logs/access.log"
2018/06/29 06:42:15 [debug] proxy_error_log = "logs/error.log"
2018/06/29 06:42:15 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/29 06:42:15 [debug] real_ip_header = "X-Real-IP"
2018/06/29 06:42:15 [debug] real_ip_recursive = "off"
2018/06/29 06:42:15 [debug] server_tokens = true
2018/06/29 06:42:15 [debug] ssl_cipher_suite = "modern"
2018/06/29 06:42:15 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/29 06:42:15 [debug] trusted_ips = {}
2018/06/29 06:42:15 [debug] upstream_keepalive = 60
2018/06/29 06:42:15 [verbose] prefix in use: /usr/local/kong
2018/06/29 06:42:15 [verbose] running datastore migrations
2018/06/29 06:42:15 [verbose] migrations up to date
So 0.13.1's migrations against the same C* host, under the same cloud pod environment settings, yielded a successful connection and just now validated that the keyspace was indeed up to date. My first thought is that maybe Kong has upgraded some of its TLS libraries under the hood (I believe I saw some dependency bumps a while back) for 0.14.0rc2. Maybe that change requires new flags to enable self-signed certificate use outside the common CAs imported into our browsers by default? Or could a change to OpenResty/nginx relate to this? Another thought is that maybe the new 0.14.0rc2 DB context just lacks some of the settings/flags that used to exist and helped enable TLS connections (much like it was lacking initialization in certain places for our prior problem). It just seems weird: if you have a truststore and reference it, I feel like apps should trust the truststore. This just adds to my dislike of cert management and the integration pains it causes hah.
@bungle @thibaultcha So we tested with openssl; Kong is very likely ignoring its own truststore in the latest iteration... the refactored code is not setting (or may be ignoring) the truststore (lua_ssl_trusted_certificate) set in the env variables.
Without the PEM
$ openssl s_client -connect server00471:9042 -debug
Certificate chain
0 s:/C=US/ST=Minnesota/L=Plymouth/O=UnitedHealth Group Inc./OU=Data Externalization/CN=server00471.uhc.com
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=OptumInternalIssuingCA2
1 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=OptumInternalIssuingCA2
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Internal Policy CA
2 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Internal Policy CA
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
3 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 5B36A318340617CB4D26DC784F5F1C773CACA2970CB86FD32CF5BCDB04A1300A
Session-ID-ctx:
Master-Key: A4CFE79D16F2A0F5987D83A246B02F99A65F47A6877BB674BB80329BDB0A33A8B10AAC67A3921CD687C9984A538ED140
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1530307352
Timeout : 300 (sec)
Verify return code: 19 (self signed certificate in certificate chain) <-- error 19 again when no PEM is given.
With the PEM
$ openssl s_client -connect server00471:9042 -debug -CAfile /usr/local/kong/ssl/kongcert.pem
Certificate chain
0 s:/C=US/ST=Minnesota/L=Plymouth/O=UnitedHealth Group Inc./OU=Data Externalization/CN=server00471.uhc.com
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=OptumInternalIssuingCA2
1 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=OptumInternalIssuingCA2
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Internal Policy CA
2 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Internal Policy CA
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
3 s:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
i:/C=US/ST=Minnesota/L=Minneapolis/O=Optum/CN=Optum Root CA
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 5B36A2DDA5F646C9054F6F32F37B658B99D31BB416961149CBDEF725D7E3756D
Session-ID-ctx:
Master-Key: 90030ABAFF78766510BFB0CFC07BD0395FE39AF2CB1BC1526E873622B5F0C870D9E9F5D290C590DFB858489138BE3879
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1530307293
Timeout : 300 (sec)
Verify return code: 0 (ok)
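For what it's worth, the handshake lua-cassandra performs here is a plain ngx cosocket handshake, and with verification enabled the cosocket only trusts what the nginx-level lua_ssl_trusted_certificate directive points at (with lua_ssl_verify_depth deep enough for the chain). A minimal sketch of that check, assuming those directives are present in whatever nginx config the process runs under:
-- assumes the enclosing nginx config contains:
--   lua_ssl_trusted_certificate /usr/local/kong/ssl/kongcert.pem;
--   lua_ssl_verify_depth 3;
local sock = ngx.socket.tcp()
assert(sock:connect("server00471", 9042))
local session, err = sock:sslhandshake(nil, "server00471", true)  -- verify = true
if not session then
  -- without the trusted certificate directive in the config the process runs under,
  -- this is where "19: self signed certificate in certificate chain" shows up
  ngx.log(ngx.ERR, "handshake failed: ", err)
end
So if the migrations command runs under an nginx config that omits those directives, the exact error above is what you would expect, even though the proxy itself is configured correctly.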
Thanks @jeremyjpj0916 for the prompt feedback. We are looking into this...
@jeremyjpj0916 Alright, I think I pinpointed the issue - it's an unfortunate mistake on our side... I should have a fix up in a few minutes.
@thibaultcha I don't know any other company on the planet that solves a bug within 9 minutes of review, so fantastic investigating there! Is rc3 on the way?
Nothing too complicated, I recall encountering the same issue some time ago, and a bit of archaeology (thanks to our nice commit message formats) pointed me to #2908, and I realized this fix is not in our new DAO (and it did not show up until now because our new DAO was never used by the migrations, until 0.14.0).
rc3 will be out soon indeed, especially since a few nice-to-haves popped up that should be part of 0.14.0:
@jeremyjpj0916 Here we go, 0.14.0rc3 is out, and the Docker Image is updated :)
As a bonus, it ships with a new configuration property db_resurrect_ttl, a fix in the Docker image (https://github.com/Kong/docker-kong/pull/166), and a Zipkin bugfix :smile:
@thibaultcha starting to wonder if internally you have the reputation as the guy who does not need sleep haha. Will certainly be saving this GitHub issue to show people later the value of open source and what it's like working with driven engineers who care about their products. Thanks again! Will test over the weekend as I get some time.
Will test over the weekend as I get some time.
That would be amazing! Thanks a lot (in advance)! We will also do some testing on our side starting Monday, especially with regards to the new cache behavior.
p.s. Was great to talk to you + Cooper, lets do it again sometime later down the road
Delighted to have met you too!
Can confirm working now!
2018/06/30 19:56:14 [verbose] Kong: 0.14.0rc3
2018/06/30 19:56:14 [debug] ngx_lua: 10013
2018/06/30 19:56:14 [debug] nginx: 1013006
2018/06/30 19:56:14 [debug] Lua: LuaJIT 2.1.0-beta3
2018/06/30 19:56:14 [verbose] no config file found at /etc/kong/kong.conf
2018/06/30 19:56:14 [verbose] no config file found at /etc/kong.conf
2018/06/30 19:56:14 [verbose] no config file, skipping loading
2018/06/30 19:56:14 [debug] reading environment variables
2018/06/30 19:56:14 [debug] KONG_LUA_SSL_VERIFY_DEPTH ENV found with "3"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_USERNAME ENV found with "*****"
2018/06/30 19:56:14 [debug] KONG_PG_USER ENV found with ""
2018/06/30 19:56:14 [debug] KONG_PG_PASSWORD ENV found with "******"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_PASSWORD ENV found with "******"
2018/06/30 19:56:14 [debug] KONG_PG_HOST ENV found with ""
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_SSL ENV found with "on"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_PORT ENV found with "9042"
2018/06/30 19:56:14 [debug] KONG_DATABASE ENV found with "cassandra"
2018/06/30 19:56:14 [debug] KONG_PG_DATABASE ENV found with ""
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_CONTACT_POINTS ENV found with "server00471"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_LOCAL_DATACENTER ENV found with "DC2"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_CONSISTENCY ENV found with "LOCAL_QUORUM"
2018/06/30 19:56:14 [debug] KONG_PG_SSL_VERIFY ENV found with ""
2018/06/30 19:56:14 [debug] KONG_PG_SSL ENV found with "off"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_REPL_STRATEGY ENV found with "NetworkTopologyStrategy"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_LB_POLICY ENV found with "DCAwareRoundRobin"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_SCHEMA_CONSENSUS_TIMEOUT ENV found with "120000"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_TIMEOUT ENV found with "20000"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_SSL_VERIFY ENV found with "on"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_DATA_CENTERS ENV found with "DC2:2,DC1:2"
2018/06/30 19:56:14 [debug] KONG_NGINX_DAEMON ENV found with "off"
2018/06/30 19:56:14 [warn] You are using Cassandra but your 'db_update_propagation' setting is set to '0' (default). Due to the distributed nature of Cassandra, you should increase this value.
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_REPL_FACTOR ENV found with "2"
2018/06/30 19:56:14 [debug] KONG_PG_PORT ENV found with ""
2018/06/30 19:56:14 [debug] KONG_LUA_SSL_TRUSTED_CERTIFICATE ENV found with "/usr/local/kong/ssl/kongcert.pem"
2018/06/30 19:56:14 [debug] KONG_CASSANDRA_KEYSPACE ENV found with "kong_dev"
2018/06/30 19:56:14 [debug] admin_access_log = "logs/admin_access.log"
2018/06/30 19:56:14 [debug] admin_error_log = "logs/error.log"
2018/06/30 19:56:14 [debug] admin_listen = {"127.0.0.1:8001","127.0.0.1:8444 ssl"}
2018/06/30 19:56:14 [debug] anonymous_reports = true
2018/06/30 19:56:14 [debug] cassandra_consistency = "LOCAL_QUORUM"
2018/06/30 19:56:14 [debug] cassandra_contact_points = {"server00471"}
2018/06/30 19:56:14 [debug] cassandra_data_centers = {"DC2:2","DC1:2"}
2018/06/30 19:56:14 [debug] cassandra_keyspace = "kong_dev"
2018/06/30 19:56:14 [debug] cassandra_lb_policy = "DCAwareRoundRobin"
2018/06/30 19:56:14 [debug] cassandra_local_datacenter = "DC2"
2018/06/30 19:56:14 [debug] cassandra_password = "******"
2018/06/30 19:56:14 [debug] cassandra_port = 9042
2018/06/30 19:56:14 [debug] cassandra_repl_factor = 2
2018/06/30 19:56:14 [debug] cassandra_repl_strategy = "NetworkTopologyStrategy"
2018/06/30 19:56:14 [debug] cassandra_schema_consensus_timeout = 120000
2018/06/30 19:56:14 [debug] cassandra_ssl = true
2018/06/30 19:56:14 [debug] cassandra_ssl_verify = true
2018/06/30 19:56:14 [debug] cassandra_timeout = 20000
2018/06/30 19:56:14 [debug] cassandra_username = "*****"
2018/06/30 19:56:14 [debug] client_body_buffer_size = "8k"
2018/06/30 19:56:14 [debug] client_max_body_size = "0"
2018/06/30 19:56:14 [debug] client_ssl = false
2018/06/30 19:56:14 [debug] custom_plugins = {}
2018/06/30 19:56:14 [debug] database = "cassandra"
2018/06/30 19:56:14 [debug] db_cache_ttl = 0
2018/06/30 19:56:14 [debug] db_resurrect_ttl = 30
2018/06/30 19:56:14 [debug] db_update_frequency = 5
2018/06/30 19:56:14 [debug] db_update_propagation = 0
2018/06/30 19:56:14 [debug] dns_error_ttl = 1
2018/06/30 19:56:14 [debug] dns_hostsfile = "/etc/hosts"
2018/06/30 19:56:14 [debug] dns_no_sync = false
2018/06/30 19:56:14 [debug] dns_not_found_ttl = 30
2018/06/30 19:56:14 [debug] dns_order = {"LAST","SRV","A","CNAME"}
2018/06/30 19:56:14 [debug] dns_resolver = {}
2018/06/30 19:56:14 [debug] dns_stale_ttl = 4
2018/06/30 19:56:14 [debug] error_default_type = "text/plain"
2018/06/30 19:56:14 [debug] headers = {"server_tokens","latency_tokens"}
2018/06/30 19:56:14 [debug] log_level = "notice"
2018/06/30 19:56:14 [debug] lua_package_cpath = ""
2018/06/30 19:56:14 [debug] lua_package_path = "./?.lua;./?/init.lua;"
2018/06/30 19:56:14 [debug] lua_socket_pool_size = 30
2018/06/30 19:56:14 [debug] lua_ssl_trusted_certificate = "/usr/local/kong/ssl/kongcert.pem"
2018/06/30 19:56:14 [debug] lua_ssl_verify_depth = 3
2018/06/30 19:56:14 [debug] mem_cache_size = "128m"
2018/06/30 19:56:14 [debug] nginx_admin_directives = {}
2018/06/30 19:56:14 [debug] nginx_daemon = "off"
2018/06/30 19:56:14 [debug] nginx_http_directives = {}
2018/06/30 19:56:14 [debug] nginx_optimizations = true
2018/06/30 19:56:14 [debug] nginx_proxy_directives = {}
2018/06/30 19:56:14 [debug] nginx_user = "nobody nobody"
2018/06/30 19:56:14 [debug] nginx_worker_processes = "auto"
2018/06/30 19:56:14 [debug] pg_ssl = false
2018/06/30 19:56:14 [debug] pg_ssl_verify = false
2018/06/30 19:56:14 [debug] plugins = {"bundled"}
2018/06/30 19:56:14 [debug] prefix = "/usr/local/kong/"
2018/06/30 19:56:14 [debug] proxy_access_log = "logs/access.log"
2018/06/30 19:56:14 [debug] proxy_error_log = "logs/error.log"
2018/06/30 19:56:14 [debug] proxy_listen = {"0.0.0.0:8000","0.0.0.0:8443 ssl"}
2018/06/30 19:56:14 [debug] real_ip_header = "X-Real-IP"
2018/06/30 19:56:14 [debug] real_ip_recursive = "off"
2018/06/30 19:56:14 [debug] ssl_cipher_suite = "modern"
2018/06/30 19:56:14 [debug] ssl_ciphers = "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256"
2018/06/30 19:56:14 [debug] trusted_ips = {}
2018/06/30 19:56:14 [debug] upstream_keepalive = 60
2018/06/30 19:56:14 [verbose] prefix in use: /usr/local/kong
2018/06/30 19:56:14 [verbose] running datastore migrations
2018/06/30 19:56:14 [info] migrating jwt for keyspace kong_dev
2018/06/30 19:56:15 [info] jwt migrated up to: 2018-03-15-150000_jwt_maximum_expiration
2018/06/30 19:56:15 [info] 1 migrations ran
2018/06/30 19:56:15 [info] waiting for Cassandra schema consensus (120000ms timeout)...
2018/06/30 19:56:15 [info] Cassandra schema consensus: reached
2018/06/30 19:56:15 [verbose] migrations up to date
Onward to the feature tests.
Gonna close this since all issues around migrations are resolved; I will post other bugs I find as separate issues so this does not turn into a monolith. Thanks all for the diligent work on bug fixes in this thread.
Summary
Decided to give migrations a shot in our DEV environment with a plain Kong image against our Cassandra 3.x cluster. Upgrading from a 0.13.1 C* db to 0.14.0rc1, the following is the exact output of the logs:
I have no custom template loaded during these migrations or anything; the way I do it is I stand up a running OpenShift "job" pod that just runs baseline Kong for the migrations. The keyspace I was running this against DID exist, as the logs mention.
Steps To Reproduce
Have an existing Kong cluster on 0.13.1 with Cassandra 3.x as the backend, with plenty of routes/services/plugins and such.
Stand up a fresh instance of Kong on another server with barebones 0.14.0rc1 and run a migrations call against your existing multi-DC C* database cluster running 0.13.1 Kong.
Maybe see a similar error?
Additional Details & Logs