koli / kong-ingress

[DEPRECATED] A Kubernetes Ingress for Kong

Support for http2/grpc #20

Closed remster closed 6 years ago

remster commented 6 years ago

I can see that Kong seemingly supports HTTP/2, but kong-ingress may be behind. Certainly my attempt at a gRPC connection yields: error => { Error: Trying to connect an http1.x server

Would that be a known issue?

sandromello commented 6 years ago

Hello remster, have you enabled HTTP/2 on Kong? You can check whether it's enabled via the Kong admin endpoint. In Kubernetes it's easy to enable a new configuration directive: add an environment variable turning HTTP/2 on:

(...)
- name: KONG_HTTP2
  value: "on"
(...)

Important: be careful; adding a new environment variable may restart all Kong pods under the deployment, depending on your configuration.
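
To confirm the flag took effect, query the Kong admin API root endpoint, which echoes the running configuration. A quick check (the admin host is a placeholder here; on 0.10 no http2 key will appear at all):

$ curl -s http://<kong-admin-host>:8001/ | grep -i http2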

remster commented 6 years ago

Thanks. As you've advised, I added it:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kong
  namespace: kong-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: kong
        app: kong
    spec:
      containers:
      - name: kong
        image: kong:0.11.0
        env:
          - name: KONG_LOG_LEVEL
            value: info
...
          - name: KONG_HTTP2
            value: "on"

With no obvious effect:

{
  "configuration": {
    "admin_ip": "0.0.0.0",
    "admin_listen": "0.0.0.0:8001",
    "admin_listen_ssl": "0.0.0.0:8444",
    "admin_port": 8001,
    "admin_ssl": true,
    "admin_ssl_cert": "/usr/local/kong/ssl/admin-kong-default.crt",
    "admin_ssl_cert_csr_default": "/usr/local/kong/ssl/admin-kong-default.csr",
    "admin_ssl_cert_default": "/usr/local/kong/ssl/admin-kong-default.crt",
    "admin_ssl_cert_key": "/usr/local/kong/ssl/admin-kong-default.key",
    "admin_ssl_cert_key_default": "/usr/local/kong/ssl/admin-kong-default.key",
    "admin_ssl_ip": "0.0.0.0",
    "admin_ssl_port": 8444,
    "anonymous_reports": true,
    "cassandra_consistency": "ONE",
    "cassandra_contact_points": [
      "127.0.0.1"
    ],
    "cassandra_data_centers": [
      "dc1:2",
      "dc2:3"
    ],
    "cassandra_keyspace": "kong",
    "cassandra_lb_policy": "RoundRobin",
    "cassandra_port": 9042,
    "cassandra_repl_factor": 1,
    "cassandra_repl_strategy": "SimpleStrategy",
    "cassandra_ssl": false,
    "cassandra_ssl_verify": false,
    "cassandra_timeout": 5000,
    "cassandra_username": "kong",
    "cluster_listen": "0.0.0.0:7946",
    "cluster_listen_rpc": "127.0.0.1:7373",
    "cluster_profile": "wan",
    "cluster_ttl_on_failure": 3600,
    "custom_plugins": {},
    "database": "postgres",
    "dns_hostsfile": "/etc/hosts",
    "dns_resolver": [
      "10.0.0.10"
    ],
    "kong_env": "/usr/local/kong/.kong_env",
    "log_level": "info",
    "lua_code_cache": "on",
    "lua_package_cpath": "",
    "lua_package_path": "?/init.lua;./kong/?.lua",
    "lua_socket_pool_size": 30,
    "lua_ssl_verify_depth": 1,
    "mem_cache_size": "128m",
    "nginx_acc_logs": "/usr/local/kong/logs/access.log",
    "nginx_admin_acc_logs": "/usr/local/kong/logs/admin_access.log",
    "nginx_conf": "/usr/local/kong/nginx.conf",
    "nginx_daemon": "off",
    "nginx_err_logs": "/usr/local/kong/logs/error.log",
    "nginx_kong_conf": "/usr/local/kong/nginx-kong.conf",
    "nginx_optimizations": true,
    "nginx_pid": "/usr/local/kong/pids/nginx.pid",
    "nginx_worker_processes": "auto",
    "pg_database": "kong",
    "pg_host": "postgres.kong-system.svc.cluster.local",
    "pg_password": "******",
    "pg_port": 5432,
    "pg_ssl": false,
    "pg_ssl_verify": false,
    "pg_user": "kong",
    "plugins": {
      "acl": true,
      "aws-lambda": true,
      "basic-auth": true,
      "bot-detection": true,
      "correlation-id": true,
      "cors": true,
      "datadog": true,
      "file-log": true,
      "galileo": true,
      "hmac-auth": true,
      "http-log": true,
      "ip-restriction": true,
      "jwt": true,
      "key-auth": true,
      "ldap-auth": true,
      "loggly": true,
      "oauth2": true,
      "rate-limiting": true,
      "request-size-limiting": true,
      "request-transformer": true,
      "response-ratelimiting": true,
      "response-transformer": true,
      "runscope": true,
      "statsd": true,
      "syslog": true,
      "tcp-log": true,
      "udp-log": true
    },
    "prefix": "/usr/local/kong",
    "proxy_ip": "0.0.0.0",
    "proxy_listen": "0.0.0.0:8000",
    "proxy_listen_ssl": "0.0.0.0:8443",
    "proxy_port": 8000,
    "proxy_ssl_ip": "0.0.0.0",
    "proxy_ssl_port": 8443,
    "serf_event": "/usr/local/kong/serf/serf_event.sh",
    "serf_log": "/usr/local/kong/logs/serf.log",
    "serf_node_id": "/usr/local/kong/serf/serf.id",
    "serf_path": "serf",
    "serf_pid": "/usr/local/kong/pids/serf.pid",
    "ssl": true,
    "ssl_cert": "/usr/local/kong/ssl/kong-default.crt",
    "ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
    "ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
    "ssl_cert_key": "/usr/local/kong/ssl/kong-default.key",
    "ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
    "upstream_keepalive": 60
  },
  "hostname": "kong-3277574881-bvcbn",
  "lua_version": "LuaJIT 2.1.0-beta2",
  "plugins": {
    "available_on_server": {
      "acl": true,
      "aws-lambda": true,
      "basic-auth": true,
      "bot-detection": true,
      "correlation-id": true,
      "cors": true,
      "datadog": true,
      "file-log": true,
      "galileo": true,
      "hmac-auth": true,
      "http-log": true,
      "ip-restriction": true,
      "jwt": true,
      "key-auth": true,
      "ldap-auth": true,
      "loggly": true,
      "oauth2": true,
      "rate-limiting": true,
      "request-size-limiting": true,
      "request-transformer": true,
      "response-ratelimiting": true,
      "response-transformer": true,
      "runscope": true,
      "statsd": true,
      "syslog": true,
      "tcp-log": true,
      "udp-log": true
    },
    "enabled_in_cluster": {}
  },
  "prng_seeds": {
    "pid: 76": 156351571423,
    "pid: 77": 177262087751
  },
  "tagline": "Welcome to kong",
  "timers": {
    "pending": 5,
    "running": 0
  },
  "version": "0.10.1"
}
sandromello commented 6 years ago

HTTP/2 is only supported in version 0.11+.
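
The running version is reported by the same admin API root endpoint pasted above, so a quick check (admin host is a placeholder) is:

$ curl -s http://<kong-admin-host>:8001/ | grep '"version"'

The dump above shows "version": "0.10.1", which is why KONG_HTTP2 had no effect.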

remster commented 6 years ago

Right, so I upgraded to 0.11 and got:

kong-3277705953-z4ttw   0/1   CrashLoopBackOff   4   2m

sandromello commented 6 years ago

You could check the logs or run a describe to see if there is any error:

kubectl logs kong-3277705953-z4ttw
kubectl describe po kong-3277705953-z4ttw
remster commented 6 years ago

prefix directory /usr/local/kong not found, trying to create it
2017/09/29 14:12:46 [warn] postgres database 'kong' is missing migration: (response-transformer) 2016-05-04-160000_resp_trans_schema_changes
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong.lua:149: [postgres error] the current database schema does not match this version of Kong. Please run `kong migrations up` to update/initialize the database schema. Be aware that Kong migrations should only run from a single node, and that nodes running migrations concurrently will conflict with each other and might corrupt your database schema!
stack traceback:
  [C]: in function 'assert'
  /usr/local/share/lua/5.1/kong.lua:149: in function 'init'
  init_by_lua:3: in main chunk

remster commented 6 years ago

As you can see, I am wrestling with this as I type. Sorry about the chaotic comms.

cainelli commented 6 years ago

You need to run the upgrade/migration process in order to use a new image. Take a look here.
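
If you only want the migrations without editing the deployment, a rough one-off sketch (image and Postgres settings assumed to match the deployment earlier in this thread; add KONG_PG_USER/KONG_PG_PASSWORD if yours differ from the defaults):

$ kubectl -n kong-system run kong-migrations --rm -i --restart=Never \
    --image=kong:0.11.0 \
    --env=KONG_DATABASE=postgres \
    --env=KONG_PG_HOST=postgres.kong-system.svc.cluster.local \
    -- kong migrations up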

sandromello commented 6 years ago

If you're running in a test environment it's easier to tear down your database and start it again, but you will lose all routes and configuration. Don't do that in your production environment!

remster commented 6 years ago

I haven't got any routes to lose; I am totally in the sandbox. Which makes me think that I shouldn't need to upgrade, as I have nothing to upgrade from. I am trying to install from scratch. I get that the schema changed from 0.10 to 0.11, but I do not care about 0.10. So where is the fresh schema for 0.11? Why am I not getting it? I am installing on a virgin minikube.

sandromello commented 6 years ago

You started Kong with version 0.10, so your schema is persisted at that version. In a sandbox it's easier to tear down the postgres database (it's managed by a replica set: kubectl get rs) and start a new one, then restart the Kong pod (delete it).
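
A minimal sketch of that, assuming the pods carry the name=postgres and name=kong labels used elsewhere in this thread:

$ kubectl -n kong-system delete po -l name=postgres   # if its data lives in an emptyDir, it comes back empty
$ kubectl -n kong-system delete po -l name=kong       # restarts against the fresh database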

remster commented 6 years ago

I am pretty sure I wipe my minikube clean before every attempt: no pods, no virtual machine. I must be missing a spot, or Kong 0.11 somehow uses an old schema.

We're trying to select an ingress controller based on its support for gRPC. I am basically playing with everything I can find.

[remek][~/Projects/x/k8s-ingress-example][master*]$ kubectl get rs -n kong-system
NAME                      DESIRED   CURRENT   READY     AGE
kong-1726600002           0         0         0         27m
kong-3277705953           1         1         0         27m
kong-ingress-1477516618   1         1         0         27m
[remek][~/Projects/x/k8s-ingress-example][master*]$ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
[remek][~/Projects/x/k8s-ingress-example][master*]$ minikube start
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
[remek][~/Projects/x/k8s-ingress-example][master*]$ kubectl get rs -n kong-system
No resources found.
[remek][~/Projects/x/k8s-ingress-example][master*]$ 
cainelli commented 6 years ago

Apparently, running the schema migrations is required even for fresh installations on 0.11.

https://github.com/Mashape/kong-dist-kubernetes/tree/master/minikube

  1. Prepare database

Using the kong_migration_<postgres|cassandra>.yaml file from this repo, run the migration job; jump to step 5 if the Kong backing database is up-to-date:

$ kubectl create -f kong_migration_<postgres|cassandra>.yaml

Once the job completes, you can remove the pod by running the following command:

$ kubectl delete -f kong_migration_<postgres|cassandra>.yaml

I'll make sure to point that out in our documentation.

cainelli commented 6 years ago

Create a file kong-migration.yaml with the following content:

apiVersion: batch/v1
kind: Job
metadata:
  namespace: kong-system
  name: kong-migration
spec:
  template:
    metadata:
      name: kong-migration
    spec:
      containers:
      - name: kong-migration
        image: kong:0.11.0  # assumption: pin the migration image to the version the deployment runs
        env:
          - name: KONG_NGINX_DAEMON
            value: 'off'
          - name: KONG_DATABASE
            value: postgres
          - name: KONG_PG_USER
            value: kong
          - name: KONG_PG_DATABASE
            value: kong
          - name: KONG_PG_PASSWORD
            value: kong
          - name: KONG_PG_HOST
            value: postgres.kong-system.svc.cluster.local
        command: [ "/bin/sh", "-c", "kong migrations up" ]
      restartPolicy: Never

Then run the following command:

$ kubectl create -f kong-migration.yaml

Then delete the kong pod that is stuck in the Error state:

$ kubectl -n kong-system get po -w
NAME                          READY     STATUS    RESTARTS   AGE
kong-1026425068-zrvnw         0/1       Error     0          24m
kong-system-127110217-zng68   1/1       Running   0          24m
postgres-626d8                1/1       Running   0          24m
$ kubectl -n kong-system delete po kong-1026425068-zrvnw

You should be able to see everything working now:

$ kubectl -n kong-system get po -w
NAME                          READY     STATUS    RESTARTS   AGE
kong-1026425068-bk31k         1/1       Running   0          5s
kong-system-127110217-zng68   1/1       Running   0          24m
postgres-626d8                1/1       Running   0          24m

Once everything is running you can delete the migration job:

$ kubectl delete -f kong-migration.yaml

UPDATE

Added instructions here

remster commented 6 years ago

I've run these instructions, I think successfully; my Kong now seemingly does HTTP/2:

{
  "configuration": {
    "admin_access_log": "logs/admin_access.log",
    "admin_error_log": "logs/error.log",
    "admin_http2": false,
    "admin_ip": "0.0.0.0",
    "admin_listen": "0.0.0.0:8001",
    "admin_listen_ssl": "0.0.0.0:8444",
    "admin_port": 8001,
    "admin_ssl": true,
    "admin_ssl_cert": "/usr/local/kong/ssl/admin-kong-default.crt",
    "admin_ssl_cert_csr_default": "/usr/local/kong/ssl/admin-kong-default.csr",
    "admin_ssl_cert_default": "/usr/local/kong/ssl/admin-kong-default.crt",
    "admin_ssl_cert_key": "/usr/local/kong/ssl/admin-kong-default.key",
    "admin_ssl_cert_key_default": "/usr/local/kong/ssl/admin-kong-default.key",
    "admin_ssl_ip": "0.0.0.0",
    "admin_ssl_port": 8444,
    "anonymous_reports": true,
    "cassandra_consistency": "ONE",
    "cassandra_contact_points": [
      "127.0.0.1"
    ],
    "cassandra_data_centers": [
      "dc1:2",
      "dc2:3"
    ],
    "cassandra_keyspace": "kong",
    "cassandra_lb_policy": "RoundRobin",
    "cassandra_port": 9042,
    "cassandra_repl_factor": 1,
    "cassandra_repl_strategy": "SimpleStrategy",
    "cassandra_schema_consensus_timeout": 10000,
    "cassandra_ssl": false,
    "cassandra_ssl_verify": false,
    "cassandra_timeout": 5000,
    "cassandra_username": "kong",
    "client_body_buffer_size": "8k",
    "client_max_body_size": "0",
    "client_ssl": false,
    "client_ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
    "client_ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
    "client_ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
    "custom_plugins": {},
    "database": "postgres",
    "db_cache_ttl": 3600,
    "db_update_frequency": 5,
    "db_update_propagation": 0,
    "dns_error_ttl": 1,
    "dns_hostsfile": "/etc/hosts",
    "dns_no_sync": false,
    "dns_not_found_ttl": 30,
    "dns_order": [
      "LAST",
      "SRV",
      "A",
      "CNAME"
    ],
    "dns_resolver": [
      "10.0.0.10"
    ],
    "dns_stale_ttl": 4,
    "error_default_type": "text/plain",
    "http2": true,
    "kong_env": "/usr/local/kong/.kong_env",
    "latency_tokens": true,
    "log_level": "info",
    "lua_code_cache": "on",
    "lua_package_cpath": "",
    "lua_package_path": "?/init.lua;./kong/?.lua",
    "lua_socket_pool_size": 30,
    "lua_ssl_verify_depth": 1,
    "mem_cache_size": "128m",
    "nginx_acc_logs": "/usr/local/kong/logs/access.log",
    "nginx_admin_acc_logs": "/usr/local/kong/logs/admin_access.log",
    "nginx_conf": "/usr/local/kong/nginx.conf",
    "nginx_daemon": "off",
    "nginx_err_logs": "/usr/local/kong/logs/error.log",
    "nginx_kong_conf": "/usr/local/kong/nginx-kong.conf",
    "nginx_optimizations": true,
    "nginx_pid": "/usr/local/kong/pids/nginx.pid",
    "nginx_worker_processes": "auto",
    "pg_database": "kong",
    "pg_host": "postgres.kong-system.svc.cluster.local",
    "pg_password": "******",
    "pg_port": 5432,
    "pg_ssl": false,
    "pg_ssl_verify": false,
    "pg_user": "kong",
    "plugins": {
      "acl": true,
      "aws-lambda": true,
      "basic-auth": true,
      "bot-detection": true,
      "correlation-id": true,
      "cors": true,
      "datadog": true,
      "file-log": true,
      "galileo": true,
      "hmac-auth": true,
      "http-log": true,
      "ip-restriction": true,
      "jwt": true,
      "key-auth": true,
      "ldap-auth": true,
      "loggly": true,
      "oauth2": true,
      "rate-limiting": true,
      "request-size-limiting": true,
      "request-termination": true,
      "request-transformer": true,
      "response-ratelimiting": true,
      "response-transformer": true,
      "runscope": true,
      "statsd": true,
      "syslog": true,
      "tcp-log": true,
      "udp-log": true
    },
    "prefix": "/usr/local/kong",
    "proxy_access_log": "logs/access.log",
    "proxy_error_log": "logs/error.log",
    "proxy_ip": "0.0.0.0",
    "proxy_listen": "0.0.0.0:8000",
    "proxy_listen_ssl": "0.0.0.0:8443",
    "proxy_port": 8000,
    "proxy_ssl_ip": "0.0.0.0",
    "proxy_ssl_port": 8443,
    "real_ip_header": "X-Real-IP",
    "real_ip_recursive": "off",
    "server_tokens": true,
    "ssl": true,
    "ssl_cert": "/usr/local/kong/ssl/kong-default.crt",
    "ssl_cert_csr_default": "/usr/local/kong/ssl/kong-default.csr",
    "ssl_cert_default": "/usr/local/kong/ssl/kong-default.crt",
    "ssl_cert_key": "/usr/local/kong/ssl/kong-default.key",
    "ssl_cert_key_default": "/usr/local/kong/ssl/kong-default.key",
    "ssl_cipher_suite": "modern",
    "ssl_ciphers": "ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256",
    "trusted_ips": {},
    "upstream_keepalive": 60
  },
  "hostname": "kong-3498955395-1hsr0",
  "lua_version": "LuaJIT 2.1.0-beta2",
  "plugins": {
    "available_on_server": {
      "acl": true,
      "aws-lambda": true,
      "basic-auth": true,
      "bot-detection": true,
      "correlation-id": true,
      "cors": true,
      "datadog": true,
      "file-log": true,
      "galileo": true,
      "hmac-auth": true,
      "http-log": true,
      "ip-restriction": true,
      "jwt": true,
      "key-auth": true,
      "ldap-auth": true,
      "loggly": true,
      "oauth2": true,
      "rate-limiting": true,
      "request-size-limiting": true,
      "request-termination": true,
      "request-transformer": true,
      "response-ratelimiting": true,
      "response-transformer": true,
      "runscope": true,
      "statsd": true,
      "syslog": true,
      "tcp-log": true,
      "udp-log": true
    },
    "enabled_in_cluster": {}
  },
  "prng_seeds": {
    "pid: 47": 204719414651,
    "pid: 48": 731191861371
  },
  "tagline": "Welcome to kong",
  "timers": {
    "pending": 5,
    "running": 0
  },
  "version": "0.11.0"
}

My ingress resource points at the gRPC server:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-echo-routes
  annotations:
    kolihub.io/airmap.k8s: primary
    ingress.kubernetes.io/ssl-passthrough: "true"
spec:
  rules:
  - host: airmap.k8s
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-service-target-port-pyreneyes
          servicePort: 9911

and the client complains, as it did before:

establishing connection to airmap.k8s:8000
error => { Error: Trying to connect an http1.x server

I tried exposing the 8433 port (ssl-proxy) instead of 8000, and I see what looks like a cert problem on the client side; my TLS expertise runs thin from there on. Kong doesn't seem to have any SSL certs:

$ curl 172.17.0.7:8001/certificates                      
{"data":[],"total":0}

I did this because of this section of the Kong configuration reference:

http2
Enables HTTP2 support for HTTPS traffic on the proxy_listen_ssl address.
Default: off
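
One way to verify whether HTTP/2 is actually negotiated on that listener is to check the ALPN result with curl (minikube IP and host name assumed from this thread; your curl needs to be built with HTTP/2 support):

$ curl -skv --http2 https://192.168.99.100:8443/ -H 'Host: airmap.k8s' 2>&1 | grep -i alpn

If HTTP/2 is enabled, the output should include a line like "ALPN, server accepted to use h2".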
sandromello commented 6 years ago

I think you need to add a certificate specifying an SNI in Kong, because HTTP/2 is only enabled for HTTPS traffic. I can tell you in advance that HTTP/2 is working properly in our environment. Try a small proof of concept of what you're trying to accomplish with Kong; then you will understand the basics and can move on to a more automated solution.

Let's Encrypt is a good option if you want to test this kind of scenario. Starting with a Docker installation, it's very easy to set up a new Kong environment.
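
For the certificate step, the 0.11 admin API has a certificates endpoint; a hedged sketch with placeholder file names and SNI:

$ curl -i -X POST http://<kong-admin-host>:8001/certificates \
    -F cert=@cert.pem \
    -F key=@key.pem \
    -F snis=airmap.k8s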

remster commented 6 years ago

Ploughing further, I generated self-signed certs for my domain/SNI and added them to Kong via https://getkong.org/docs/0.11.x/admin-api/#add-certificate

My gRPC client now says: certificate verify failed.

A small proof of concept is what I've been doing all along. Thanks for the ongoing support. Running out of ideas...

sandromello commented 6 years ago

If you're using a self-signed cert you need to use it in your client. I think the CA is enough to establish a connection without error. Kubernetes uses self-signed certs; here's how it uses an in-cluster configuration (when a client runs inside a pod) with the root CA certificate.

remster commented 6 years ago

Yes, I figured that much (needing to use the .crt file in the client). This is where it fails to verify it.

sandromello commented 6 years ago

The CA is the certificate you will need to use in your client. Kubernetes uses self-signed certificates; each node needs to communicate with the api-server in a secure manner, and here's the documentation on how to generate the certificates that the clients (nodes/kubelet) use to contact the api-server.

This guide may help you troubleshoot whether you're using the proper certificates.
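
For the "certificate verify failed" error specifically, openssl's test client can show whether the chain verifies against the CA file handed to the gRPC client (IP, port, and SNI assumed from this thread):

$ openssl s_client -connect 192.168.99.100:8443 -servername airmap.k8s \
    -CAfile ca.pem </dev/null | grep 'Verify return'

A healthy setup prints "Verify return code: 0 (ok)"; anything else points at a cert/SNI mismatch.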

remster commented 6 years ago

Not sure what you mean by the api-server. My impression has been that it's the kong-ingress controller (which I am trying to set up) that would terminate SSL; in other words, I cannot see how Kubernetes has any business with certs here, whereas I can see how Kong does.

BTW: what does this mean?

E1003 13:29:22.483390 1 utils.go:96] Requeuing[8] default/kong-echo-routes, err: failed claiming domain http.airmap.k8s, check its state!
I1003 13:29:22.483587 1 event.go:217] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"kong-echo-routes", UID:"adc1274a-a83e-11e7-82d0-08002790692e", APIVersion:"extensions", ResourceVersion:"1026", FieldPath:""}): type: 'Warning' reason: 'DomainNotFound' The domain 'http.airmap.k8s' wasn't claimed, check its state

My domain is in my /etc/hosts file. My ingress resource is as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-echo-routes
  annotations:
    kolihub.io/airmap.k8s: primary
spec:
  tls:
    - hosts:
      - http.airmap.k8s
      - grpc.airmap.k8s
      secretName: kong-tls-cert
  rules:
  - host: grpc.airmap.k8s
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-service-target-port-pyreneyes
          servicePort: 9911
  - host: http.airmap.k8s
    http:
      paths:
      - path: /
        backend:
          serviceName: echo-service-target-port-pyreneyes
          servicePort: 2000

It seems the annotation kolihub.io/airmap.k8s: primary has a meaning, but I can't work out what it is.

remster commented 6 years ago

I also can't connect to the ssl-proxy; my services are:

Name:           kong-admin-proxy
Namespace:      default
Labels:         app=kong
            name=kong
Annotations:        <none>
Selector:       app=kong,name=kong
Type:           ClusterIP
IP:         10.0.0.26
External IPs:       192.168.99.100
Port:           <unset> 8001/TCP
Endpoints:      172.17.0.9:8001
Session Affinity:   None
Events:         <none>

Name:           kong-proxy
Namespace:      default
Labels:         app=kong
            name=kong
Annotations:        <none>
Selector:       app=kong,name=kong
Type:           ClusterIP
IP:         10.0.0.185
External IPs:       192.168.99.100
Port:           <unset> 8000/TCP
Endpoints:      172.17.0.9:8000
Session Affinity:   None
Events:         <none>

Name:           kong-ssl-proxy
Namespace:      default
Labels:         app=kong
            name=kong
Annotations:        <none>
Selector:       app=kong,name=kong
Type:           ClusterIP
IP:         10.0.0.100
External IPs:       192.168.99.100
Port:           <unset> 8433/TCP
Endpoints:      172.17.0.9:8433
Session Affinity:   None
Events:         <none>

8000 is reachable, and so is 8001, but not 8433.

sandromello commented 6 years ago

If you define your host in the ingress as http.airmap.k8s you need to set the annotation kolihub.io/http.airmap.k8s: primary, not kolihub.io/airmap.k8s: primary. You can read more about Auto Claim Mode.
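
For example, assuming one annotation per host declared in the ingress above, the fix could be applied with:

$ kubectl annotate ingress kong-echo-routes \
    kolihub.io/http.airmap.k8s=primary \
    kolihub.io/grpc.airmap.k8s=primary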

The tls option is not implemented; the ingress controller only creates routes and controls the lease of domains. We plan to support other kinds of features in the future.

remster commented 6 years ago

When you say "the tls option is not implemented", does that mean that I don't stand a chance of opening an SSL socket against the kong-ssl-proxy until it is? And when I say kong-ssl-proxy I mean this:

kubectl expose deployment kong --name kong-ssl-proxy --external-ip=$(minikube ip) --port 8433 --target-port 8433
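
(Worth noting: the configuration dump above shows proxy_listen_ssl on 0.0.0.0:8443 rather than 8433, so reaching Kong's SSL listener would presumably require:

kubectl expose deployment kong --name kong-ssl-proxy --external-ip=$(minikube ip) --port 8443 --target-port 8443)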

Would that also mean that there is no point playing with certificates?

sandromello commented 6 years ago

So far this project only manages routes and controls the lease of domains. If you want to use other Kong features you need to configure them yourself (through the Kong admin API).
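
For reference, a minimal manual registration against the 0.11 admin API (/apis in that version; names and upstream URL assumed from this thread) would look roughly like:

$ curl -i -X POST http://<kong-admin-host>:8001/apis \
    -d name=echo-grpc \
    -d hosts=grpc.airmap.k8s \
    -d upstream_url=http://echo-service-target-port-pyreneyes.default.svc.cluster.local:9911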

remster commented 6 years ago

I understand that, @sandromello, and I am grateful for your ongoing help. I just find it all quite opaque and difficult to inspect, which I am sure is down to me only just starting with Kubernetes.

Can you explain what you mean by "the tls option is not implemented"? I need to work out why the 8433 port isn't exposed off of my minikube: is it because I don't expose it correctly, or because Kong (in its pod) isn't even listening?

sandromello commented 6 years ago

I think you need to start with baby steps:

References:

Your question goes beyond the scope of this project; you need to understand those concepts to see how it can help you.

remster commented 6 years ago

Well, that's exactly what I've been doing, and in the same chronology. Thanks.

sandromello commented 6 years ago

I'll close this issue for now. You can contact me on Slack in the #kubernetes-user channel as @sandro if you're still having problems.