Strange, EOF indicates a network error.
Can you share Kibana logs if there are any?
What happens if you SSH into a Gateway pod and try to cURL the Kibana endpoint? You may also deploy a busybox or alternative network debug container into your cluster which will have more commands installed than the Gateway's runtime.
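For example, something like this could work as a one-off check from inside the cluster (a rough sketch, assuming the Kibana service name/namespace used later in this thread; nicolaka/netshoot is one commonly used network debug image):

# Start a disposable debug pod and curl the Kibana service from inside
# the cluster; --rm deletes the pod once the command exits.
kubectl run net-debug --rm -it --restart=Never --image=nicolaka/netshoot -- \
  curl -kv https://kibana-kb-http.eck.svc.cluster.local:5601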
Thanks for your fast feedback, @sedkis. I attached the Kibana logs. When I attempt to connect to the Kibana svc from inside K8s, here is the result:
bash-5.1# curl -k https://kibana-kb-http.eck.svc.cluster.local:5601 -v
* Trying 10.100.21.248:5601...
* Connected to kibana-kb-http.eck.svc.cluster.local (10.100.21.248) port 5601 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=kibana; CN=kibana-kb-http.eck.kb.local
* start date: Oct 12 15:47:25 2021 GMT
* expire date: Oct 12 15:57:25 2022 GMT
* issuer: OU=kibana; CN=kibana-http
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> GET / HTTP/1.1
> Host: kibana-kb-http.eck.svc.cluster.local:5601
> User-Agent: curl/7.78.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< location: /login?next=%2F
< x-content-type-options: nosniff
< referrer-policy: no-referrer-when-downgrade
< kbn-name: kibana
< kbn-license-sig: 5c7f5bad0c1b4fe8d1a64d6fa4f2c79ad94fd5eff96258464dd653e10b802151
< cache-control: private, no-cache, no-store, must-revalidate
< content-length: 0
< Date: Thu, 21 Oct 2021 20:36:53 GMT
< Connection: keep-alive
< Keep-Alive: timeout=120
<
* Connection #0 to host kibana-kb-http.eck.svc.cluster.local left intact
bash-5.1# curl -k 'Authorization:8x7V1uqmP4Hop20g0Y1Dj72b' https://kibana-kb-http.eck.svc.cluster.local:5601 -v
* Closing connection -1
curl: (3) URL using bad/illegal format or missing URL
* Trying 10.100.21.248:5601...
* Connected to kibana-kb-http.eck.svc.cluster.local (10.100.21.248) port 5601 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
* CApath: none
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use http/1.1
* Server certificate:
* subject: OU=kibana; CN=kibana-kb-http.eck.kb.local
* start date: Oct 12 15:47:25 2021 GMT
* expire date: Oct 12 15:57:25 2022 GMT
* issuer: OU=kibana; CN=kibana-http
* SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
> GET / HTTP/1.1
> Host: kibana-kb-http.eck.svc.cluster.local:5601
> User-Agent: curl/7.78.0
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Mark bundle as not supporting multiuse
< HTTP/1.1 302 Found
< location: /login?next=%2F
< x-content-type-options: nosniff
< referrer-policy: no-referrer-when-downgrade
< kbn-name: kibana
< kbn-license-sig: 5c7f5bad0c1b4fe8d1a64d6fa4f2c79ad94fd5eff96258464dd653e10b802151
< cache-control: private, no-cache, no-store, must-revalidate
< content-length: 0
< Date: Thu, 21 Oct 2021 20:44:04 GMT
< Connection: keep-alive
< Keep-Alive: timeout=120
<
* Connection #0 to host kibana-kb-http.eck.svc.cluster.local left intact
As far as I understand, since SSL is terminated at the ALB, the connection is HTTP, but the Kibana service expects/listens on HTTPS, even though Kibana's address is defined as https in the ApiDefinition file.
Something strange is going on. I would expect your curl to receive HTML from the Kibana server, which would be the front-end app.
There are no problems accessing an HTTPS svc from inside K8s.
Because of Kibana's auth mechanism, we get a 302 instead of a 200; that's not so important. Here is another example, using the Kubernetes Dashboard:
bash-5.1# curl -k -H 'Authorization: fc7e976d130casdfasdfasdblablabla' https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png">
<meta name="viewport" content="width=device-width">
<style>body,html{height:100%;margin:0;}</style><link rel="stylesheet" href="styles.f66c655a05a456ae30f8.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.f66c655a05a456ae30f8.css"></noscript></head>
<body>
<kd-root></kd-root>
<script src="runtime.fb7fb9bb628f2208f9e9.js" defer></script><script src="polyfills.49b2d5227916caf47237.js" defer></script><script src="scripts.72d8a72221658f3278d3.js" defer></script><script src="en.main.0bf75cd6c71fc0efa001.js" defer></script>
Ingress and ApiDefinition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: tyk
    tyk.io/template: dashboard
spec:
  rules:
    - host: mysubdomain.mydomain.com
      http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
---
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  labels:
    template: "true"
spec:
  name: dashboard-basit1
  protocol: http
  listen_port: 80
  use_keyless: true
  active: true
  proxy:
    target_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
    listen_path: /dashboard
    strip_listen_path: true
  version_data:
    default_version: Default
    not_versioned: true
    versions:
      Default:
        name: Default
        paths:
          black_list: []
          ignored: []
          white_list: []
        global_headers_remove:
          - Authorization
        global_headers:
          Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6blablablabla
curl https://mysubdomain.mydomain.com/dashboard
Client sent an HTTP request to an HTTPS server.
logs are here:
time="Oct 22 07:06:16" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=VersionCheck org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634886376864370997
time="Oct 22 07:06:16" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=VersionCheck ns=50189 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:06:16" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=RateCheckMW org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634886376864442499
time="Oct 22 07:06:16" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=RateCheckMW ns=23168 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:06:16" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634886376864509526
time="Oct 22 07:06:16" level=debug msg="Removing: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:06:16" level=debug msg="Adding: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:06:16" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=TransformHeaders ns=33052 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:06:16" level=debug msg="Started proxy"
time="Oct 22 07:06:16" level=debug msg="Stripping: /dashboard"
time="Oct 22 07:06:16" level=debug msg="Upstream Path is: "
time="Oct 22 07:06:16" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku ts=1634886376864576138
time="Oct 22 07:06:16" level=debug msg="Upstream request URL: " api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:06:16" level=debug msg="Outbound request URL: http://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:06:16" level=error msg="http: proxy error during body copy: read tcp 192.168.27.153:47104->10.100.155.38:443: read: connection reset by peer" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku prefix=proxy
time="Oct 22 07:06:16" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy ns=11768808 org_id=ku
time="Oct 22 07:06:16" level=debug msg="Upstream request took (ms): 11.782367"
time="Oct 22 07:06:16" level=debug msg="Done proxy"
time="Oct 22 07:06:21" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
Hello @tirelibirefe, can you please change the proxy part of your ApiDefinition to look like the snippet below and see if that helps solve the issue?
proxy:
  target_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
  listen_path: /dashboard
  strip_listen_path: true
  preserve_host_header: true
  transport:
    proxy_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
    ssl_insecure_skip_verify: true
    ssl_force_common_name_check: false
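For reference, one way to roll this out and re-test (a sketch; dashboard-apidefinition.yaml is a hypothetical local filename for the full manifest):

# Re-apply the ApiDefinition with the updated proxy/transport block,
# then hit the route through the gateway again.
kubectl apply -n kubernetes-dashboard -f dashboard-apidefinition.yaml
curl -v https://mysubdomain.mydomain.com/dashboard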
@cherrymu that was the best progress in weeks. It's almost done, but there is still something missing. Here's what changed:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: tyk
    tyk.io/template: dashboard
spec:
  rules:
    - host: mysubdomain.mydomain.com
      http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 443
---
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  labels:
    template: "true"
spec:
  name: dashboard-basit1
  protocol: http
  listen_port: 80
  use_keyless: true
  active: true
  proxy:
    target_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
    listen_path: /dashboard
    strip_listen_path: true
    preserve_host_header: true
    transport:
      proxy_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
      ssl_insecure_skip_verify: true
      ssl_force_common_name_check: false
  version_data:
    default_version: Default
    not_versioned: true
    versions:
      Default:
        name: Default
        paths:
          black_list: []
          ignored: []
          white_list: []
        global_headers_remove:
          - Authorization
        global_headers:
          Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6InNlaWJRUkVqUHFuaHblablablablablabla
curl
curl https://mysubdomain.mydomain.com/dashboard
<!--
Copyright 2017 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
--><!DOCTYPE html><html lang="en"><head>
<meta charset="utf-8">
<title>Kubernetes Dashboard</title>
<link rel="icon" type="image/png" href="assets/images/kubernetes-logo.png">
<meta name="viewport" content="width=device-width">
<style>body,html{height:100%;margin:0;}</style><link rel="stylesheet" href="styles.f66c655a05a456ae30f8.css" media="print" onload="this.media='all'"><noscript><link rel="stylesheet" href="styles.f66c655a05a456ae30f8.css"></noscript></head>
<body>
<kd-root></kd-root>
<script src="runtime.fb7fb9bb628f2208f9e9.js" defer></script><script src="polyfills.49b2d5227916caf47237.js" defer></script><script src="scripts.72d8a72221658f3278d3.js" defer></script><script src="en.main.0bf75cd6c71fc0efa001.js" defer></script>
</body></html>
gw logs
time="Oct 22 07:55:31" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 07:55:41" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 07:55:48" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=VersionCheck org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634889348157181454
time="Oct 22 07:55:48" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=VersionCheck ns=81527 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:55:48" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=RateCheckMW org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634889348157295415
time="Oct 22 07:55:48" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=RateCheckMW ns=19134 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:55:48" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard" ts=1634889348157365480
time="Oct 22 07:55:48" level=debug msg="Removing: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:55:48" level=debug msg="Adding: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:55:48" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=TransformHeaders ns=36821 org_id=ku origin=88.255.99.51 path="/dashboard"
time="Oct 22 07:55:48" level=debug msg="Started proxy"
time="Oct 22 07:55:48" level=debug msg="Stripping: /dashboard"
time="Oct 22 07:55:48" level=debug msg="Upstream Path is: "
time="Oct 22 07:55:48" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku ts=1634889348157434782
time="Oct 22 07:55:48" level=debug msg="Detected proxy: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:55:48" level=debug msg="Creating new transport" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:55:48" level=debug msg="Upstream request URL: " api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:55:48" level=debug msg="Outbound request URL: http://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 07:55:48" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy ns=36140391 org_id=ku
time="Oct 22 07:55:48" level=debug msg="Upstream request took (ms): 36.178934"
time="Oct 22 07:55:48" level=debug msg="Done proxy"
time="Oct 22 07:55:51" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 07:56:01" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 07:56:11" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 07:56:18" level=debug msg=Started api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=VersionCheck org_id=ku origin=88.255.99.51 path="/dasboard" ts=1634889378265616781
time="Oct 22 07:56:18" level=debug msg=Finished api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 code=200 mw=VersionCheck ns=45201 org_id=ku origin=88.255.99.51 path="/dasboard"
time="Oct 22 07:56:18" level=debug msg=Started api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=RateCheckMW org_id=ku origin=88.255.99.51 path="/dasboard" ts=1634889378265678721
time="Oct 22 07:56:18" level=debug msg=Finished api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 code=200 mw=RateCheckMW ns=15326 org_id=ku origin=88.255.99.51 path="/dasboard"
time="Oct 22 07:56:18" level=debug msg="Started proxy"
time="Oct 22 07:56:18" level=debug msg="Stripping: /"
time="Oct 22 07:56:18" level=debug msg="Upstream Path is: dasboard"
time="Oct 22 07:56:18" level=debug msg=Started api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=ReverseProxy org_id=ku ts=1634889378265722777
time="Oct 22 07:56:18" level=debug msg="Creating new transport" api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=ReverseProxy org_id=ku
time="Oct 22 07:56:18" level=debug msg="Upstream request URL: dasboard" api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=ReverseProxy org_id=ku
time="Oct 22 07:56:18" level=debug msg="Outbound request URL: http://nginxhello-default.demo-tyk.svc.cluster.local:80/dasboard" api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=ReverseProxy org_id=ku
time="Oct 22 07:56:18" level=debug msg=Finished api_id=ZGVtby10eWsvZGVtby10eWstcmVkLWluZ3Jlc3MtMGE3MDc5Yjg5 api_name=demo-tyk-red-ingress-0a7079b89 mw=ReverseProxy ns=11392258 org_id=ku
time="Oct 22 07:56:18" level=debug msg="Upstream request took (ms): 11.429478"
time="Oct 22 07:56:18" level=debug msg="Done proxy"
time="Oct 22 07:56:21" level=debug msg="Primary ins
P.S. red-ingress is the default backend.
Hello @tirelibirefe, it's good to see that the gateway is processing the connection now. It's weird that your curl output shows a response with the Kubernetes Dashboard title page, while hitting the same URL from a browser says "not found"; that shouldn't be the case. Please check again whether you are hitting the right url:port from the browser, so the traffic is routed to the backend service.
What is the difference between these two?
apidefinition.proxy.target_url
apidefinition.proxy.transport.proxy_url
The browser 404 was a DNS issue; please ignore the 404, I fixed it.
The correct browser output is an "empty page".
logs are here:
Please help me understand this line:
time="Oct 22 13:11:18" level=debug msg="Outbound request URL: http://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443"
Where does http://...:443 come from? It is https://...:443 in the YAML file.
time="Oct 22 13:11:11" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
time="Oct 22 13:11:18" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=VersionCheck org_id=ku origin=77.111.244.30 path="/dashboard" ts=1634908278018590919
time="Oct 22 13:11:18" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=VersionCheck ns=44606 org_id=ku origin=77.111.244.30 path="/dashboard"
time="Oct 22 13:11:18" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=RateCheckMW org_id=ku origin=77.111.244.30 path="/dashboard" ts=1634908278018655562
time="Oct 22 13:11:18" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=RateCheckMW ns=18534 org_id=ku origin=77.111.244.30 path="/dashboard"
time="Oct 22 13:11:18" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=77.111.244.30 path="/dashboard" ts=1634908278018714858
time="Oct 22 13:11:18" level=debug msg="Removing: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=77.111.244.30 path="/dashboard"
time="Oct 22 13:11:18" level=debug msg="Adding: Authorization" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=TransformHeaders org_id=ku origin=77.111.244.30 path="/dashboard"
time="Oct 22 13:11:18" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 code=200 mw=TransformHeaders ns=43677 org_id=ku origin=77.111.244.30 path="/dashboard"
time="Oct 22 13:11:18" level=debug msg="Started proxy"
time="Oct 22 13:11:18" level=debug msg="Stripping: /dashboard"
time="Oct 22 13:11:18" level=debug msg="Upstream Path is: "
time="Oct 22 13:11:18" level=debug msg=Started api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku ts=1634908278019442685
time="Oct 22 13:11:18" level=debug msg="Upstream request URL: " api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 13:11:18" level=debug msg="Outbound request URL: http://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443" api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy org_id=ku
time="Oct 22 13:11:18" level=debug msg=Finished api_id=a3ViZXJuZXRlcy1kYXNoYm9hcmQva3ViZXJuZXRlcy1kYXNoYm9hcmQtZGFzaGJvYXJkLWluZ3Jlc3MtMmYxNTJiNzQ5 api_name=kubernetes-dashboard-dashboard-ingress-2f152b749 mw=ReverseProxy ns=1323905 org_id=ku
time="Oct 22 13:11:18" level=debug msg="Upstream request took (ms): 1.348667"
time="Oct 22 13:11:18" level=debug msg="Done proxy"
time="Oct 22 13:11:21" level=debug msg="Primary instance set, I am master" prefix=host-check-mgr
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  labels:
    template: "true"
spec:
  name: dashboard-basit1
  protocol: http
  listen_port: 80
  use_keyless: true
  active: true
  proxy:
    target_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
    listen_path: /dashboard
    strip_listen_path: true
    preserve_host_header: true
    transport:
      proxy_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
      ssl_insecure_skip_verify: true
      ssl_force_common_name_check: false
  version_data:
    default_version: Default
    not_versioned: true
    versions:
      Default:
        name: Default
        paths:
          black_list: []
          ignored: []
          white_list: []
        global_headers_remove:
          - Authorization
        global_headers:
          Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6InNlaWJRUblablablablablablabla
This config worked:
cat << EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana-ingress
  namespace: eck
  annotations:
    kubernetes.io/ingress.class: tyk
    tyk.io/template: kibana
spec:
  rules:
    - host: kibana.mysubdomain.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kibana-kb-http
                port:
                  number: 5601
EOF
cat << EOF | kubectl apply -n eck -f -
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: kibana
  namespace: eck
  labels:
    template: "true"
spec:
  name: kibana
  protocol: http
  listen_port: 80
  use_keyless: true
  active: true
  proxy:
    target_url: https://kibana-kb-http.eck.svc.cluster.local:5601
    listen_path: /kibana
    strip_listen_path: true
    preserve_host_header: true
    transport:
      proxy_url: https://kibana-kb-http.eck.svc.cluster.local:5601
      ssl_insecure_skip_verify: true
      ssl_force_common_name_check: false
  version_data:
    default_version: Default
    not_versioned: true
    versions:
      Default:
        name: Default
        paths:
          black_list: []
          ignored: []
          white_list: []
EOF
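A quick smoke test of the new route might look like this (a sketch, assuming DNS for the Ingress host resolves to the ALB; note the ApiDefinition above listens on /kibana):

# The gateway listens on /kibana, strips it, and proxies to the Kibana
# service over HTTPS inside the cluster.
curl -v https://kibana.mysubdomain.mydomain.com/kibana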
...but the Kubernetes Dashboard problem still exists.
The Kubernetes Dashboard problem was fixed too. Here is the working config:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: tyk
    tyk.io/template: dashboard
spec:
  rules:
    - host: kd.mysubdomain.mydomain.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard
                port:
                  number: 80
---
apiVersion: tyk.tyk.io/v1alpha1
kind: ApiDefinition
metadata:
  name: dashboard
  namespace: kubernetes-dashboard
  labels:
    template: "true"
spec:
  name: dashboard-basit1
  protocol: http
  listen_port: 80
  use_keyless: true
  active: true
  proxy:
    target_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local
    listen_path: /
    strip_listen_path: true
    preserve_host_header: true
    transport:
      proxy_url: https://kubernetes-dashboard.kubernetes-dashboard.svc.cluster.local:443
      ssl_insecure_skip_verify: true
      ssl_force_common_name_check: false
  version_data:
    default_version: Default
    not_versioned: true
    versions:
      Default:
        name: Default
        paths:
          black_list: []
          ignored: []
          white_list: []
        global_headers_remove:
          - Authorization
        global_headers:
          Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6InNlaWJRUkVqUblablablablablablabla
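And the corresponding check for the dashboard route (assuming kd.mysubdomain.mydomain.com resolves to the ALB; the API now listens on /):

# The dashboard is served from the root path through the gateway.
curl -v https://kd.mysubdomain.mydomain.com/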
@tirelibirefe thanks for the updates.
What do you think were the gaps that led to the back and forth? How can we improve the charts to make this process easier?
...but although the authorization token is correct, the K8s dashboard complains that the token is incorrect.
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 88.255.99.51:
2021/10/22 15:16:55 Getting application global configuration
2021/10/22 15:16:55 Application configuration {"serverTime":1634915815069}
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/settings/pinner request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/plugin/config request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/settings/global request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/systembanner request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/login/status request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 200 status code
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 88.255.99.51:
2021/10/22 15:16:55 [2021-10-22T15:16:55Z] Outcoming response to 88.255.99.51 with 401 status code
Could Tyk be changing the token?
No, Tyk won't change the token.
Here's an example I set up (in JSON):
"global_headers": {
    "Authorization": "foo"
},
"global_headers_remove": [
    "Authorization"
],
When I send a request to the httpbin echo server:
$ curl localhost:8080/httpbin/get -H "authorization: 123"
{
...
"Authorization": "foo",
...
}
Tyk replaces the incoming Authorization value from the client (123) with foo, which is set in Tyk.
So that means my definition in the YAML above is correct?
...
global_headers_remove:
  - Authorization
global_headers:
  Authorization: eyJhbGciOiJSUzI1NiIsImtpZCI6InNlaWJRUkVqUblablablablablablabla
...
Yes. Can you point the target URL at http://httpbin.org momentarily to see what the request looks like?
The page is displayed normally. What does that mean?
Sorry, I meant to ask if you can also add the /get endpoint to your request. That will echo back your request, and you can see whether the Authorization header is being set correctly by Tyk.
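For instance (a sketch, assuming the latest config above, where the API listens on / under kd.mysubdomain.mydomain.com, and target_url is temporarily switched to http://httpbin.org):

# httpbin's /get endpoint echoes the request back, so the Authorization
# value injected by the gateway should appear in the response headers.
curl https://kd.mysubdomain.mydomain.com/get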
@sedkis can you log in to and access the dashboard properly via a browser?
Hello, I have a tyk-ce gateway and Tyk Operator in a Kubernetes (AWS EKS) environment, installed using the Helm chart. In my environment, SSL termination is handled at the LB level by an ALB.
There are some HTTPS backend services, such as Kibana and the Kubernetes Dashboard, in Kubernetes. These services must be exposed through the Tyk ingress.
The following ApiDefinition can be a reference.
Tyk cannot connect to HTTPS services in Kubernetes. I need your advice. If anyone has managed to connect to secure backends this way, any advice would be very helpful.
Thanks & Regards