Open evheniyt opened 2 months ago
Hi @evheniyt thanks for reporting!
Be sure to check out the docs and the Contributing Guidelines while you wait for a human to take a look at this :slightly_smiling_face:
Cheers!
@evheniyt we will try to reproduce the issue and will get back to you
@evheniyt what version of kind are you using? Could you also share your kind config?
```
➜ kubernetes-ingress git:(main) ✗ kind --version
kind version 0.24.0
```
I'm not using kind. Kubernetes version is 1.29
> What Kubernetes platforms are you running on?
> Kind
ok, thanks
Self-hosted on Hetzner
@evheniyt when you tested NIC v3.6.2, what version of the NIC Helm chart were you using?
1.3.2
Hi @evheniyt
Based on your example
```
api-pod # curl https://api.example.com -v
* Host api.example.com:443 was resolved.
* IPv6: (none)
* IPv4: 10.110.34.14
*   Trying 10.110.34.14:443...
* connect to 10.110.34.14 port 443 from 10.244.0.30 port 51694 failed: Connection refused
```
You are making an HTTPS request from inside the cluster directly to the api.example.com / api.test-services.svc.cluster.local Service. Given you are not routing the request via an Ingress or VirtualServer, do the pods handling these requests perform TLS?
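One quick way to answer this question is to probe the backend directly, bypassing DNS and the ingress path. The following is a sketch, not part of the thread: a small self-contained helper that reports whether a given host:port completes a TLS handshake (a backend that serves only plain HTTP on port 80 would return False).

```python
import socket
import ssl


def speaks_tls(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if host:port completes a TLS handshake, False if the
    listener is plain HTTP or nothing is listening at all."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # we only care whether a handshake
    ctx.verify_mode = ssl.CERT_NONE  # succeeds, not certificate validity
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False


# Hypothetical usage from inside a pod, with the Service IP from the
# curl output above (illustrative, not a verified result):
# speaks_tls("10.110.34.14", 443)
```

Comparing the result against the pod IP versus the Service IP would show whether TLS is terminated by the application itself or intercepted somewhere in between (for example by a hostNetwork ingress).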
hi @pdabelf5,
You are right, api.example.com resolves to the Service of the application. The interesting part is that this application listens only on port 80, yet I could make HTTPS requests and saw a response from the NGINX Ingress Controller. So it looks like all my requests to the internal IP of the Service are going through the NGINX controller. I'm not sure why exactly it works like this, maybe because of hostNetwork: true
...
That behavior stopped working in 3.3.0.
Hi @evheniyt ,
Unfortunately I was not able to reproduce your case using v3.2.1 of NGINX Ingress Controller. However, this may be due to differences in how Kubernetes networking is configured in my environment (I used EKS) versus your bare-metal setup on Hetzner.
Do you need to access https://api.example.com from within the cluster? If so, I will try to find a working example for you.
Version
3.6.2
What Kubernetes platforms are you running on?
Kind
What happened?
After updating from 3.2.1 to 3.3.0 (also tried with 3.6.2), we found that TLS offload stopped working for requests coming from inside the cluster.
Our CoreDNS is configured to resolve some DNS names like api.example.com to a svc.cluster.local address. That setup was working fine with the 3.2 version of the controller, and we could successfully request https://api.example.com from inside the cluster.
After updating to a new version of the controller we found that this functionality stopped working (for both Ingress and VirtualServer). At the same time, HTTPS requests from outside the cluster work fine. Also, HTTP requests work fine inside the cluster, but HTTPS doesn't. We are installing the controller with the Helm chart and these values:
The only thing we have added while updating from the 0.18.1 chart to 1.0.0 is hostNetwork: true, without which the ingress wasn't working at all.
Steps to reproduce
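The referenced values file did not survive in the issue body. A minimal sketch of the relevant portion, assuming the official kubernetes-ingress Helm chart (controller.hostNetwork is the value mentioned above; everything else shown is illustrative):

```yaml
controller:
  kind: deployment
  # Added when moving from chart 0.18.1 to 1.0.0, per the report above
  hostNetwork: true
```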
No response
Expected behaviour
No response
Kubectl Describe output
No response
Log output
No response