Closed: Rainfarm closed this issue 4 months ago.
This issue is currently awaiting triage.
If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Your nodes are on AWS and your curl destination is the hostname localhost, so nothing can be valid about that curl.
But the bigger problem is that the service created by the ingress-nginx controller is in a pending state. So there is no question at all of even sending or receiving an HTTP/HTTPS request.
If streaming is broken, it can be reproduced even on a kind cluster or a minikube cluster.
So please check the documentation on how to install, run, and use the ingress-nginx controller. Then try it on a kind cluster or a minikube cluster. Once you have it all figured out, run the install with an appropriate and preferably documented process. Then please edit this issue description and provide data that can be analyzed as a problem in the controller.
Thanks
/remove-kind bug /kind support /triage needs-information
Some clarifications: as I've mentioned in the question:
Note: the ingress svc has been port-forwarded to local using: kubectl -n ingress-nginx port-forward svc/ingress-nginx-controller 8000:80.
That's why I can use localhost:8000 as the curl destination. It is valid.
The reason for testing via port-forwarding is that our service is exposed to the Internet via the CloudFlare Tunnel solution (which is also why the EXTERNAL-IP is in <pending>
status). I used port-forwarding to exclude possible impacts from the [CloudFlare Tunnel].
The traffic path is:
Internet => [CloudFlare Tunnel] => [nginx ingress controller] => [application service] => [application pod]
The test scenarios are:
Thanks!
I think you are providing info that helps, but I don't know how to use it to reproduce the problem on a kind or minikube cluster.
Would you consider this https://github.com/kubernetes/ingress-nginx/issues/11162#issuecomment-2019448596 a valid text-streaming test?
If you are port-forwarding, is it across the internet or within a LAN? All such details are needed for me to reproduce.
The critical piece of info is an application docker image, of a small streaming server, that anyone can use on their own cluster to test.
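For what it's worth, such a small streaming server can be sketched in a few lines of Python. Everything here (port, payload, pacing) is an illustrative assumption rather than something taken from this issue; the point is only to have a backend whose chunks are emitted over time, so buffering along the path becomes visible.

```python
# Minimal chunked-streaming HTTP server, a stand-in for the requested
# test image. Port, pacing, and payload are illustrative assumptions.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

class StreamHandler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # chunked transfer encoding needs HTTP/1.1

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Transfer-Encoding", "chunked")
        self.end_headers()
        # Emit five chunks with a pause between them; a buffering proxy
        # will deliver them to the client as one blob instead.
        for i in range(5):
            chunk = f"chunk {i}\n".encode()
            self.wfile.write(b"%x\r\n%s\r\n" % (len(chunk), chunk))
            self.wfile.flush()
            time.sleep(0.2)
        self.wfile.write(b"0\r\n\r\n")  # terminating chunk

    def log_message(self, *args):  # silence per-request logging
        pass

def serve(port=8080):
    HTTPServer(("", port), StreamHandler).serve_forever()
```

Packaged in any base Python image and deployed behind the Ingress, `curl -N` against it should print one line every fraction of a second when streaming works, and all five lines at once when something buffers the response.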
You are not providing the controller pod's log messages from when streaming fails.
Maybe you should edit the issue description and make sure it contains enough info, including the small details from the outputs of kubectl commands such as describe and logs.
Since you showed ChatGPT, I will pick some random app from artifacthub.io to test, unless you can provide a minimalistic app.
Unable to find an app to use in the test.
I enabled debug-level logging in the nginx ingress controller, ran a test, and grabbed the log. Since quite a lot of log lines are generated at debug level, there might be some other activities logged as well.
Please check the attached log: test.log.tar.gz. Here are some highlights:
POST /api/v1/llm/chat_stream HTTP/1.1
No error is logged during the test. What we can see from the client side is that the whole content of the response was received in one go, with no streaming effect.
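To make "received in one go" measurable rather than eyeballed, a small hypothetical helper can timestamp each piece of the response body as it arrives (host, port, and path are placeholders to fill in, e.g. the port-forwarded localhost:8000):

```python
# Hypothetical client-side check: record the arrival time of each piece of
# the response body. With real streaming the gaps mirror the server's
# pacing; behind a buffering proxy everything arrives at essentially t=0.
import time
import http.client

def read_chunk_timings(host, port, path, method="GET", body=None, headers=None):
    conn = http.client.HTTPConnection(host, port)
    conn.request(method, path, body=body, headers=headers or {})
    resp = conn.getresponse()
    start = time.monotonic()
    timings = []
    while True:
        piece = resp.read1(65536)  # returns as soon as any data is available
        if not piece:
            break
        timings.append((round(time.monotonic() - start, 3), piece))
    conn.close()
    return timings
```

If the server paces its output over several seconds but the last recorded elapsed time is near zero, the response was buffered somewhere on the path.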
I'll try to come up with a text streaming test that is reproducible in local environment.
Thanks!
Without knowing the app and the curl output from real use, it's hard to understand why you think streaming is broken.
@Rainfarm How did you solve the problem?
What happened:
We use the NGINX ingress controller in an EKS cluster, and text streaming from the services running in the EKS cluster doesn't work: the client always gets the response in one go. We've checked similar issues here (e.g., https://github.com/kubernetes/ingress-nginx/issues/10482), but the suggested solutions don't help.
Below are some details:
In the printout from the curl command, we can see that the response has the header:
transfer-encoding: chunked
What you expected to happen: The client should receive the response as a stream.
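For context, the mitigation usually suggested for this symptom (including in issue #10482, which the reporter says did not help here) is disabling nginx response buffering per Ingress via annotation. Shown only as a sketch for readers landing on this thread; the metadata fragment below is illustrative:

```yaml
# Per-Ingress annotation that disables nginx response buffering, the fix
# commonly suggested when chunked/streaming responses arrive in one go.
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffering: "off"
```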
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):
v1.9.6
Kubernetes version (use kubectl version):
Client Version: v1.28.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.4-eks-036c24b
Environment:
Cloud provider or hardware configuration: AWS EKS
OS (e.g. from /etc/os-release): Amazon Linux 2
Kernel (e.g. uname -a): 5.10.205-195.807.amzn2.x86_64
Install tools:
Please mention how/where was the cluster created like kubeadm/kops/minikube/kind etc.
Basic cluster related info:
kubectl version
kubectl get nodes -o wide
How was the ingress-nginx-controller installed:
helm ls -A | grep -i ingress
helm -n <ingresscontrollernamespace> get values <helmreleasename>
Current State of the controller:
kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.9.6
              helm.sh/chart=ingress-nginx-4.9.1
Annotations:  meta.helm.sh/release-name: ingress-nginx
              meta.helm.sh/release-namespace: ingress-nginx
Controller:   k8s.io/ingress-nginx
Events:
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod/ingress-nginx-controller-7fdbfcb8f9-l7t92 1/1 Running 0 120d 100.72.15.115 ip-100-72-15-134.eu-west-1.compute.internal
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR service/ingress-nginx-controller LoadBalancer 10.100.116.31 80:31909/TCP,443:30721/TCP 120d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
service/ingress-nginx-controller-admission ClusterIP 10.100.25.196 443/TCP 120d app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR deployment.apps/ingress-nginx-controller 1/1 1 1 120d controller registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR replicaset.apps/ingress-nginx-controller-7fdbfcb8f9 1 1 1 120d controller registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7fdbfcb8f9
Name:             ingress-nginx-controller-7fdbfcb8f9-l7t92
Namespace:        ingress-nginx
Priority:         0
Service Account:  ingress-nginx
Node:             ip-100-72-15-134.eu-west-1.compute.internal/100.72.15.134
Start Time:       Mon, 05 Feb 2024 21:33:12 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=ingress-nginx
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.9.6
                  helm.sh/chart=ingress-nginx-4.9.1
                  pod-template-hash=7fdbfcb8f9
Annotations:
Status: Running
IP: 100.72.15.115
IPs:
IP: 100.72.15.115
Controlled By: ReplicaSet/ingress-nginx-controller-7fdbfcb8f9
Containers:
controller:
Container ID: containerd://18597a97709a1fb027c68a203a89075bb1922727795c63bdcbccb42031a9d133
Image: registry.k8s.io/ingress-nginx/controller:v1.9.6@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
Image ID: registry.k8s.io/ingress-nginx/controller@sha256:1405cc613bd95b2c6edd8b2a152510ae91c7e62aea4698500d23b2145960ab9c
Ports: 80/TCP, 443/TCP, 8443/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
Args:
/nginx-ingress-controller
--publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
--election-id=ingress-nginx-leader
--controller-class=k8s.io/ingress-nginx
--ingress-class=nginx
--configmap=$(POD_NAMESPACE)/ingress-nginx-controller
--validating-webhook=:8443
--validating-webhook-certificate=/usr/local/certificates/cert
--validating-webhook-key=/usr/local/certificates/key
State: Running
Started: Mon, 05 Feb 2024 21:33:28 +0100
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 90Mi
Liveness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
Readiness: http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
Environment:
POD_NAME: ingress-nginx-controller-7fdbfcb8f9-l7t92 (v1:metadata.name)
POD_NAMESPACE: ingress-nginx (v1:metadata.namespace)
LD_PRELOAD: /usr/local/lib/libmimalloc.so
Mounts:
/usr/local/certificates/ from webhook-cert (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gfhx9 (ro)
Conditions:
Type Status
PodReadyToStartContainers True
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
webhook-cert:
Type: Secret (a volume populated by a Secret)
SecretName: ingress-nginx-admission
Optional: false
kube-api-access-gfhx9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional:
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE ingress-nginx-controller LoadBalancer 10.100.116.31 80:31909/TCP,443:30721/TCP 120d
ingress-nginx-controller-admission ClusterIP 10.100.25.196 443/TCP 120d
% kubectl -n ingress-nginx describe svc ingress-nginx-controller
Name: ingress-nginx-controller
Namespace: ingress-nginx
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/instance=ingress-nginx
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
app.kubernetes.io/version=1.9.6
helm.sh/chart=ingress-nginx-4.9.1
Annotations: meta.helm.sh/release-name: ingress-nginx
meta.helm.sh/release-namespace: ingress-nginx
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
Type: LoadBalancer
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.116.31
IPs: 10.100.116.31
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 31909/TCP
Endpoints: 100.72.15.115:80
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 30721/TCP
Endpoints: 100.72.15.115:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
Normal EnsuringLoadBalancer 4m13s (x38 over 164m) service-controller Ensuring load balancer
Name:             appgateway-ingress
Labels:           app=appgateway
                  app.kubernetes.io/managed-by=Helm
Namespace:        app
Address:
Ingress Class:    nginx
Default backend:
Rules:
Host Path Backends