a-y-klimovich opened this issue 5 years ago
@a-y-klimovich We need to recreate this on GCP. For our internal environments we have not hit this, nor did we hit it on Amazon's Kubernetes environment. Can you confirm whether you did a multiple or full build? Cheers!
> Can you confirm whether you did a multiple or full build?

I did a full build.
@a-y-klimovich Google's ingress is actually configuring an external load balancer. The information in the ingress yaml file will need to be modified. Here's one that works with kubernetes-ingress and nginx-ingress on-prem:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip off;
  name: sas-viya-visuals-ingress
  namespace: default
spec:
  rules:
  - host: somedomain.example
    http:
      paths:
      - backend:
          serviceName: sas-viya-httpproxy
          servicePort: 80
  # tls:
  # - hosts:
  #   - sas-viya.oldver.pdtk8s.sas.com
  #   secretName: @REPLACE_ME_WITH_YOUR_CERT@
Here's a modified example for GKE:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip off;
  name: sas-viya-visuals-ingress
  namespace: default
spec:
  backend:
    serviceName: sas-viya-httpproxy
    servicePort: 80
  # tls:
  # - hosts:
  #   - sas-viya.oldver.pdtk8s.sas.com
  #   secretName: @REPLACE_ME_WITH_YOUR_CERT@
See https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer for more information.
I found it took several minutes before the backend was reported as responding. They highlight that in the documentation as well.
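A caveat worth adding here (my own note, not from the thread): with the extensions/v1beta1 API, the controller that claims an Ingress is chosen via the kubernetes.io/ingress.class annotation. On GKE, an Ingress with no class is handled by the built-in GCE controller, which ignores the nginx.ingress.kubernetes.io annotations shown above. A sketch of pinning the class explicitly:

```yaml
# Assumed annotation for the extensions/v1beta1 era: "gce" selects GKE's
# built-in controller, "nginx" an in-cluster nginx-ingress controller.
metadata:
  annotations:
    kubernetes.io/ingress.class: "gce"
```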
@a-y-klimovich My previous comment addresses the ingress change; there is a change in the services/httpproxy.yml file as well. As you found, this ingress/load balancer requires a NodePort service, since it sits outside the cluster.
    app: sas-viya-httpproxy
  type: NodePort
  ports:
Inserting the "type: NodePort" line opens up the necessary ports.
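Pulling the fragment above together, the edited services/httpproxy.yml would look roughly like this. This is a sketch: the port list is borrowed from the LoadBalancer variant posted later in this thread, so verify the numbers against your generated file.

```yaml
---
apiVersion: v1
kind: Service
metadata:
  name: sas-viya-httpproxy
spec:
  selector:
    app: sas-viya-httpproxy
  type: NodePort        # the inserted line; each port also gets a node port
  ports:
  - name: "80"
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: "443"
    protocol: TCP
    port: 443
    targetPort: 6443
```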
@a-y-klimovich, was the information from @tomsherrod able to get you past the issue?
I'm not able to use NodePort as the type, but a LoadBalancer type worked. Here's what I did after the build to deploy to GKE and see the SAS logon screen:
Change the services/httpproxy.yml file. It will look like this:
> cat services/httpproxy.yml
---
apiVersion: v1
kind: Service
metadata:
  name: sas-viya-httpproxy
spec:
  selector:
    app: sas-viya-httpproxy
  type: LoadBalancer
  ports:
  - name: "80"
    protocol: TCP
    port: 80
    targetPort: 8080
  - name: "443"
    protocol: TCP
    port: 443
    targetPort: 6443
  sessionAffinity: None
Change the ingress/sas-viya.yml file (not to be confused with the namespace/sas-viya.yml file).

> cat ingress/sas-viya.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip off;
  name: sas-viya-visuals-ingress
  namespace: default
spec:
  backend:
    serviceName: sas-viya-httpproxy
    servicePort: 80
Run kubectl apply -f on the new files, ingress/sas-viya.yml and services/httpproxy.yml.

If you get the message

The Service "sas-viya-httpproxy" is invalid: spec.clusterIP: Invalid value: "": field is immutable

then try adding the --force flag to the kubectl apply -f command.
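For what it's worth, --force makes kubectl apply delete and re-create the object rather than patch it. If that still complains, an explicit delete-then-apply achieves the same thing; a sketch assuming the file paths used above:

```shell
# Free the immutable clusterIP by deleting the old Service, then re-create it.
kubectl delete svc sas-viya-httpproxy
kubectl apply -f services/httpproxy.yml

# Equivalent one-step form:
kubectl replace --force -f services/httpproxy.yml
```

These commands assume a reachable cluster and the default namespace used throughout this thread.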
Run kubectl get svc to find the external IP address. Here's what it shows for me:
> kubectl get svc
NAME                     TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                                                                             AGE
kubernetes               ClusterIP      10.108.0.1    <none>        443/TCP                                                                             3h9m
sas-viya-cas             ClusterIP      None          <none>        5570/TCP,5571/TCP,8777/TCP                                                          3h1m
sas-viya-computeserver   ClusterIP      None          <none>        5600/TCP                                                                            3h1m
sas-viya-consul          ClusterIP      None          <none>        8200/TCP,8201/TCP,8300/TCP,8301/TCP,8302/TCP,8400/TCP,8500/TCP,8501/TCP,8600/TCP   3h1m
sas-viya-httpproxy       LoadBalancer   10.108.1.56   34.69.35.37   80:32422/TCP,443:31534/TCP                                                          53m
sas-viya-pgpoolc         ClusterIP      None          <none>        5431/TCP                                                                            3h1m
sas-viya-programming     ClusterIP      None          <none>        7080/TCP                                                                            3h1m
sas-viya-rabbitmq        ClusterIP      None          <none>        5672/TCP,15672/TCP                                                                  3h1m
sas-viya-sasdatasvrc     ClusterIP      None          <none>        5432/TCP                                                                            3h1m
sas-viya-subdomain       ClusterIP      None          <none>        80/TCP                                                                              3h1m
@Collinux sssd addon integration?

With the above changes the ingress service IP cannot access SAS Logon, although I can get to the login screen using the LoadBalancer (external IP) of http-proxy. But when I try to log in using sasboot I get the following error:
@asbisen I faced exactly the same problem. I found that every Viya service on the pods tries to talk using the pod hostname. Since sas-viya-httpproxy is not a headless service, the sas-viya-httpproxy-0 hostname is not registered in Kubernetes DNS, and Consul service discovery fails. It is not an elegant approach, but the following configuration worked for me.
ingress/sas-viya.yml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip off;
  name: sas-viya-visuals-ingress
  namespace: sas-viya
spec:
  backend:
    serviceName: sas-viya-httpproxy
    servicePort: 80
  # rules:
  # - host: sas-viya.sas-viya.example.com
  #   http:
  #     paths:
  #     - backend:
  #         serviceName: sas-viya-httpproxy
  #         servicePort: 80
  # tls:
  # - hosts:
  #   - sas-viya.sas-viya.example.com
  #   secretName: @REPLACE_ME_WITH_YOUR_CERT@
services/httpproxy.yml
---
apiVersion: v1
kind: Service
metadata:
  name: sas-viya-httpproxy
spec:
  type: NodePort
  selector:
    app: sas-viya-httpproxy
  ports:
  - name: "80"
    protocol: TCP
    port: 80
    targetPort: 80
  - name: "443"
    protocol: TCP
    port: 443
    targetPort: 443
  # sessionAffinity: None
  clusterIP: 10.39.243.177
Add hostAliases to every deployment manifest. The following example is deployments/adminservices.yml:
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: sas-viya-adminservices
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: sas-viya-adminservices
        domain: sas-viya
    spec:
      hostAliases:
      - ip: "10.39.243.177"
        hostnames:
        - "sas-viya-httpproxy-0.sas-viya-httpproxy.sas-viya.svc.cluster.local"
Additionally, the GCP load balancer health check does not treat a 3xx response as successful, so I added a readinessProbe to deployments/httpproxy.yml:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: sas-viya-httpproxy
spec:
  serviceName: "sas-viya-httpproxy"
  replicas: 1
  template:
    metadata:
      labels:
        app: sas-viya-httpproxy
        domain: sas-viya
    spec:
      # Required for TLS configurations
      #serviceAccountName: sas-viya-account
      subdomain: sas-viya-subdomain
      containers:
      - name: sas-viya-httpproxy
        image: gcr.io/xxxxxxxx/sas-viya-httpproxy:19.04.0-20190525015823-8522290
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        - containerPort: 443
        readinessProbe:
          httpGet:
            path: /healthz
            port: 80
And create the healthz page:
kubectl -n sas-viya exec -it sas-viya-httpproxy-0 -- touch /var/www/html/healthz
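One caveat: the touched file lives in the container filesystem, so it disappears whenever the pod is re-created. A way to avoid re-running the exec by hand (untested here, an assumption on my part) is a postStart lifecycle hook on the container in deployments/httpproxy.yml:

```yaml
# Hypothetical addition under spec.template.spec.containers in
# deployments/httpproxy.yml: re-create the probe target on every start.
lifecycle:
  postStart:
    exec:
      command: ["/bin/sh", "-c", "touch /var/www/html/healthz"]
```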
Describe the bug
Google Cloud Platform deployment. By default the sas-viya-httpproxy service is created with type ClusterIP, but the Ingress configuration doesn't support this type. The NodePort service type is supported by Ingress; however, with this type the sas-viya-httpproxy-0 hostname is not resolved across containers.

Expected behavior
Successful Ingress registration for the service, or any workaround for Google Cloud Platform.