sharmavijay86 closed this issue 2 years ago
Is a Deployment created?

Yes, it is:

```
$ kubectl get deploy -n openunison
NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-oidc-proxy-orchestra   0/1     1            0           2m6s
openunison-operator         1/1     1            1           95m
openunison-orchestra        1/1     1            1           2m4s
```
kube-oidc-proxy-orchestra is not becoming ready because no Ingress is being created for it.
What does your values.yaml look like? Chances are it doesn't like something there. The operator pod's logs should have an error too from when it tried to create the Ingress object.
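A quick way to check those logs and the Deployment state (names taken from the output above; `grep` filtering is just a convenience, the raw logs work too):

```
# Look for Ingress-related errors in the operator's recent logs
kubectl logs deployment/openunison-operator -n openunison --tail=200 | grep -i ingress

# Check the proxy Deployment and its events for admission/scheduling errors
kubectl describe deploy kube-oidc-proxy-orchestra -n openunison
```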
My values.yaml:

```yaml
network:
  openunison_host: "k8sou.eks.example.site"
  dashboard_host: "k8sdb.eks.example.site"
  api_server_host: "xxxxxxxxxxxxxx.eu-west-1.eks.amazonaws.com"
  session_inactivity_timeout_seconds: 900
  k8s_url: https://xxxxxxxxxxxxxx.eu-west-1.eks.amazonaws.com
  createIngressCertificate: false
  ingress_type: nginx
  ingress_annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt

cert_template:
  ou: "Kubernetes"
  o: "MyOrg"
  l: "My Cluster"
  st: "State of Cluster"
  c: "MyCountry"

image: "docker.io/tremolosecurity/openunison-k8s-login-oidc:latest"
myvd_config_path: "WEB-INF/myvd.conf"
k8s_cluster_name: my_eks_cluster
enable_impersonation: true

dashboard:
  namespace: "kubernetes-dashboard"
  cert_name: "kubernetes-dashboard-certs"
  label: "k8s-app=kubernetes-dashboard"
  service_name: kubernetes-dashboard

certs:
  use_k8s_cm: false

trusted_certs: []
#- name: idp
#  pem_b64: SDFGSDFGHDFHSDFGSDGSDFGDS

monitoring:
  prometheus_service_account: system:serviceaccount:monitoring:prometheus-k8s

oidc:
  client_id: xxxxxxxxxxxxxxxxxxxx
  auth_url: https://login.microsoftonline.com/xxxxxx/oauth2/v2.0/authorize
  token_url: https://login.microsoftonline.com/xxxxxxxxxxx/oauth2/v2.0/token
  user_in_idtoken: false
  userinfo_url: https://graph.microsoft.com/oidc/userinfo
  domain: ""
  scopes: openid email profile
  claims:
    sub: sub
    email: email
    given_name: given_name
    family_name: family_name
    display_name: name
    groups: groups

impersonation:
  use_jetstack: true
  jetstack_oidc_proxy_image: quay.io/jetstack/kube-oidc-proxy:v0.3.0
  explicit_certificate_trust: false
  ca_secret_name: ou-tls-secret

network_policies:
  enabled: false
  ingress:
    enabled: true
    labels:
      app.kubernetes.io/name: ingress-nginx
  monitoring:
    enabled: true
    labels:
      app.kubernetes.io/name: monitoring
  apiserver:
    enabled: false
    labels:
      app.kubernetes.io/name: kube-system

services:
  enable_tokenrequest: false
  token_request_audience: api
  token_request_expiration_seconds: 600
  node_selectors: []
  pullSecret: ""

openunison:
  replicas: 1
  non_secret_data: {}
  secrets: []
```
I am using Let's Encrypt as the ACME issuer with cert-manager.

I deployed ingress-nginx with:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.45.0/deploy/static/provider/aws/deploy.yaml
```

I deployed the dashboard with:

```
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.5/aio/deploy/recommended.yaml
```
My secret file looks like:

```yaml
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: orchestra-secrets-source
  namespace: openunison
data:
  OIDC_CLIENT_SECRET: <base64-client-id-from-azure>
  K8S_DB_SECRET: aW0gYSBzZWNyZXQ=
  unisonKeystorePassword: aW0gYSBzZWNyZXQ=
```
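For reference, the base64 values in `data:` can be generated on the command line (the plaintext here is just the demo value from the manifest above, not a real secret):

```shell
# base64-encode a secret value; -n matters, since a plain echo would
# encode a trailing newline into the value and break the password
echo -n 'im a secret' | base64
# → aW0gYSBzZWNyZXQ=
```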
I am running on EKS. Here are all the resource details:
```
$ kubectl get po,svc,deploy,ing -n openunison
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME                                             READY   STATUS    RESTARTS   AGE
pod/kube-oidc-proxy-orchestra-745b674d67-4vczb   0/1     Running   0          66s
pod/openunison-operator-7df75858dc-vbdd2         1/1     Running   0          49m
pod/openunison-orchestra-cf469cbcf-xtbn6         1/1     Running   0          66s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kube-oidc-proxy-orchestra   ClusterIP   172.20.191.41   <none>        443/TCP          68s
service/openunison-orchestra        ClusterIP   172.20.64.196   <none>        443/TCP,80/TCP   66s

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kube-oidc-proxy-orchestra   0/1     1            0           68s
deployment.apps/openunison-operator         1/1     1            1           49m
deployment.apps/openunison-orchestra        1/1     1            1           66s
$
```
The only thing I can think of is that `api_server_host` is pointing outside of your cluster (maybe there's an admission controller?). When `enable_impersonation` is `true`, `api_server_host` is the host name you want kubectl to use to reach OpenUnison's proxy to the API server, not the EKS endpoint itself. Also, when `enable_impersonation` is `true`, `k8s_url` is ignored (OpenUnison gets the URL from its Pod's DNS). If after changing your values you're still seeing an issue, take a look at the logs for the operator. It should have the error from the API server.
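Based on that advice, the `network:` section would look something like this (the `k8sapi.eks.example.site` host is just an illustrative placeholder matching the other `example.site` hosts in the values above, not a name the chart requires):

```yaml
network:
  openunison_host: "k8sou.eks.example.site"
  dashboard_host: "k8sdb.eks.example.site"
  # With enable_impersonation: true, this must be a host name that routes
  # through your ingress to OpenUnison's proxy, NOT the EKS API endpoint
  api_server_host: "k8sapi.eks.example.site"
  # k8s_url is ignored when enable_impersonation is true, so the EKS
  # endpoint does not need to appear here at all
```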
Closing due to inactivity.
Hi, maybe this changed with some updates, but I am deploying the Helm chart and it is not creating any Ingress resource.