daxio / k8s-lemp

LEMP stack in a Kubernetes cluster
GNU General Public License v3.0

Error Creating nginx-RBAC.yaml #16

Closed Coolfeather2 closed 6 years ago

Coolfeather2 commented 6 years ago

Unable to create the RBAC roles, so the pod fails to start later:

Error from server (Forbidden): error when creating "nginx/nginx-RBAC.yaml": clusterroles.rbac.authorization.k8s.io "nginx-ingress-clusterrole" is forbidden: attempt to grant extra privileges

Pod Error:

F0119 08:16:33.388529 5 main.go:79] ✖ It seems the cluster it is running with Authorization enabled (like RBAC) and there is no permissions for the ingress controller. Please check the configuration
chepurko commented 6 years ago

Can you please post the output of kubectl version?

Also wondering if there's more to the error messages or if they're cut off... Did you copy them from the dashboard? If you check the error via `kubectl describe <pod>`, you may see the complete message.

Coolfeather2 commented 6 years ago

kubectl version:

Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.6", GitCommit:"6260bb08c46c31eea6cb538b34a9ceb3e406689c", GitTreeState:"clean", BuildDate:"2017-12-21T06:34:11Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.6-gke.0", GitCommit:"ee9a97661f14ee0b1ca31d6edd30480c89347c79", GitTreeState:"clean", BuildDate:"2018-01-05T03:36:42Z", GoVersion:"go1.8.3b4", Compiler:"gc", Platform:"linux/amd64"}

kubectl describe pod:

Name:           nginx-5988f7786c-rndv5
Namespace:      nginx-ingress
Node:           gke-cluster-1-default-pool-b58e707f-dcd3/10.128.0.2
Start Time:     Fri, 19 Jan 2018 16:13:27 +0800
Labels:         app=nginx
                pod-template-hash=1544933427
                tier=ingress
Annotations:    kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"nginx-ingress","name":"nginx-5988f7786c","uid":"9f7500b8-fcf0-11e7-a627-42010a800...
Status:         Running
IP:             10.8.2.7
Created By:     ReplicaSet/nginx-5988f7786c
Controlled By:  ReplicaSet/nginx-5988f7786c
Containers:
  nginx:
    Container ID:  docker://4281efad34bc853a0e8a960380d8eb3b6a517c92a552011064d13f6d004aa076
    Image:         quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0
    Image ID:      docker-pullable://quay.io/kubernetes-ingress-controller/nginx-ingress-controller@sha256:39cc6ce23e5bcdf8aa78bc28bbcfe0999e449bf99fe2e8d60984b417facc5cd4
    Ports:         80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --configmap=$(POD_NAMESPACE)/nginx
      --annotations-prefix=nginx.ingress.kubernetes.io
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Mon, 22 Jan 2018 09:35:55 +0800
      Finished:     Mon, 22 Jan 2018 09:35:55 +0800
    Ready:          False
    Restart Count:  770
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-5988f7786c-rndv5 (v1:metadata.name)
      POD_NAMESPACE:  nginx-ingress (v1:metadata.namespace)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from nginx-ingress-serviceaccount-token-8xdsb (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  nginx-ingress-serviceaccount-token-8xdsb:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-serviceaccount-token-8xdsb
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.alpha.kubernetes.io/notReady:NoExecute for 300s
                 node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason      Age                   From                                               Message
  ----     ------      ----                  ----                                               -------
  Warning  FailedSync  36s (x18128 over 2d)  kubelet, gke-cluster-1-default-pool-b58e707f-dcd3  Error syncing pod
chepurko commented 6 years ago

I would start by either a) going back to k8s-lemp-v1.3.2, which targets a Kubernetes version that disables cluster RBAC by default, or b) upgrading the whole cluster to k8s-lemp-v1.4, which runs on Kubernetes 1.9.

There is an in-place cluster upgrade, but it has not once worked for me, so the supported and least disruptive way is to create a new cluster and apply the YAMLs to that one. Yes, it's a pain.

I haven't seen any issues in the default YAMLs in this repo yet, so if this doesn't work maybe I can remotely check your cluster? By the way, this isn't minikube right?

P.S. I will also note that nothing in this repo actually installs any Kubernetes version for you; it just contains YAMLs. You would follow the instructions in USAGE.md and get the release you want from the official Kubernetes repo.

Coolfeather2 commented 6 years ago

This isn't minikube, it's GKE. I will try k8s-lemp-v1.3.2 on 1.8.6 and k8s-lemp-v1.4 on 1.9.1; if neither works I'll let you remote in.

chepurko commented 6 years ago

Sorry, so that's Google Kubernetes Engine, right? Basically running the YAMLs on Google's own managed cluster?

That might be the cause of some other issues you've been having, since this is only tested on Google Compute Engine, i.e. installing a Kubernetes cluster from scratch on Google VMs. I'm afraid there are settings and configurations on GKE that this setup doesn't account for. But in any case, I can still take a look and see whether this can be applied to other setups.

Coolfeather2 commented 6 years ago

I can add you to the project; just let me know the username to add, or email me at coolfeather.o@gmail.com.

Coolfeather2 commented 6 years ago

Looks like nginx is running fine on 1.8.6, just me being terrible at this 😛

chepurko commented 6 years ago

It's a pretty complex beast, and they make a lot of config-breaking changes between versions, so I would check the official Kubernetes changelog and your cluster version before updating the YAMLs.