rancher/os

Tiny Linux distro that runs the entire OS as Docker containers
https://rancher.com/docs/os/v1.x/en/
Apache License 2.0

Cannot follow official docs to install Rancher on Azure #2953

Closed: Dmitry1987 closed this issue 4 years ago

Dmitry1987 commented 4 years ago

**RancherOS Version: (ros os version)** Linux rancher-master 4.14.138-rancher #1 SMP Sat Aug 10 11:25:46 UTC 2019 x86_64 GNU/Linux

**Where are you running RancherOS?** (docker-machine, AWS, GCE, baremetal, etc.) Azure cloud.

**What's the problem?** I created a RancherOS VM (2 vCPU, 7 GB RAM) running version 1.5.4 (the latest) from the Azure Marketplace.

The following steps from the docs just don't work: https://rancher.com/docs/rancher/v2.x/en/quick-start-guide/deployment/quickstart-manual-setup/
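For reference, the manual quick start at that link amounts to starting the Rancher server container with Docker on the host. A sketch of the step as I understand it from the guide (the image and port mappings follow the guide; pinning the tag to v2.3.3 is my assumption, inferred from the version in the log below):

```sh
# Start the Rancher server container, publishing HTTP and HTTPS,
# roughly as the quick-start guide describes.
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.3.3   # tag assumed; the log below reports "Rancher version v2.3.3"
```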

Container logs are:


2020/01/12 00:41:59 [INFO] Rancher version v2.3.3 is starting
2020/01/12 00:41:59 [INFO] Rancher arguments {ACMEDomains:[] AddLocal:auto Embedded:false KubeConfig: HTTPListenPort:80 HTTPSListenPort:443 K8sMode:auto Debug:false NoCACerts:false ListenConfig:<nil> AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}
2020/01/12 00:41:59 [INFO] Listening on /tmp/log.sock
2020/01/12 00:41:59 [INFO] Running etcd --data-dir=management-state/etcd
2020-01-12 00:41:59.656427 W | pkg/flags: unrecognized environment variable ETCD_URL_arm64=https://github.com/etcd-io/etcd/releases/download/v3.3.14/etcd-v3.3.14-linux-arm64.tar.gz
2020-01-12 00:41:59.656528 W | pkg/flags: unrecognized environment variable ETCD_URL_amd64=https://github.com/etcd-io/etcd/releases/download/v3.3.14/etcd-v3.3.14-linux-amd64.tar.gz
2020-01-12 00:41:59.656541 W | pkg/flags: unrecognized environment variable ETCD_UNSUPPORTED_ARCH=amd64
2020-01-12 00:41:59.656548 W | pkg/flags: unrecognized environment variable ETCD_URL=ETCD_URL_amd64
2020-01-12 00:41:59.656576 I | etcdmain: etcd Version: 3.3.14
2020-01-12 00:41:59.656633 I | etcdmain: Git SHA: 5cf5d88a1
2020-01-12 00:41:59.656646 I | etcdmain: Go Version: go1.12.9
2020-01-12 00:41:59.656652 I | etcdmain: Go OS/Arch: linux/amd64
2020-01-12 00:41:59.656717 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-01-12 00:41:59.657637 I | embed: listening for peers on http://localhost:2380
2020-01-12 00:41:59.657928 I | embed: listening for client requests on localhost:2379
2020-01-12 00:41:59.712207 I | etcdserver: name = default
2020-01-12 00:41:59.712293 I | etcdserver: data dir = management-state/etcd
2020-01-12 00:41:59.712323 I | etcdserver: member dir = management-state/etcd/member
2020-01-12 00:41:59.712414 I | etcdserver: heartbeat = 100ms
2020-01-12 00:41:59.712524 I | etcdserver: election = 1000ms
2020-01-12 00:41:59.712637 I | etcdserver: snapshot count = 100000
2020-01-12 00:41:59.712676 I | etcdserver: advertise client URLs = http://localhost:2379
2020-01-12 00:41:59.712686 I | etcdserver: initial advertise peer URLs = http://localhost:2380
2020-01-12 00:41:59.712699 I | etcdserver: initial cluster = default=http://localhost:2380
2020-01-12 00:41:59.759104 I | etcdserver: starting member 8e9e05c52164694d in cluster cdf818194e3a8c32
2020-01-12 00:41:59.759156 I | raft: 8e9e05c52164694d became follower at term 0
2020-01-12 00:41:59.759171 I | raft: newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
2020-01-12 00:41:59.759178 I | raft: 8e9e05c52164694d became follower at term 1
2020-01-12 00:41:59.836104 W | auth: simple token is not cryptographically signed
2020-01-12 00:41:59.896714 I | etcdserver: starting server... [version: 3.3.14, cluster version: to_be_decided]
2020-01-12 00:41:59.897451 I | etcdserver: 8e9e05c52164694d as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-01-12 00:41:59.898034 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2020-01-12 00:42:00.559813 I | raft: 8e9e05c52164694d is starting a new election at term 1
2020-01-12 00:42:00.559863 I | raft: 8e9e05c52164694d became candidate at term 2
2020-01-12 00:42:00.559882 I | raft: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2
2020-01-12 00:42:00.560102 I | raft: 8e9e05c52164694d became leader at term 2
2020-01-12 00:42:00.560122 I | raft: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2
2020-01-12 00:42:00.560507 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2020-01-12 00:42:00.561049 I | etcdserver: setting up the initial cluster version to 3.3
2020-01-12 00:42:00.561260 I | embed: ready to serve client requests
2020-01-12 00:42:00.562747 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2020-01-12 00:42:00.584426 N | etcdserver/membership: set the initial cluster version to 3.3
2020-01-12 00:42:00.600072 I | etcdserver/api: enabled capabilities for version 3.3
2020/01/12 00:42:00 [INFO] Waiting for k3s to start
time="2020-01-12T00:42:00Z" level=info msg="Preparing data dir /var/lib/rancher/k3s/data/de37a675b342fcd56e57fd5707882786b0e0c840862d6ddc1e8f5c391fb424c9"
2020/01/12 00:42:01 [INFO] Waiting for k3s to start
2020/01/12 00:42:02 [INFO] Waiting for k3s to start
time="2020-01-12T00:42:02.737968293Z" level=info msg="Starting k3s v0.8.0 (f867995f)"
2020/01/12 00:42:03 [INFO] Waiting for k3s to start
time="2020-01-12T00:42:03.893758779Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --api-audiences=unknown --authorization-mode=Node,RBAC --basic-auth-file=/var/lib/rancher/k3s/server/cred/passwd --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --enable-admission-plugins=NodeRestriction --etcd-servers=http://localhost:2379 --insecure-port=0 --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=k3s --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
2020/01/12 00:42:04 [INFO] Waiting for k3s to start
E0112 00:42:05.265679      28 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.266241      28 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.266331      28 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.266410      28 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.266456      28 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.266515      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
W0112 00:42:05.468286      28 genericapiserver.go:315] Skipping API batch/v2alpha1 because it has no resources.
W0112 00:42:05.478060      28 genericapiserver.go:315] Skipping API node.k8s.io/v1alpha1 because it has no resources.
E0112 00:42:05.503968      28 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.504039      28 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.504123      28 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.504177      28 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.504208      28 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0112 00:42:05.504231      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
time="2020-01-12T00:42:05.524060575Z" level=info msg="Running kube-scheduler --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --port=10251 --secure-port=0"
time="2020-01-12T00:42:05.529011211Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --port=10252 --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=0 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
time="2020-01-12T00:42:05.634667913Z" level=info msg="Creating CRD listenerconfigs.k3s.cattle.io"
2020/01/12 00:42:05 [INFO] Waiting for k3s to start
E0112 00:42:05.651151      28 leaderelection.go:306] error retrieving resource lock kube-system/kube-controller-manager: endpoints "kube-controller-manager" is forbidden: User "system:kube-controller-manager" cannot get resource "endpoints" in API group "" in the namespace "kube-system"
W0112 00:42:05.670610      28 authorization.go:47] Authorization is disabled
W0112 00:42:05.670797      28 authentication.go:55] Authentication is disabled
E0112 00:42:05.743372      28 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0112 00:42:05.744670      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0112 00:42:05.746122      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0112 00:42:05.747589      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0112 00:42:05.747755      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0112 00:42:05.747876      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0112 00:42:05.748566      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0112 00:42:05.748708      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0112 00:42:05.749401      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0112 00:42:05.749758      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0112 00:42:05.773843      28 controller.go:147] Unable to perform initial Kubernetes service initialization: Service "kubernetes" is invalid: spec.clusterIP: Invalid value: "10.43.0.1": cannot allocate resources of type serviceipallocations at this time
time="2020-01-12T00:42:05.774505453Z" level=info msg="Creating CRD addons.k3s.cattle.io"
E0112 00:42:05.779065      28 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
time="2020-01-12T00:42:05.790056980Z" level=info msg="Creating CRD helmcharts.helm.cattle.io"
time="2020-01-12T00:42:05.841943005Z" level=info msg="Waiting for CRD helmcharts.helm.cattle.io to become available"
time="2020-01-12T00:42:06.344951815Z" level=info msg="Done waiting for CRD helmcharts.helm.cattle.io to become available"
time="2020-01-12T00:42:06.361457368Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-1.64.0.tgz"
time="2020-01-12T00:42:06.361928281Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
E0112 00:42:06.362288      28 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_depth" is not a valid metric name
E0112 00:42:06.362351      28 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_adds" is not a valid metric name
E0112 00:42:06.362397      28 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=Addon before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_queue_latency" is not a valid metric name
E0112 00:42:06.362434      28 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=Addon takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_work_duration" is not a valid metric name
E0112 00:42:06.362464      28 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=Addon has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_unfinished_work_seconds" is not a valid metric name
E0112 00:42:06.362497      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=Addon been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:06.362575      28 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=Addon: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=Addon_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=Addon", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=Addon_retries" is not a valid metric name
time="2020-01-12T00:42:06.368765268Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
time="2020-01-12T00:42:06.369046876Z" level=info msg="Listening on :6443"
E0112 00:42:06.379594      28 prometheus.go:138] failed to register depth metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_depth", help: "(Deprecated) Current depth of workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_depth" is not a valid metric name
E0112 00:42:06.379646      28 prometheus.go:150] failed to register adds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_adds", help: "(Deprecated) Total number of adds handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_adds" is not a valid metric name
E0112 00:42:06.379683      28 prometheus.go:162] failed to register latency metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency", help: "(Deprecated) How long an item stays in workqueuek3s.cattle.io/v1, Kind=ListenerConfig before being requested.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_queue_latency" is not a valid metric name
E0112 00:42:06.379753      28 prometheus.go:174] failed to register work_duration metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration", help: "(Deprecated) How long processing an item from workqueuek3s.cattle.io/v1, Kind=ListenerConfig takes.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_work_duration" is not a valid metric name
E0112 00:42:06.379914      28 prometheus.go:189] failed to register unfinished_work_seconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds", help: "(Deprecated) How many seconds of work k3s.cattle.io/v1, Kind=ListenerConfig has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_unfinished_work_seconds" is not a valid metric name
E0112 00:42:06.379948      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for k3s.cattle.io/v1, Kind=ListenerConfig been running.", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:06.380179      28 prometheus.go:214] failed to register retries metric k3s.cattle.io/v1, Kind=ListenerConfig: descriptor Desc{fqName: "k3s.cattle.io/v1, Kind=ListenerConfig_retries", help: "(Deprecated) Total number of retries handled by workqueue: k3s.cattle.io/v1, Kind=ListenerConfig", constLabels: {}, variableLabels: []} is invalid: "k3s.cattle.io/v1, Kind=ListenerConfig_retries" is not a valid metric name
2020/01/12 00:42:06 [INFO] Waiting for k3s to start
E0112 00:42:06.745480      28 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0112 00:42:06.746742      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0112 00:42:06.747974      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0112 00:42:06.749428      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0112 00:42:06.750423      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0112 00:42:06.752197      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0112 00:42:06.754479      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0112 00:42:06.754689      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0112 00:42:06.759157      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0112 00:42:06.759173      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
time="2020-01-12T00:42:06.882136260Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2020-01-12T00:42:06.982425212Z" level=info msg="Starting k3s.cattle.io/v1, Kind=ListenerConfig controller"
time="2020-01-12T00:42:06.982681119Z" level=error msg="Update cert unable to convert string to cert: Unable to split cert into two parts"
time="2020-01-12T00:42:06.983217734Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/node-token"
time="2020-01-12T00:42:06.983260835Z" level=info msg="To join node to cluster: k3s agent -s https://172.17.0.2:6443 -t ${NODE_TOKEN}"
E0112 00:42:07.013146      28 prometheus.go:138] failed to register depth metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_depth" is not a valid metric name
E0112 00:42:07.013413      28 prometheus.go:150] failed to register adds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_adds" is not a valid metric name
E0112 00:42:07.013604      28 prometheus.go:162] failed to register latency metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Node before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_queue_latency" is not a valid metric name
E0112 00:42:07.013804      28 prometheus.go:174] failed to register work_duration metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Node takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_work_duration" is not a valid metric name
E0112 00:42:07.013944      28 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Node has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.014128      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Node been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.014312      28 prometheus.go:214] failed to register retries metric /v1, Kind=Node: descriptor Desc{fqName: "/v1, Kind=Node_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Node", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Node_retries" is not a valid metric name
E0112 00:42:07.014721      28 prometheus.go:138] failed to register depth metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_depth", help: "(Deprecated) Current depth of workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_depth" is not a valid metric name
E0112 00:42:07.014858      28 prometheus.go:150] failed to register adds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_adds", help: "(Deprecated) Total number of adds handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_adds" is not a valid metric name
E0112 00:42:07.015153      28 prometheus.go:162] failed to register latency metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_queue_latency", help: "(Deprecated) How long an item stays in workqueuebatch/v1, Kind=Job before being requested.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_queue_latency" is not a valid metric name
E0112 00:42:07.015360      28 prometheus.go:174] failed to register work_duration metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_work_duration", help: "(Deprecated) How long processing an item from workqueuebatch/v1, Kind=Job takes.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_work_duration" is not a valid metric name
E0112 00:42:07.015546      28 prometheus.go:189] failed to register unfinished_work_seconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_unfinished_work_seconds", help: "(Deprecated) How many seconds of work batch/v1, Kind=Job has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.015671      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for batch/v1, Kind=Job been running.", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.015763      28 prometheus.go:214] failed to register retries metric batch/v1, Kind=Job: descriptor Desc{fqName: "batch/v1, Kind=Job_retries", help: "(Deprecated) Total number of retries handled by workqueue: batch/v1, Kind=Job", constLabels: {}, variableLabels: []} is invalid: "batch/v1, Kind=Job_retries" is not a valid metric name
E0112 00:42:07.015865      28 prometheus.go:138] failed to register depth metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_depth", help: "(Deprecated) Current depth of workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_depth" is not a valid metric name
E0112 00:42:07.015891      28 prometheus.go:150] failed to register adds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_adds", help: "(Deprecated) Total number of adds handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_adds" is not a valid metric name
E0112 00:42:07.015939      28 prometheus.go:162] failed to register latency metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_queue_latency", help: "(Deprecated) How long an item stays in workqueuehelm.cattle.io/v1, Kind=HelmChart before being requested.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_queue_latency" is not a valid metric name
E0112 00:42:07.016013      28 prometheus.go:174] failed to register work_duration metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_work_duration", help: "(Deprecated) How long processing an item from workqueuehelm.cattle.io/v1, Kind=HelmChart takes.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_work_duration" is not a valid metric name
E0112 00:42:07.016044      28 prometheus.go:189] failed to register unfinished_work_seconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds", help: "(Deprecated) How many seconds of work helm.cattle.io/v1, Kind=HelmChart has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.016068      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for helm.cattle.io/v1, Kind=HelmChart been running.", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.016108      28 prometheus.go:214] failed to register retries metric helm.cattle.io/v1, Kind=HelmChart: descriptor Desc{fqName: "helm.cattle.io/v1, Kind=HelmChart_retries", help: "(Deprecated) Total number of retries handled by workqueue: helm.cattle.io/v1, Kind=HelmChart", constLabels: {}, variableLabels: []} is invalid: "helm.cattle.io/v1, Kind=HelmChart_retries" is not a valid metric name
E0112 00:42:07.016250      28 prometheus.go:138] failed to register depth metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_depth" is not a valid metric name
E0112 00:42:07.016300      28 prometheus.go:150] failed to register adds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_adds" is not a valid metric name
E0112 00:42:07.016376      28 prometheus.go:162] failed to register latency metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Service before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_queue_latency" is not a valid metric name
E0112 00:42:07.016420      28 prometheus.go:174] failed to register work_duration metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Service takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_work_duration" is not a valid metric name
E0112 00:42:07.016449      28 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Service has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.016472      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Service been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.016510      28 prometheus.go:214] failed to register retries metric /v1, Kind=Service: descriptor Desc{fqName: "/v1, Kind=Service_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Service", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Service_retries" is not a valid metric name
E0112 00:42:07.016603      28 prometheus.go:138] failed to register depth metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_depth" is not a valid metric name
E0112 00:42:07.016641      28 prometheus.go:150] failed to register adds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_adds" is not a valid metric name
E0112 00:42:07.016692      28 prometheus.go:162] failed to register latency metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Pod before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_queue_latency" is not a valid metric name
E0112 00:42:07.016734      28 prometheus.go:174] failed to register work_duration metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Pod takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_work_duration" is not a valid metric name
E0112 00:42:07.016762      28 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Pod has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.016784      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Pod been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.016823      28 prometheus.go:214] failed to register retries metric /v1, Kind=Pod: descriptor Desc{fqName: "/v1, Kind=Pod_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Pod", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Pod_retries" is not a valid metric name
E0112 00:42:07.016924      28 prometheus.go:138] failed to register depth metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_depth", help: "(Deprecated) Current depth of workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_depth" is not a valid metric name
E0112 00:42:07.016948      28 prometheus.go:150] failed to register adds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_adds", help: "(Deprecated) Total number of adds handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_adds" is not a valid metric name
E0112 00:42:07.016993      28 prometheus.go:162] failed to register latency metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_queue_latency", help: "(Deprecated) How long an item stays in workqueue/v1, Kind=Endpoints before being requested.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_queue_latency" is not a valid metric name
E0112 00:42:07.017037      28 prometheus.go:174] failed to register work_duration metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_work_duration", help: "(Deprecated) How long processing an item from workqueue/v1, Kind=Endpoints takes.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_work_duration" is not a valid metric name
E0112 00:42:07.017111      28 prometheus.go:189] failed to register unfinished_work_seconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_unfinished_work_seconds", help: "(Deprecated) How many seconds of work /v1, Kind=Endpoints has done that is in progress and hasn't been observed by work_duration. Large values indicate stuck threads. One can deduce the number of stuck threads by observing the rate at which this increases.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_unfinished_work_seconds" is not a valid metric name
E0112 00:42:07.017135      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_longest_running_processor_microseconds", help: "(Deprecated) How many microseconds has the longest running processor for /v1, Kind=Endpoints been running.", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_longest_running_processor_microseconds" is not a valid metric name
E0112 00:42:07.017174      28 prometheus.go:214] failed to register retries metric /v1, Kind=Endpoints: descriptor Desc{fqName: "/v1, Kind=Endpoints_retries", help: "(Deprecated) Total number of retries handled by workqueue: /v1, Kind=Endpoints", constLabels: {}, variableLabels: []} is invalid: "/v1, Kind=Endpoints_retries" is not a valid metric name
time="2020-01-12T00:42:07.092109422Z" level=info msg="Wrote kubeconfig /etc/rancher/k3s/k3s.yaml"
time="2020-01-12T00:42:07.092158223Z" level=info msg="Run: k3s kubectl"
time="2020-01-12T00:42:07.092170723Z" level=info msg="k3s is up and running"
2020/01/12 00:42:07 [INFO] Running in single server mode, will not peer connections
2020/01/12 00:42:07 [INFO] Creating CRD authconfigs.management.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD apps.project.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD catalogs.management.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD apprevisions.project.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD pipelineexecutions.project.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD catalogtemplates.management.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD catalogtemplateversions.management.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD pipelinesettings.project.cattle.io
E0112 00:42:07.748471      28 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0112 00:42:07.749216      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0112 00:42:07.754571      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0112 00:42:07.754811      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0112 00:42:07.755240      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0112 00:42:07.756323      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0112 00:42:07.758721      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0112 00:42:07.760747      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0112 00:42:07.761558      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2020/01/12 00:42:07 [INFO] Creating CRD clusteralerts.management.cattle.io
2020/01/12 00:42:07 [INFO] Creating CRD pipelines.project.cattle.io
E0112 00:42:07.766536      28 reflector.go:126] k8s.io/client-go/informers/factory.go:130: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2020/01/12 00:42:07 [INFO] Creating CRD clusteralertgroups.management.cattle.io
time="2020-01-12T00:42:08.019292355Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
2020/01/12 00:42:08 [INFO] Creating CRD sourcecodecredentials.project.cattle.io
2020/01/12 00:42:08 [INFO] Creating CRD clustercatalogs.management.cattle.io
2020/01/12 00:42:08 [INFO] Creating CRD sourcecodeproviderconfigs.project.cattle.io
time="2020-01-12T00:42:08.520285391Z" level=info msg="Starting batch/v1, Kind=Job controller"
2020/01/12 00:42:08 [INFO] Creating CRD clusterloggings.management.cattle.io
2020/01/12 00:42:08 [INFO] Creating CRD sourcecoderepositories.project.cattle.io
2020/01/12 00:42:09 [INFO] Creating CRD clusteralertrules.management.cattle.io
time="2020-01-12T00:42:09.225710527Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2020-01-12T00:42:09.326093877Z" level=info msg="Starting /v1, Kind=Node controller"
time="2020-01-12T00:42:09.426274722Z" level=info msg="Starting /v1, Kind=Service controller"
2020/01/12 00:42:09 [INFO] Creating CRD clustermonitorgraphs.management.cattle.io
time="2020-01-12T00:42:09.526437166Z" level=info msg="Starting /v1, Kind=Pod controller"
W0112 00:42:09.585745      28 lease.go:222] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
2020/01/12 00:42:09 [INFO] Creating CRD clusterregistrationtokens.management.cattle.io
2020/01/12 00:42:10 [INFO] Creating CRD clusterroletemplatebindings.management.cattle.io
2020/01/12 00:42:10 [INFO] Creating CRD clusterscans.management.cattle.io
2020/01/12 00:42:10 [INFO] Creating CRD clusters.management.cattle.io
2020/01/12 00:42:10 [INFO] Creating CRD composeconfigs.management.cattle.io
2020/01/12 00:42:10 [INFO] Creating CRD dynamicschemas.management.cattle.io
2020/01/12 00:42:11 [INFO] Creating CRD etcdbackups.management.cattle.io
2020/01/12 00:42:11 [INFO] Creating CRD features.management.cattle.io
2020/01/12 00:42:11 [INFO] Creating CRD globalrolebindings.management.cattle.io
E0112 00:42:11.664981      28 resource_quota_controller.go:171] initial monitor sync has error: [couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterscans": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterscans", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterregistrationtokens": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustermonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustercatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecoderepositories": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelinesettings": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelineexecutions": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "management.cattle.io/v3, Resource=etcdbackups": unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource "project.cattle.io/v3, Resource=apps": unable to monitor quota for resource "project.cattle.io/v3, Resource=apps", couldn't start monitor for resource "project.cattle.io/v3, Resource=apprevisions": unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplateversions": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelines": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodecredentials": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings"]
2020/01/12 00:42:11 [INFO] Creating CRD globalroles.management.cattle.io
2020/01/12 00:42:11 [INFO] Creating CRD groupmembers.management.cattle.io
2020/01/12 00:42:12 [INFO] Creating CRD groups.management.cattle.io
2020/01/12 00:42:12 [INFO] Creating CRD kontainerdrivers.management.cattle.io
2020/01/12 00:42:12 [INFO] Creating CRD listenconfigs.management.cattle.io
2020/01/12 00:42:12 [INFO] Creating CRD multiclusterapps.management.cattle.io
2020/01/12 00:42:12 [INFO] Creating CRD multiclusterapprevisions.management.cattle.io
2020/01/12 00:42:13 [INFO] Creating CRD monitormetrics.management.cattle.io
2020/01/12 00:42:13 [INFO] Creating CRD nodedrivers.management.cattle.io
2020/01/12 00:42:13 [INFO] Creating CRD nodepools.management.cattle.io
2020/01/12 00:42:13 [INFO] Creating CRD nodetemplates.management.cattle.io
2020/01/12 00:42:13 [INFO] Creating CRD nodes.management.cattle.io
2020/01/12 00:42:14 [INFO] Creating CRD notifiers.management.cattle.io
2020/01/12 00:42:14 [INFO] Creating CRD podsecuritypolicytemplateprojectbindings.management.cattle.io
2020/01/12 00:42:14 [INFO] Creating CRD podsecuritypolicytemplates.management.cattle.io
2020/01/12 00:42:14 [INFO] Creating CRD preferences.management.cattle.io
2020/01/12 00:42:14 [INFO] Creating CRD projectalerts.management.cattle.io
2020/01/12 00:42:15 [INFO] Creating CRD projectalertgroups.management.cattle.io
2020/01/12 00:42:15 [INFO] Creating CRD projectcatalogs.management.cattle.io
2020/01/12 00:42:15 [INFO] Creating CRD projectloggings.management.cattle.io
2020/01/12 00:42:15 [INFO] Creating CRD projectalertrules.management.cattle.io
2020/01/12 00:42:15 [INFO] Creating CRD projectmonitorgraphs.management.cattle.io
2020/01/12 00:42:16 [INFO] Creating CRD projectnetworkpolicies.management.cattle.io
2020/01/12 00:42:16 [INFO] Creating CRD projectroletemplatebindings.management.cattle.io
2020/01/12 00:42:16 [INFO] Creating CRD projects.management.cattle.io
2020/01/12 00:42:16 [INFO] Creating CRD rkek8ssystemimages.management.cattle.io
2020/01/12 00:42:16 [INFO] Creating CRD rkek8sserviceoptions.management.cattle.io
2020/01/12 00:42:17 [INFO] Creating CRD rkeaddons.management.cattle.io
2020/01/12 00:42:17 [INFO] Creating CRD roletemplates.management.cattle.io
2020/01/12 00:42:17 [INFO] Creating CRD settings.management.cattle.io
2020/01/12 00:42:17 [INFO] Creating CRD templates.management.cattle.io
2020/01/12 00:42:17 [INFO] Creating CRD templateversions.management.cattle.io
2020/01/12 00:42:18 [INFO] Creating CRD templatecontents.management.cattle.io
2020/01/12 00:42:18 [INFO] Creating CRD tokens.management.cattle.io
2020/01/12 00:42:18 [INFO] Creating CRD userattributes.management.cattle.io
2020/01/12 00:42:18 [INFO] Creating CRD users.management.cattle.io
2020/01/12 00:42:18 [INFO] Creating CRD globaldnses.management.cattle.io
2020/01/12 00:42:19 [INFO] Creating CRD globaldnsproviders.management.cattle.io
2020/01/12 00:42:19 [INFO] Creating CRD clustertemplates.management.cattle.io
2020/01/12 00:42:19 [INFO] Creating CRD clustertemplaterevisions.management.cattle.io
W0112 00:42:21.695408      28 controllermanager.go:445] Skipping "root-ca-cert-publisher"
2020/01/12 00:42:21 [INFO] Starting API controllers
2020/01/12 00:42:22 http: TLS handshake error from 127.0.0.1:42060: EOF
2020/01/12 00:42:22 http: TLS handshake error from 127.0.0.1:42058: EOF
2020/01/12 00:42:22 http: TLS handshake error from 127.0.0.1:42066: EOF
2020/01/12 00:42:22 http: TLS handshake error from 127.0.0.1:42064: EOF
2020/01/12 00:42:22 http: TLS handshake error from 127.0.0.1:42068: EOF
2020-01-12 00:42:24.272676 W | wal: sync duration of 1.314833232s, expected less than 1s
2020-01-12 00:42:24.304254 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:536" took too long (392.878874ms) to execute
2020-01-12 00:42:24.304767 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:554" took too long (144.655529ms) to execute
2020-01-12 00:42:24.305089 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:468" took too long (1.043363553s) to execute
2020-01-12 00:42:24.305344 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:279" took too long (1.347379217s) to execute
2020-01-12 00:42:25.257165 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:0 size:5" took too long (873.244917ms) to execute
2020/01/12 00:42:25 [INFO] Starting catalog controller
2020/01/12 00:42:25 [INFO] Starting project-level catalog controller
2020/01/12 00:42:25 [INFO] Starting cluster-level catalog controller
2020/01/12 00:42:25 [INFO] Starting management controllers
E0112 00:42:25.848304      28 prometheus.go:138] failed to register depth metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.848366      28 prometheus.go:150] failed to register adds metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.848931      28 prometheus.go:162] failed to register latency metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.849236      28 prometheus.go:174] failed to register work_duration metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.849433      28 prometheus.go:189] failed to register unfinished_work_seconds metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.849704      28 prometheus.go:202] failed to register longest_running_processor_microseconds metric certificate: duplicate metrics collector registration attempted
E0112 00:42:25.850097      28 prometheus.go:214] failed to register retries metric certificate: duplicate metrics collector registration attempted
2020/01/12 00:42:25 [INFO] Listening on :443
2020/01/12 00:42:25 [INFO] Listening on :80
2020/01/12 00:42:25 [INFO] Reconciling GlobalRoles
2020/01/12 00:42:25 [INFO] Creating authn-manage
2020/01/12 00:42:25 [INFO] Creating podsecuritypolicytemplates-manage
2020/01/12 00:42:25 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-authn-manage for corresponding GlobalRole
2020/01/12 00:42:25 [INFO] Creating user
2020/01/12 00:42:25 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-podsecuritypolicytemplates-manage for corresponding GlobalRole
2020/01/12 00:42:25 [INFO] Creating nodedrivers-manage
2020/01/12 00:42:25 [INFO] Creating catalogs-use
2020/01/12 00:42:25 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-user for corresponding GlobalRole
2020/01/12 00:42:25 [INFO] Creating settings-manage
2020/01/12 00:42:25 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-nodedrivers-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-catalogs-use for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating clustertemplates-create
2020/01/12 00:42:26 [INFO] Creating admin
2020/01/12 00:42:26 [INFO] Creating user-base
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-settings-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-clustertemplates-create for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating kontainerdrivers-manage
W0112 00:42:26.088912      28 probe.go:268] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-admin for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-user-base for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating catalogs-manage
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-kontainerdrivers-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating roles-manage
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-catalogs-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating clusters-create
2020/01/12 00:42:26 [INFO] Creating users-manage
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-roles-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-clusters-create for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating features-manage
2020/01/12 00:42:26 [INFO] Reconciling RoleTemplates
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-users-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating projects-view
2020/01/12 00:42:26 [INFO] [mgmt-auth-gr-controller] Creating clusterRole cattle-globalrole-features-manage for corresponding GlobalRole
2020/01/12 00:42:26 [INFO] Creating persistentvolumeclaims-manage
E0112 00:42:26.323322      28 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodecredentials": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelineexecutions": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource "management.cattle.io/v3, Resource=monitormetrics": unable to monitor quota for resource "management.cattle.io/v3, Resource=monitormetrics", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecoderepositories": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapps": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapps", couldn't start monitor for resource "project.cattle.io/v3, Resource=apprevisions": unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterregistrationtokens": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterscans": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterscans", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapprevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapprevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=etcdbackups": unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings", couldn't start monitor for 
resource "management.cattle.io/v3, Resource=clustermonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource "project.cattle.io/v3, Resource=apps": unable to monitor quota for resource "project.cattle.io/v3, Resource=apps", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelinesettings": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelines": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustercatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplateversions": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts"]
2020/01/12 00:42:26 [INFO] Creating project-member
2020/01/12 00:42:26 [INFO] Creating workloads-manage
2020/01/12 00:42:26 [INFO] Creating workloads-view
2020/01/12 00:42:26 [INFO] Creating services-view
2020/01/12 00:42:26 [INFO] Creating serviceaccounts-view
E0112 00:42:26.869854      28 clusterroleaggregation_controller.go:180] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
2020/01/12 00:42:26 [INFO] Creating cluster-admin
2020/01/12 00:42:26 [INFO] Creating projects-create
E0112 00:42:26.917705      28 clusterroleaggregation_controller.go:180] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
W0112 00:42:26.919448      28 reflector.go:289] k8s.io/kubernetes/pkg/controller/garbagecollector/graph_builder.go:124: watch of <nil> ended with: too old resource version: 624 (627)
2020/01/12 00:42:26 [INFO] Creating clusterroletemplatebindings-manage
2020/01/12 00:42:26 [INFO] Creating projectroletemplatebindings-view
2020/01/12 00:42:27 [INFO] Creating projectcatalogs-manage
2020/01/12 00:42:27 [INFO] Creating clusterscans-manage
2020/01/12 00:42:27 [INFO] Creating project-owner
2020/01/12 00:42:27 [INFO] Creating persistentvolumeclaims-view
2020/01/12 00:42:27 [INFO] Creating edit
2020/01/12 00:42:27 [INFO] Creating nodes-manage
2020/01/12 00:42:27 [INFO] Creating clusterroletemplatebindings-view
2020/01/12 00:42:27 [INFO] Creating configmaps-view
2020/01/12 00:42:27 [INFO] Creating projectroletemplatebindings-manage
2020/01/12 00:42:27 [INFO] Creating cluster-owner
2020/01/12 00:42:27 [INFO] Creating services-manage
2020/01/12 00:42:27 [INFO] Creating secrets-manage
2020/01/12 00:42:27 [INFO] Creating view
2020/01/12 00:42:27 [INFO] Creating serviceaccounts-manage
2020/01/12 00:42:27 [INFO] Creating backups-manage
2020/01/12 00:42:27 [INFO] Creating read-only
2020/01/12 00:42:27 [INFO] Creating ingress-manage
2020/01/12 00:42:27 [INFO] Creating project-monitoring-readonly
2020/01/12 00:42:27 [INFO] Creating cluster-member
2020/01/12 00:42:27 [INFO] Creating storage-manage
2020/01/12 00:42:27 [INFO] Creating clustercatalogs-manage
2020/01/12 00:42:27 [INFO] Creating clustercatalogs-view
2020/01/12 00:42:27 [INFO] Creating create-ns
2020/01/12 00:42:27 [INFO] Creating configmaps-manage
2020/01/12 00:42:27 [INFO] Creating secrets-view
2020/01/12 00:42:27 [INFO] Creating projectcatalogs-view
2020/01/12 00:42:27 [INFO] driverMetadata: refresh data
2020/01/12 00:42:27 [INFO] Creating admin
2020/01/12 00:42:27 [INFO] Creating nodes-view
2020/01/12 00:42:27 [ERROR] SettingController rke-metadata-config [rke-metadata-handler] failed with : namespaces "cattle-global-data" not found
2020/01/12 00:42:27 [INFO] Creating ingress-view
2020/01/12 00:42:28 [INFO] Created default admin user and binding
2020/01/12 00:42:28 [INFO] Creating new GlobalRoleBinding for GlobalRoleBinding globalrolebinding-7bd54
2020/01/12 00:42:28 [INFO] [mgmt-auth-grb-controller] Creating clusterRoleBinding for globalRoleBinding globalrolebinding-7bd54 for user user-t5q7j with role cattle-globalrole-admin
2020/01/12 00:42:28 [INFO] adding kontainer driver rancherKubernetesEngine
2020/01/12 00:42:28 [INFO] adding kontainer driver googleKubernetesEngine
2020/01/12 00:42:28 [INFO] adding kontainer driver azureKubernetesService
2020/01/12 00:42:28 [INFO] create kontainerdriver rancherkubernetesengine
2020/01/12 00:42:28 [INFO] create kontainerdriver googlekubernetesengine
2020/01/12 00:42:28 [INFO] adding kontainer driver amazonElasticContainerService
2020/01/12 00:42:28 [INFO] adding kontainer driver baiducloudcontainerengine
2020/01/12 00:42:28 [INFO] create kontainerdriver rancherkubernetesengine
2020/01/12 00:42:28 [INFO] create kontainerdriver azurekubernetesservice
2020/01/12 00:42:28 [INFO] create kontainerdriver googlekubernetesengine
2020/01/12 00:42:28 [INFO] update kontainerdriver rancherkubernetesengine
2020/01/12 00:42:28 [INFO] adding kontainer driver aliyunkubernetescontainerservice
2020/01/12 00:42:28 [INFO] create kontainerdriver amazonelasticcontainerservice
2020/01/12 00:42:28 [INFO] update kontainerdriver azurekubernetesservice
2020/01/12 00:42:28 [INFO] create kontainerdriver baiducloudcontainerengine
2020/01/12 00:42:28 [INFO] adding kontainer driver tencentkubernetesengine
2020/01/12 00:42:28 [INFO] update kontainerdriver googlekubernetesengine
2020/01/12 00:42:28 [INFO] create kontainerdriver aliyunkubernetescontainerservice
2020/01/12 00:42:28 [INFO] create kontainerdriver amazonelasticcontainerservice
2020/01/12 00:42:28 [INFO] adding kontainer driver huaweicontainercloudengine
2020/01/12 00:42:28 [INFO] create kontainerdriver baiducloudcontainerengine
2020/01/12 00:42:28 [INFO] create kontainerdriver tencentkubernetesengine
2020/01/12 00:42:28 [INFO] update kontainerdriver amazonelasticcontainerservice
2020/01/12 00:42:28 [INFO] Created cattle-global-nt namespace
2020/01/12 00:42:28 [INFO] Creating node driver pinganyunecs
2020/01/12 00:42:28 [INFO] update kontainerdriver baiducloudcontainerengine
2020/01/12 00:42:28 [INFO] create kontainerdriver huaweicontainercloudengine
2020/01/12 00:42:28 [INFO] create kontainerdriver aliyunkubernetescontainerservice
2020/01/12 00:42:28 [INFO] Creating node driver aliyunecs
2020/01/12 00:42:28 [INFO] update kontainerdriver huaweicontainercloudengine
2020/01/12 00:42:28 [INFO] Creating node driver amazonec2
2020/01/12 00:42:28 [INFO] create kontainerdriver tencentkubernetesengine
2020/01/12 00:42:28 [INFO] update kontainerdriver aliyunkubernetescontainerservice
2020/01/12 00:42:28 [INFO] Creating node driver azure
2020/01/12 00:42:28 [INFO] Creating node driver cloudca
2020/01/12 00:42:28 [INFO] Creating node driver digitalocean
2020/01/12 00:42:28 [INFO] update kontainerdriver tencentkubernetesengine
2020/01/12 00:42:29 [INFO] Creating node driver exoscale
2020/01/12 00:42:29 [INFO] Creating node driver linode
2020/01/12 00:42:29 [INFO] Creating node driver openstack
2020/01/12 00:42:29 [INFO] Creating node driver otc
2020/01/12 00:42:29 [INFO] Creating node driver packet
2020/01/12 00:42:29 [INFO] Creating node driver rackspace
2020/01/12 00:42:29 [INFO] Creating node driver softlayer
2020/01/12 00:42:29 [INFO] Creating node driver vmwarevsphere
2020/01/12 00:42:29 [INFO] Rancher startup complete
2020/01/12 00:42:29 [INFO] update kontainerdriver rancherkubernetesengine
2020/01/12 00:42:29 [INFO] update kontainerdriver baiducloudcontainerengine
2020/01/12 00:42:29 [INFO] update kontainerdriver aliyunkubernetescontainerservice
2020/01/12 00:42:29 [INFO] update kontainerdriver tencentkubernetesengine
2020/01/12 00:42:29 [INFO] uploading amazonec2Config to nodeconfig schema
2020/01/12 00:42:29 [INFO] Updating global catalog system-library
2020/01/12 00:42:29 [INFO] uploading amazonec2Config to nodetemplateconfig schema
2020/01/12 00:42:29 [INFO] uploading amazonec2credentialConfig to credentialconfig schema
2020/01/12 00:42:29 [INFO] uploading azureConfig to nodeconfig schema
2020/01/12 00:42:29 [INFO] uploading azureConfig to nodetemplateconfig schema
2020/01/12 00:42:29 [INFO] uploading digitaloceanConfig to nodeconfig schema
2020/01/12 00:42:29 [INFO] uploading digitaloceanConfig to nodetemplateconfig schema
2020/01/12 00:42:29 [INFO] uploading linodeConfig to nodeconfig schema
2020/01/12 00:42:29 [INFO] uploading linodeConfig to nodetemplateconfig schema
2020/01/12 00:42:29 [INFO] uploading vmwarevsphereConfig to nodeconfig schema
2020/01/12 00:42:30 [INFO] uploading vmwarevsphereConfig to nodetemplateconfig schema
2020/01/12 00:42:30 [INFO] uploading azurecredentialConfig to credentialconfig schema
2020/01/12 00:42:30 [INFO] Catalog sync done. 3 templates created, 0 templates updated, 0 templates deleted
2020/01/12 00:42:30 [INFO] uploading digitaloceancredentialConfig to credentialconfig schema
2020/01/12 00:42:30 [INFO] uploading linodecredentialConfig to credentialconfig schema
2020/01/12 00:42:30 [INFO] uploading vmwarevspherecredentialConfig to credentialconfig schema
2020-01-12 00:42:30.350787 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/nodedrivers/azure\" " with result "range_response_count:1 size:1196" took too long (107.613613ms) to execute
2020-01-12 00:42:30.351322 W | etcdserver: read-only range request "key:\"/registry/management.cattle.io/nodedrivers/packet\" " with result "range_response_count:1 size:1050" took too long (113.38207ms) to execute
2020/01/12 00:42:30 [INFO] driverMetadata: refresh data
2020/01/12 00:42:30 [INFO] Updating global catalog library
2020/01/12 00:42:31 [INFO] Updating global catalog system-library
2020/01/12 00:42:32 [INFO] Catalog sync done. 2 templates created, 3 templates updated, 0 templates deleted
2020/01/12 00:42:33 [INFO] driverMetadata initialized successfully
2020/01/12 00:42:33 [INFO] kontainerdriver azurekubernetesservice listening on address 127.0.0.1:44879
2020/01/12 00:42:33 [INFO] kontainerdriver azurekubernetesservice stopped
2020/01/12 00:42:33 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:35439
2020/01/12 00:42:33 [INFO] kontainerdriver googlekubernetesengine stopped
2020/01/12 00:42:33 [INFO] update kontainerdriver azurekubernetesservice
2020/01/12 00:42:33 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:38473
2020/01/12 00:42:33 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/01/12 00:42:33 [ERROR] KontainerDriverController googlekubernetesengine [mgmt-kontainer-driver-lifecycle] failed with : dynamicschemas.management.cattle.io "cluster" already exists
2020/01/12 00:42:33 [INFO] update kontainerdriver googlekubernetesengine
2020/01/12 00:42:34 [INFO] update kontainerdriver amazonelasticcontainerservice
2020-01-12 00:42:35.366431 W | wal: sync duration of 1.031103149s, expected less than 1s
2020-01-12 00:42:37.198575 W | wal: sync duration of 1.831834939s, expected less than 1s
2020-01-12 00:42:37.198893 W | etcdserver: request "header:<ID:7587843642380703377 > txn:<compare:<target:MOD key:\"/registry/management.cattle.io/catalogtemplateversions/cattle-global-data/library-istio-1.0.1\" mod_revision:0 > success:<request_put:<key:\"/registry/management.cattle.io/catalogtemplateversions/cattle-global-data/library-istio-1.0.1\" value_size:628 >> failure:<>>" with result "size:16" took too long (1.832087446s) to execute
2020-01-12 00:42:37.199179 W | etcdserver: failed to revoke 694d6f97334ed225 ("lease not found")
2020-01-12 00:42:37.199298 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:536" took too long (2.259640191s) to execute
2020-01-12 00:42:37.201187 W | etcdserver: failed to revoke 694d6f97334ed225 ("lease not found")
2020-01-12 00:42:37.202062 W | etcdserver: failed to revoke 694d6f97334ed225 ("lease not found")
2020-01-12 00:42:37.202217 W | etcdserver: failed to revoke 694d6f97334ed225 ("lease not found")
2020-01-12 00:42:37.210450 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:468" took too long (2.146929146s) to execute
2020-01-12 00:42:37.210580 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:499" took too long (1.557747338s) to execute
2020-01-12 00:42:37.210747 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:554" took too long (2.146877544s) to execute
2020/01/12 00:42:38 [INFO] kontainerdriver azurekubernetesservice listening on address 127.0.0.1:38361
2020/01/12 00:42:38 [INFO] kontainerdriver azurekubernetesservice stopped
2020/01/12 00:42:38 [INFO] dynamic schema for kontainerdriver azurekubernetesservice updating
2020/01/12 00:42:38 [INFO] Catalog sync done. 43 templates created, 0 templates updated, 0 templates deleted
2020/01/12 00:42:38 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:46391
2020/01/12 00:42:38 [INFO] kontainerdriver googlekubernetesengine stopped
2020/01/12 00:42:38 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
2020/01/12 00:42:39 [INFO] update kontainerdriver googlekubernetesengine
2020/01/12 00:42:39 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:33969
2020/01/12 00:42:39 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/01/12 00:42:39 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
2020/01/12 00:42:39 [INFO] update kontainerdriver amazonelasticcontainerservice
2020/01/12 00:42:44 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:46259
2020/01/12 00:42:44 [INFO] kontainerdriver googlekubernetesengine stopped
2020/01/12 00:42:44 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
2020/01/12 00:42:44 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:46289
2020/01/12 00:42:44 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/01/12 00:42:44 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
2020/01/12 00:42:44 [INFO] update kontainerdriver googlekubernetesengine
2020/01/12 00:42:44 [INFO] update kontainerdriver amazonelasticcontainerservice
2020/01/12 00:42:49 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:36891
2020/01/12 00:42:49 [INFO] kontainerdriver googlekubernetesengine stopped
2020/01/12 00:42:49 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
2020/01/12 00:42:49 [INFO] kontainerdriver amazonelasticcontainerservice listening on address 127.0.0.1:35237
2020/01/12 00:42:49 [INFO] kontainerdriver amazonelasticcontainerservice stopped
2020/01/12 00:42:49 [INFO] dynamic schema for kontainerdriver amazonelasticcontainerservice updating
2020/01/12 00:42:49 [INFO] update kontainerdriver googlekubernetesengine
2020/01/12 00:42:54 [INFO] kontainerdriver googlekubernetesengine listening on address 127.0.0.1:38915
2020/01/12 00:42:54 [INFO] kontainerdriver googlekubernetesengine stopped
2020/01/12 00:42:54 [INFO] dynamic schema for kontainerdriver googlekubernetesengine updating
2020-01-12 00:42:55.526402 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-controller-manager\" " with result "range_response_count:1 size:554" took too long (167.057363ms) to execute
2020-01-12 00:42:55.527020 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/cattle-controllers\" " with result "range_response_count:1 size:499" took too long (130.147877ms) to execute
2020-01-12 00:42:55.527473 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/kube-scheduler\" " with result "range_response_count:1 size:536" took too long (259.421431ms) to execute
2020-01-12 00:42:56.573775 W | etcdserver: request "header:<ID:7587843642380703594 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/k3s\" mod_revision:1316 > success:<request_put:<key:\"/registry/configmaps/kube-system/k3s\" value_size:410 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/k3s\" > >>" with result "size:16" took too long (706.611072ms) to execute
E0112 00:42:56.626705      28 resource_quota_controller.go:437] failed to sync resource monitors: [couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterregistrationtokens": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterregistrationtokens", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustertemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=notifiers": unable to monitor quota for resource "management.cattle.io/v3, Resource=notifiers", couldn't start monitor for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=podsecuritypolicytemplateprojectbindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplates", couldn't start monitor for resource "helm.cattle.io/v1, Resource=helmcharts": unable to monitor quota for resource "helm.cattle.io/v1, Resource=helmcharts", couldn't start monitor for resource "management.cattle.io/v3, Resource=catalogtemplateversions": unable to monitor quota for resource "management.cattle.io/v3, Resource=catalogtemplateversions", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnsproviders": unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnsproviders", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectnetworkpolicies": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectnetworkpolicies", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecoderepositories": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecoderepositories", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelinesettings": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelinesettings", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertrules": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertrules", couldn't start monitor for resource "project.cattle.io/v3, Resource=apprevisions": unable to monitor quota for resource "project.cattle.io/v3, Resource=apprevisions", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelineexecutions": unable to monitor quota for resource "project.cattle.io/v3, Resource=pipelineexecutions", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8sserviceoptions", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapps": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapps", couldn't start monitor for resource "project.cattle.io/v3, Resource=pipelines": unable to monitor quota for resource 
"project.cattle.io/v3, Resource=pipelines", couldn't start monitor for resource "management.cattle.io/v3, Resource=multiclusterapprevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=multiclusterapprevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=monitormetrics": unable to monitor quota for resource "management.cattle.io/v3, Resource=monitormetrics", couldn't start monitor for resource "management.cattle.io/v3, Resource=preferences": unable to monitor quota for resource "management.cattle.io/v3, Resource=preferences", couldn't start monitor for resource "management.cattle.io/v3, Resource=etcdbackups": unable to monitor quota for resource "management.cattle.io/v3, Resource=etcdbackups", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=listenerconfigs": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=listenerconfigs", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralerts", couldn't start monitor for resource "management.cattle.io/v3, Resource=projects": unable to monitor quota for resource "management.cattle.io/v3, Resource=projects", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodepools": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodepools", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodecredentials": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodecredentials", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectmonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectmonitorgraphs", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustermonitorgraphs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustermonitorgraphs", couldn't start monitor for resource "project.cattle.io/v3, Resource=apps": unable to monitor quota for resource "project.cattle.io/v3, Resource=apps", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectalerts": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectalerts", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkeaddons": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkeaddons", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertgroups": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusteralertgroups", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustertemplaterevisions": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustertemplaterevisions", couldn't start monitor for resource "management.cattle.io/v3, Resource=rkek8ssystemimages": unable to monitor quota for resource "management.cattle.io/v3, Resource=rkek8ssystemimages", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusteralertrules": unable to monitor quota for resource 
"management.cattle.io/v3, Resource=clusteralertrules", couldn't start monitor for resource "management.cattle.io/v3, Resource=globaldnses": unable to monitor quota for resource "management.cattle.io/v3, Resource=globaldnses", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodetemplates": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodetemplates", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectloggings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectloggings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clusterscans": unable to monitor quota for resource "management.cattle.io/v3, Resource=clusterscans", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectroletemplatebindings": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectroletemplatebindings", couldn't start monitor for resource "management.cattle.io/v3, Resource=clustercatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=clustercatalogs", couldn't start monitor for resource "management.cattle.io/v3, Resource=projectcatalogs": unable to monitor quota for resource "management.cattle.io/v3, Resource=projectcatalogs", couldn't start monitor for resource "k3s.cattle.io/v1, Resource=addons": unable to monitor quota for resource "k3s.cattle.io/v1, Resource=addons", couldn't start monitor for resource "management.cattle.io/v3, Resource=nodes": unable to monitor quota for resource "management.cattle.io/v3, Resource=nodes", couldn't start monitor for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs": unable to monitor quota for resource "project.cattle.io/v3, Resource=sourcecodeproviderconfigs"]
2020/01/12 00:43:39 [INFO] Catalog-cache removed 2 entries from disk
Dmitry1987 commented 4 years ago

(attached screenshot)

lol sorry, I just can't hold it 🤣 ... I'm back after my failed Rancher POC in 2016-2017, where we all had a ton of issues (we being the K8s Slack channel community, who tried Rancher/Kops/DCOS/Swarm/Nomad at the time while picking a vendor for our company prod environments), giving it another chance, only to have it fail to load during the official "Quick start" documentation steps? 🤣 It gives a real "Great success!" impression (c) Borat.

niusmallnan commented 4 years ago

It worked for me; what exactly is your problem? Note that you are currently using Rancher 2, not Rancher 1.

Dmitry1987 commented 4 years ago

@niusmallnan was this product built to work only for you, or for others as well? :)

The problem was:

  1. I follow the official documentation steps to launch RancherOS on Azure.
  2. I boot it, then proceed to the Quick Start steps in the same docs.
  3. I use the `docker run` command exactly as listed in the documentation (reproduced below for reference).
  4. It throws a bunch of errors in the log, as seen above, and Rancher doesn't load.
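
For reference, the manual quick start at the time boiled down to a single container launch along these lines (a sketch of what the docs describe; pinning the image tag to the v2.3.3 seen in the logs above is my own assumption, the docs use `rancher/rancher:latest`):

```
# Single-node Rancher server install, as in the Rancher 2.x quick start guide
# (exposes the UI/API on ports 80 and 443 of the host)
sudo docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  rancher/rancher:v2.3.3
```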

You can think of it as a bug report. I understand you were unable to reproduce it, but that doesn't mean it didn't happen :)

niusmallnan commented 4 years ago

@Dmitry1987 I ran RancherOS v1.5.4 on Azure and ran Rancher 2 on that RancherOS instance. I can access the UI on ports 80 and 443.

Apart from these error logs, can you show the specific symptom you are seeing?
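
For example, something along these lines run on the RancherOS host would show whether the container is up and whether the UI answers at all (just a sketch; adjust names and ports to your setup):

```
# Is the rancher/rancher container actually running?
docker ps --filter ancestor=rancher/rancher

# Probe the UI on the host itself; -k skips verification of the self-signed cert
curl -k -I https://localhost/
```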

Dmitry1987 commented 4 years ago

@niusmallnan you mean this one? (attached screenshot)

The thing just didn't run; what can I do... but never mind, I'm already trying it on Ubuntu. For some reason it just didn't run on RancherOS. I'm closing the issue.