citananda closed this issue 4 years ago
Can you check linkerd logs? It is possible that you don't have sufficient privileges to launch those containers.
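A couple of standard checks for the privilege theory (the pod name is taken from the logs below; the kubernetes.io/psp annotation is only set when a PodSecurityPolicy admitted the pod):
# events in the pod description usually show PSP or permission denials explicitly
kubectl -n linkerd describe pod linkerd-controller-78f7b7ff7c-qwhpk
# which PodSecurityPolicy (if any) admitted the pod
kubectl -n linkerd get pod linkerd-controller-78f7b7ff7c-qwhpk -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'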
I moved the project's pod security policy from unrestricted to none.
Here are the logs for each container:
rio --namespace linkerd logs -a
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › destination
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-proxy
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › public-api
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-init
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: linkerd-proxy
: container "linkerd-proxy" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: linkerd-init
: container "linkerd-init" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: public-api
: container "public-api" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: destination
: container "destination" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › destination
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-proxy
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › public-api
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-init
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: public-api
: container "public-api" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: linkerd-init
: container "linkerd-init" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: destination
: container "destination" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: linkerd-proxy
: container "linkerd-proxy" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › destination
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-proxy
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › public-api
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: public-api
: container "public-api" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: destination
: container "destination" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
Error opening stream to linkerd/linkerd-controller-78f7b7ff7c-qwhpk: linkerd-proxy
: container "linkerd-proxy" in pod "linkerd-controller-78f7b7ff7c-qwhpk" is waiting to start: PodInitializing
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › destination
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › public-api
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:27Z" level=info msg="running version stable-2.6.1"
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:28Z" level=info msg="Using cluster domain: cluster.local"
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:28Z" level=info msg="waiting for caches to sync"
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:28Z" level=info msg="caches synced"
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:28Z" level=info msg="starting admin server on :9995"
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:36:28Z" level=info msg="starting HTTP server on :8085"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:36:30Z" level=info msg="running version stable-2.6.1"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:36:30Z" level=info msg="waiting for caches to sync"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:36:30Z" level=info msg="caches synced"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:36:30Z" level=info msg="starting admin server on :9996"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:36:30Z" level=info msg="starting gRPC server on :8086"
- linkerd linkerd-controller-78f7b7ff7c-qwhpk
+ linkerd linkerd-controller-78f7b7ff7c-qwhpk › linkerd-proxy
linkerd-controller-78f7b7ff7c-qwhpk linkerd-proxy time="2020-04-17T07:36:35Z" level=info msg="running version stable-2.6.1"
linkerd-controller-78f7b7ff7c-qwhpk linkerd-proxy time="2020-04-17T07:36:35Z" level=info msg="Using with pre-existing key: /var/run/linkerd/identity/end-entity/key.p8"
linkerd-controller-78f7b7ff7c-qwhpk linkerd-proxy time="2020-04-17T07:36:35Z" level=info msg="Using with pre-existing CSR: /var/run/linkerd/identity/end-entity/key.p8"
linkerd-controller-78f7b7ff7c-qwhpk linkerd-proxy Invalid configuration: invalid environment variable
linkerd-controller-78f7b7ff7c-qwhpk public-api time="2020-04-17T07:37:05Z" level=info msg="shutting down HTTP server on :8085"
linkerd-controller-78f7b7ff7c-qwhpk destination time="2020-04-17T07:37:08Z" level=info msg="shutting down gRPC server on :8086"
I don't know if this message is related to the bug: linkerd-proxy Invalid configuration: invalid environment variable
Each time, the HTTP server on :8085 and the gRPC server on :8086 shut down shortly after starting.
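Not part of the original thread, but the proxy is configured largely through LINKERD2_PROXY_* environment variables, so one way to chase "Invalid configuration: invalid environment variable" is to dump the env of the proxy container (pod name taken from the logs above):
# print the environment variables injected into the linkerd-proxy container
kubectl -n linkerd get pod linkerd-controller-78f7b7ff7c-qwhpk -o jsonpath='{.spec.containers[?(@.name=="linkerd-proxy")].env}'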
I found something that may be interesting: kubectl get apiservice
Everything is fine except this line:
v1alpha1.tap.linkerd.io linkerd/linkerd-tap False (MissingEndpoints) 7d16h
When I dig deeper: kubectl get apiservice v1alpha1.tap.linkerd.io -o yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiregistration.k8s.io/v1","kind":"APIService","metadata":{"annotations":{},"labels":{"linkerd.io/control-plane-component":"tap","linkerd.io/control-plane-ns":"linkerd"},"name":"v1alpha1.tap.linkerd.io"},"spec":{"caBundle":"XXX","group":"tap.linkerd.io","groupPriorityMinimum":1000,"service":{"name":"linkerd-tap","namespace":"linkerd"},"version":"v1alpha1","versionPriority":100}}
  creationTimestamp: "2020-04-09T17:32:06Z"
  labels:
    linkerd.io/control-plane-component: tap
    linkerd.io/control-plane-ns: linkerd
  name: v1alpha1.tap.linkerd.io
  resourceVersion: "10124425"
  selfLink: /apis/apiregistration.k8s.io/v1/apiservices/v1alpha1.tap.linkerd.io
  uid: 9e9c1478-87fc-4787-b587-861261e4bf66
spec:
  caBundle: XXX
  group: tap.linkerd.io
  groupPriorityMinimum: 1000
  service:
    name: linkerd-tap
    namespace: linkerd
    port: 443
  version: v1alpha1
  versionPriority: 100
status:
  conditions:
  - lastTransitionTime: "2020-04-16T08:27:33Z"
    message: endpoints for service/linkerd-tap in "linkerd" have no addresses with port name "apiserver"
    reason: MissingEndpoints
    status: "False"
    type: Available
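Not part of the original output, but the MissingEndpoints condition can be cross-checked directly; the service name and label below come from the APIService spec above:
# the Service must expose a port named "apiserver" and the Endpoints must have ready addresses for it
kubectl -n linkerd get svc linkerd-tap -o yaml
kubectl -n linkerd get endpoints linkerd-tap -o yaml
# the backing tap pods must be Running and Ready
kubectl -n linkerd get pods -l linkerd.io/control-plane-component=tap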
Finally, I uninstalled and reinstalled Rio, and now things are better but not fully working.
kubectl get apiservice
NAME SERVICE AVAILABLE AGE
v1. Local True 25d
v1.admin.rio.cattle.io Local True 69m
v1.admissionregistration.k8s.io Local True 25d
v1.apiextensions.k8s.io Local True 25d
v1.apps Local True 25d
v1.authentication.k8s.io Local True 25d
v1.authorization.k8s.io Local True 25d
v1.autoscaling Local True 25d
v1.batch Local True 25d
v1.coordination.k8s.io Local True 25d
v1.crd.projectcalico.org Local True 3h35m
v1.enterprise.gloo.solo.io Local True 68m
v1.gateway.solo.io Local True 68m
v1.gitwatcher.cattle.io Local True 69m
v1.gloo.solo.io Local True 68m
v1.monitoring.coreos.com Local True 13d
v1.networking.k8s.io Local True 25d
v1.rbac.authorization.k8s.io Local True 25d
v1.rio.cattle.io Local True 69m
v1.scheduling.k8s.io Local True 25d
v1.storage.k8s.io Local True 25d
v1alpha1.authentication.istio.io Local True 13d
v1alpha1.caching.internal.knative.dev Local True 67m
v1alpha1.linkerd.io Local True 68m
v1alpha1.rbac.istio.io Local True 3h35m
v1alpha1.split.smi-spec.io Local True 68m
v1alpha1.tap.linkerd.io linkerd/linkerd-tap True 68m
v1alpha1.tekton.dev Local True 67m
v1alpha2.acme.cert-manager.io Local True 68m
v1alpha2.cert-manager.io Local True 68m
v1alpha2.config.istio.io Local True 30h
v1alpha2.linkerd.io Local True 68m
v1alpha3.networking.istio.io Local True 14d
v1beta1.admissionregistration.k8s.io Local True 25d
v1beta1.apiextensions.k8s.io Local True 25d
v1beta1.authentication.k8s.io Local True 25d
v1beta1.authorization.k8s.io Local True 25d
v1beta1.batch Local True 25d
v1beta1.certificates.k8s.io Local True 25d
v1beta1.coordination.k8s.io Local True 25d
v1beta1.discovery.k8s.io Local True 25d
v1beta1.events.k8s.io Local True 25d
v1beta1.extensions Local True 25d
v1beta1.metrics.k8s.io kube-system/metrics-server True 25d
v1beta1.networking.k8s.io Local True 25d
v1beta1.node.k8s.io Local True 25d
v1beta1.policy Local True 25d
v1beta1.rbac.authorization.k8s.io Local True 25d
v1beta1.scheduling.k8s.io Local True 25d
v1beta1.security.istio.io Local True 30h
v1beta1.storage.k8s.io Local True 25d
v2beta1.autoscaling Local True 25d
v2beta2.autoscaling Local True 25d
v3.cluster.cattle.io Local True 13d
v3.management.cattle.io Local True 30h
Now, when I run rio run -name my-app-cicd --build-clone-secret gitcredential-ssh --build-branch master --build-dockerfile Dockerfile.prod --build-image-name my-project/my-app:2.10 ssh://user@my.gitlab:my-project/my-app.git
Here is the result:
+ my-project-cicd my-app-cicd-v0794bm-62004-4c11d-pod-fd25a2 › step-build-and-push
+ my-project-cicd my-app-cicd-v0794bm-62004-4c11d-pod-fd25a2 › step-git-source-source-fs7mq
my-app-cicd-v0794bm-62004-4c11d-pod-fd25a2 step-git-source-source-fs7mq {"level":"warn","ts":1587133986.679937,"logger":"fallback-logger","caller":"logging/config.go:69","msg":"Fetch GitHub commit ID from kodata failed: \"ref: refs/heads/master\" is not a valid GitHub commit ID"}
my-app-cicd-v0794bm-62004-4c11d-pod-fd25a2 step-git-source-source-fs7mq {"level":"info","ts":1587133987.196259,"logger":"fallback-logger","caller":"git/git.go:103","msg":"Successfully cloned ssh://user@my.gitlab:my-project/my-app.git @ 2183508b9dfc47f71b9ccde82256b44b7b6f3bb9 in path /workspace/source"}
my-app-cicd-v0794bm-62004-4c11d-pod-fd25a2 step-build-and-push error: failed to get status: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: error while dialing: dial tcp 10.43.144.230:8080: i/o timeout"
Before this problem, this command was working.
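Not in the original report, but one way to chase the dial timeout would be to find out which Service owns 10.43.144.230 (presumably a ClusterIP of a Rio build/registry component; that is an assumption) and whether it actually has endpoints:
kubectl get svc --all-namespaces -o wide | grep 10.43.144.230
# then, once the owning Service is identified, check that it has ready endpoints
kubectl -n <namespace> get endpoints <service-name>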
I am closing this issue because I uninstalled and reinstalled Rio.
Describe the bug
All containers in the linkerd namespace are in status CrashLoopBackOff.
To Reproduce
I don't know exactly when it happens, but everything (except that) is fine on my cluster.
Expected behavior
Status Running.
Kubernetes version & type (GKE, on-prem):
kubectl version
Type: Rio version:
rio info
Additional context
rio system logs
output: kubectl get pod -n linkerd
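Not part of the original template, but for pods stuck in CrashLoopBackOff the previous-attempt logs are usually the most useful output to attach (pod and container names are taken from the logs above and are only illustrative):
kubectl -n linkerd get pods
# logs from the last crashed run of the proxy container
kubectl -n linkerd logs linkerd-controller-78f7b7ff7c-qwhpk -c linkerd-proxy --previous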