I have the same problem with K3S on a cloud server. k3s version v1.19.1+k3s1 (b66760fc)
After some further research I think Go 1.15 causes the issue: https://github.com/golang/go/issues/39568
There was some discussion about CN, SAN and RFCs. The reality is that Kubernetes 1.19 is built with Go 1.15, and that version no longer supports the deprecated CN field for hostname verification. This is a problem for self-signed certificates that do not include a SAN.
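For illustration only (not from this thread): a self-signed certificate that still validates under Go 1.15 has to carry its hostname in a SAN. With OpenSSL 1.1.1+ that can be done in one step via -addext; the service name below is just an example:

# sketch, assuming OpenSSL 1.1.1+; hostname is a placeholder
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout tls.key -out tls.crt \
  -subj "/CN=rio-api-validator.rio-system.svc" \
  -addext "subjectAltName=DNS:rio-api-validator.rio-system.svc"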
I found a referenced issue about linkerd: https://github.com/linkerd/linkerd2/issues/4918
So, the workaround (until this gets fixed) is to use Kubernetes 1.18; I just tested it and it works with v1.18.9+k3s1 (630bebf9) and v0.8.0-rc2 (04372696).
Perhaps we could get an update for linkerd (and, if required, cert-manager and all other components). Until then we cannot use the new etcd cluster feature in k3s (fun fact: even etcd had a problem with Go 1.15: https://github.com/NixOS/nixpkgs/issues/59364).
Hi, as @kuetemeier says, I also think it's a cert issue caused by Go 1.15. I did the following and saw the same issue.
minikube start --kubernetes-version=v1.19.1
rio install
rio run -p 80:8080 https://github.com/rancher/rio-demo
$ kubectl get secret -n rio-system rio-api-validator -o go-template='{{index .data "tls.crt"}}' | base64 -d > tls.crt
$ openssl x509 -in tls.crt -noout -text
In the output of the command above, there is no SAN among the X509v3 extensions:
X509v3 extensions:
X509v3 Key Usage: critical
Certificate Sign, CRL Sign
X509v3 Extended Key Usage:
TLS Web Server Authentication, TLS Web Client Authentication
X509v3 Basic Constraints: critical
CA:TRUE
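As an extra check (not part of the original steps, and assuming OpenSSL 1.1.1+ for the -ext option), the SAN extension can be printed directly; on a CN-only certificate like this one it comes back empty:

kubectl get secret -n rio-system rio-api-validator -o go-template='{{index .data "tls.crt"}}' \
  | base64 -d | openssl x509 -noout -ext subjectAltName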
Would creating a new cert file fix this issue?
This is fixed in v0.8.0-rc3
Can we close this ticket? The solution is to use rio v0.8.0-rc2 with k8s v1.18.x, or to upgrade to v0.8.0-rc3.
This should be fixed in https://github.com/rancher/rio/releases/tag/v0.8.0. If you still see the issue, please re-open.
still having this in 0.8.0
@boredland Can you describe your issue if you are seeing this in v0.8.0?
sure:
rio install using rio v0.8.0 on DOKS 1.19
rio dashboard fails:
time="2020-11-30T17:11:05Z" level=info msg="Rancher version dev is starting"
time="2020-11-30T17:11:05Z" level=info msg="Rancher arguments {Config:{Kubeconfig: UserKubeconfig: HTTPSPort:443 HTTPPort:80 Namespace: WebhookConfig:{WebhookAuthentication:false WebhookKubeconfig: WebhookURL: CacheTTLSeconds:0}} AuditLogPath:/var/log/auditlog/rancher-api-audit.log AuditLogMaxage:10 AuditLogMaxsize:100 AuditLogMaxbackup:10 AuditLevel:0 Features:}"
time="2020-11-30T17:11:05Z" level=info msg="Starting API controllers"
I1130 17:11:05.685587 7 leaderelection.go:241] attempting to acquire leader lease kube-system/cattle-controllers...
I1130 17:11:05.685789 7 leaderelection.go:241] attempting to acquire leader lease kube-system/cloud-controllers...
time="2020-11-30T17:11:06Z" level=info msg="Starting apiregistration.k8s.io/v1, Kind=APIService controller"
time="2020-11-30T17:11:06Z" level=info msg="Refreshing all schemas"
time="2020-11-30T17:11:06Z" level=info msg="Starting apiextensions.k8s.io/v1beta1, Kind=CustomResourceDefinition controller"
time="2020-11-30T17:11:07Z" level=info msg="Refreshing all schemas"
time="2020-11-30T17:11:07Z" level=fatal msg="unable to retrieve the complete list of server APIs: tap.linkerd.io/v1alpha1: the server is currently unable to handle the request"
rio up:
FATA[0005] failed to create dev/**** rio.cattle.io/v1, Kind=Service for dev/****: Internal error occurred: failed calling webhook "api-validator.rio.io": Post "https://rio-api-validator.rio-system.svc:443/?timeout=30s": EOF
@boredland That looks like a different issue. Can you check if linkerd is installed properly in your setup? It looks like this has caused rio-controller to crash (which also serves as the webhook server).
How do I check that?
There is a linkerd-install pod. You should be able to check the logs of that pod.
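For example (pod, job, and namespace names below are assumptions; adjust them to whatever your cluster actually shows):

# find the linkerd pods and read the install job's logs
kubectl get pods -A | grep linkerd
kubectl logs -n rio-system job/linkerd-install
# the aggregated API from the "tap.linkerd.io/v1alpha1" error can be inspected directly
kubectl get apiservice v1alpha1.tap.linkerd.io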
Perhaps I need to upgrade linkerd?
service/linkerd-identity created
deployment.apps/linkerd-identity created
service/linkerd-controller-api created
deployment.apps/linkerd-controller created
service/linkerd-dst created
deployment.apps/linkerd-destination created
cronjob.batch/linkerd-heartbeat created
service/linkerd-web created
deployment.apps/linkerd-web created
configmap/linkerd-prometheus-config created
service/linkerd-prometheus created
deployment.apps/linkerd-prometheus created
deployment.apps/linkerd-proxy-injector created
service/linkerd-proxy-injector created
service/linkerd-sp-validator created
deployment.apps/linkerd-sp-validator created
service/linkerd-tap created
deployment.apps/linkerd-tap created
configmap/linkerd-config-addons created
serviceaccount/linkerd-grafana created
configmap/linkerd-grafana-config created
service/linkerd-grafana created
deployment.apps/linkerd-grafana created
+ linkerd check
kubernetes-api
--------------
√ can initialize the client
√ can query the Kubernetes API
kubernetes-version
------------------
√ is running the minimum Kubernetes API version
√ is running the minimum kubectl version
linkerd-existence
-----------------
√ 'linkerd-config' config map exists
√ heartbeat ServiceAccount exist
√ control plane replica sets are ready
√ no unschedulable pods
√ controller pod is running
√ can initialize the client
√ can query the control plane API
linkerd-config
--------------
√ control plane Namespace exists
√ control plane ClusterRoles exist
√ control plane ClusterRoleBindings exist
√ control plane ServiceAccounts exist
√ control plane CustomResourceDefinitions exist
√ control plane MutatingWebhookConfigurations exist
√ control plane ValidatingWebhookConfigurations exist
√ control plane PodSecurityPolicies exist
linkerd-identity
----------------
√ certificate config is valid
√ trust anchors are using supported crypto algorithm
√ trust anchors are within their validity period
√ trust anchors are valid for at least 60 days
√ issuer cert is using supported crypto algorithm
√ issuer cert is within its validity period
√ issuer cert is valid for at least 60 days
√ issuer cert is issued by the trust anchor
linkerd-api
-----------
√ control plane pods are ready
√ control plane self-check
√ [kubernetes] control plane can talk to Kubernetes
√ [prometheus] control plane can talk to Prometheus
√ tap api service is running
linkerd-version
---------------
√ can determine the latest version
‼ cli is up-to-date
is running version 2.8.1 but the latest stable version is 2.9.0
see https://linkerd.io/checks/#l5d-version-cli for hints
control-plane-version
---------------------
‼ control plane is up-to-date
is running version 2.8.1 but the latest stable version is 2.9.0
see https://linkerd.io/checks/#l5d-version-control for hints
√ control plane and cli versions match
linkerd-addons
--------------
√ 'linkerd-config-addons' config map exists
linkerd-grafana
---------------
√ grafana add-on service account exists
√ grafana add-on config map exists
√ grafana pod is running
Status check results are √
+ [[ 0 -ne 0 ]]
After upgrading linkerd I'm not facing this error anymore.
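For reference, the upgrade roughly followed the linkerd 2.x docs (a sketch, not copied from this thread): install the newer CLI, re-render the control plane, and re-run the checks.

# assumes the newer linkerd CLI is already on PATH
linkerd upgrade | kubectl apply --prune -l linkerd.io/control-plane-ns=linkerd -f -
linkerd check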
Describe the bug
The documentation says rio works with minikube but we get this error running the rio-demo app:

To Reproduce

Expected behavior
rio-demo is executed without errors.

Kubernetes version & type (GKE, on-prem): kubectl version output
Type:
Rio version: rio info output
Additional context: rio system logs output