ovh / cds

Enterprise-Grade Continuous Delivery & DevOps Automation Open Source Platform
https://ovh.github.io/cds/
BSD 3-Clause "New" or "Revised" License

CA certificate error in Kubernetes integration with CDS #5360

Open shansclensky opened 4 years ago

shansclensky commented 4 years ago

Hi All,

I am trying to integrate Kubernetes with CDS and I am running into an authentication issue. The steps I followed are below. I took the CA certificate value from "/var/lib/rancher/k3s/server/tls" and added it only on the client side. One doubt I have: since the integration asks for a CA bundle, are there two different certificates involved in the authentication? I just want to confirm this. I have also set the environment variable "SSL_CERT_DIR" in my .bashrc, pointing it at "/etc/ssl/certs", but that did not help. Is there any specific configuration I am missing apart from the steps I have mentioned, or am I doing something wrong?

Integration configuration: (screenshot)

Error message: (screenshot)

CA certificate picking location: (screenshot)
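
One sanity check that might help before touching CDS: use the chosen CA file to validate the API server endpoint directly. This is only a sketch; <k3s-server> and port 6443 are placeholders for your own API server address, and an HTTP 401 response is fine here since it still means the TLS validation succeeded:

# Sketch: verify that the chosen CA actually validates the k3s API server certificate.
curl --cacert /var/lib/rancher/k3s/server/tls/server-ca.crt https://<k3s-server>:6443/version

# Or inspect the handshake directly with openssl:
openssl s_client -connect <k3s-server>:6443 \
    -CAfile /var/lib/rancher/k3s/server/tls/server-ca.crt </dev/null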

nevetS commented 3 years ago

The kubernetes api server ca.crt is automounted in pods at /var/run/secrets/kubernetes.io/serviceaccount/ca.crt

I don't know Rancher, so I'm not sure whether the same certificate lives in /var/lib/rancher/k3s/server/tls
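
If it helps, one way to check is to compare the automounted CA with the file on the k3s node; the pod name below is a placeholder, and this assumes you run it on the node where that path exists:

# Sketch: compare the CA automounted into pods with the k3s CA file on the node.
# <any-pod> is a placeholder for any running pod in the cluster.
kubectl exec <any-pod> -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt > /tmp/sa-ca.crt
diff /tmp/sa-ca.crt /var/lib/rancher/k3s/server/tls/server-ca.crt && echo "same CA"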

I faced a similar problem recently with a Kubernetes hatchery. I used a shell script to create a kubeconfig file and referenced that kubeconfig in the container running the hatchery. It's not the same issue, but maybe it can help you think about it. Looking at what you posted, I think you need server-ca.crt, since that appears to be the certificate the api-server presents to CDS when it executes commands.

here's my shell script:

#!/usr/bin/env sh

CAPEM=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# CAPEMDATA=$(grep -v CERTIFICATE /var/run/secrets/kubernetes.io/serviceaccount/ca.crt | tr -d '\n')
DIR_APP=${CDS_DIR_APP:-/app}
TMPBIN=${CDS_BIN:-cds-engine-linux-amd64}
CDSBIN="${DIR_APP}/${TMPBIN}"
CDSCONF=${CDS_CONF:-/app/conf/conf.toml}
CDSWORKERNAMESPACE=${CDS_WORKER_NAMESPACE:-cds}
URL_KHATCHERY=${CDS_URL_KHATCHERY:-http://cds-khatchery:8086}
KHATCHERY_PORT=${CDS_KHATCHERY_PORT:-8086}
APP_USER=${CDS_USER:-cds}
# exported so the kubectl calls below write to this path rather than the default location
export KUBECONFIG=/home/"${APP_USER}"/.kube/config
# CREATE kubeconfig from service account
kubectl config set-cluster thiscluster \
    --server=https://kubernetes.default \
    --certificate-authority="${CAPEM}"
# kubectl config set clusters.thiscluster.certificate-authority-data "${CAPEMDATA}"
kubectl config set-credentials "<serviceaccountuser>" --token="${TOKEN}"
kubectl config set-context thiscontext --cluster=thiscluster
kubectl config set-context thiscontext --user="<serviceaccountuser>"
kubectl config use-context thiscontext
"${CDSBIN}" config edit "${CDSCONF}" --output "${CDSCONF}" \
        hatchery.kubernetes.kubernetesMasterURL="https://kubernetes.default" \
        hatchery.kubernetes.namespace="${CDSWORKERNAMESPACE}" \
        hatchery.kubernetes.commonConfiguration.url="${URL_KHATCHERY}" \
        hatchery.kubernetes.commonConfiguration.http.port="${KHATCHERY_PORT}" \
        hatchery.kubernetes.commonConfiguration.http.url="http://cds-hatchery:8086" \
        hatchery.kubernetes.kubernetesConfigFile="${KUBECONFIG}"
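
For context, this runs as the container entrypoint just before the engine starts, so the kubeconfig is in place when the hatchery boots. The sketch below is how it could be wired up, assuming the script above is saved as /app/create-kubeconfig.sh and the engine is started with the hatchery:kubernetes service name; adjust the paths to your own image:

#!/usr/bin/env sh
# Sketch of a container entrypoint: build the kubeconfig, then start the hatchery.
/app/create-kubeconfig.sh
exec "${CDS_DIR_APP:-/app}/${CDS_BIN:-cds-engine-linux-amd64}" \
    start hatchery:kubernetes --config "${CDS_CONF:-/app/conf/conf.toml}"
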
nevetS commented 3 years ago

And a quick follow-up: don't forget to associate permissions with the account you are using in the namespace you expect to deploy to. That will be the next error (it was for me, anyway :D).
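
For anyone hitting that next error, here is a rough illustration of what granting those permissions can look like. The namespace, service account name, and the exact verbs/resources are assumptions and may need adjusting for your hatchery:

# Sketch: let a service account manage worker pods in the "cds" namespace.
# Names here are assumptions; adjust to your own namespace and service account.
kubectl create role cds-hatchery-role --namespace cds \
    --verb=get --verb=list --verb=watch --verb=create --verb=delete \
    --resource=pods --resource=pods/log --resource=secrets --resource=services
kubectl create rolebinding cds-hatchery-binding --namespace cds \
    --role=cds-hatchery-role \
    --serviceaccount=cds:cds-hatchery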