Open burkempers opened 4 years ago
Hi @burkempers, please have a look at the documentation
Yes, I also tried adding the CA file following that documentation, and that didn't work either.
Can you share a little more detail, please?
The certificate that you configure in ArgoCD with the above documentation is actually passed as the `--ca-file` parameter to Helm when the "Server name" property that you specified when adding the cert matches the host name of the repository you are trying to access. The subjects of the certificate must also match the hostname you are trying to connect to.
Can you please paste the output of `argocd cert list` and the URL of the repository you are trying to add?
```yaml
argocd:
  repositories:
    - type: helm
      name: helm-library
      repository: http://my.host/helm3-library/charts
  certificates:
    - serverName: my.host
      type: https rsa
      certInfo: hash
```
index.yaml:

```yaml
apiVersion: v1
entries:
  helm-library:
    - description: 'helm library'
      name: helm-library
      sources:
        - https://my.nexus.host/repository/helm/
      urls:
        - https://my.nexus.host/repository/helm/helm-library-0.1.0.tgz
      version: 0.1.0
```
Chart.yaml:

```yaml
dependencies:
  - name: helm-library
    version: 0.1.0
    repository: http://my.host/helm3-library/charts
```
```
Unable to create application: application spec is invalid: InvalidSpecError: Unable to generate manifests in .: rpc error: code = Unknown desc = helm dependency build failed exit status 1: Error: could not download https://my.nexus.host/repository/helm/helm-library-0.1.0.tgz: Get https://my.nexus.host/repository/helm/helm-library-0.1.0.tgz: x509: certificate signed by unknown authority
```
I think you would need the CA cert used to issue the certificate for `my.nexus.host` configured as well. When you configure the certificate for `my.host`, you can actually configure multiple PEMs combined. So, add both certificates concatenated (for `my.host` and for `my.nexus.host`) within the same entry for `my.host`.
Can you try that, please?
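The concatenation might look roughly like this (file names here are hypothetical stand-ins, and the `argocd` command assumes a logged-in CLI):

```shell
# Stand-in PEM files for illustration -- replace with your real CA chains.
printf -- '-----BEGIN CERTIFICATE-----\nmy.host CA\n-----END CERTIFICATE-----\n' > my-host-ca.pem
printf -- '-----BEGIN CERTIFICATE-----\nmy.nexus.host CA\n-----END CERTIFICATE-----\n' > my-nexus-host-ca.pem

# A single TLS certificate entry may contain several PEM blocks, so
# concatenate both chains into one bundle for the my.host entry:
cat my-host-ca.pem my-nexus-host-ca.pem > combined-ca.pem

# Then register the bundle (needs a logged-in argocd CLI, hence commented out):
# argocd cert add-tls my.host --from combined-ca.pem

grep -c 'BEGIN CERTIFICATE' combined-ca.pem   # → 2
```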
Both `my.host` and `my.nexus.host` use the same CA file. I added a `my.nexus.host` certificate with the same CA file, and I still get the x509 error.
```yaml
argocd:
  certificates:
    - serverName: my.host
      type: https rsa
      certInfo: hash
    - serverName: my.nexus.host
      type: https rsa
      certInfo: hash
```
Thanks for the info. I think this might be a bug then, I will validate in a local test environment.
FWIW: We had the same problem today (and wasted a lot of time) during an Argo CD 1.8.1 test installation:
We configured a private TLS root CA certificate in the Argo CD Web UI and then added a private helm repository to Argo CD (its server uses a TLS cert issued by this root CA).
However, Argo CD wouldn't use the root CA cert to verify the helm repo URL when we tried to create a helm application from this helm repo: "x509: certificate signed by unknown authority" errors all over the place.
The dirty hotfix to get it working at all with validation was to add the following env section to (if I remember correctly) both the `argocd-server` and the `argocd-repo-server` deployments. It forces the Go code to use our root CA cert.
```yaml
env:
  - name: SSL_CERT_FILE
    value: "/app/config/tls/<ROOTCERTNAME>"
```
However, this shows that our root CA cert was configured correctly in Argo CD and also provisioned correctly into the container - it just wasn't used by the Argo CD processes.
Thinking about it, using `SSL_CERT_DIR` probably would have been a nicer hotfix (not tested!):
```yaml
env:
  - name: SSL_CERT_DIR
    value: "/app/config/tls/"
```
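If editing the Deployments by hand is undesirable, the same env var could be applied declaratively; a sketch as a strategic-merge patch (container name taken from the default Argo CD manifests -- verify against your installation):

```yaml
# repo-server-ssl-cert-dir-patch.yaml -- apply with:
#   kubectl -n argocd patch deployment argocd-repo-server --patch-file repo-server-ssl-cert-dir-patch.yaml
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          env:
            - name: SSL_CERT_DIR
              value: "/app/config/tls/"
```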
@knweiss Thank you, I wasted a little bit less time looking for a solution ;)
I can confirm that `SSL_CERT_DIR` is working.
This solution led me down the right path. I will give some details here for anyone with the same issue. In my case, I was installing ArgoCD using the community Helm chart. During the installation, I specified `configs.tlsCerts: {}` in the values file. I then added a certificate through the ArgoCD UI, which caused an update to the `argocd-tls-certs-cm` ConfigMap. However, since I had specified the tlsCerts as empty in the Helm values file, the ConfigMap had not been mounted into the `argocd-server` or `argocd-repo-server` pods. I manually edited the Deployments to mount the ConfigMap in, and the certificate error disappeared, as expected.
Same here. ArgoCD 2.0.1. The certs are mounted on the pods but it only worked after setting the `SSL_CERT_DIR` env var.
Couldn't find a way to add an env var when using the ArgoCD Operator (https://argocd-operator.readthedocs.io/en/latest/reference/argocd/).
I'm deploying the operator with the `ArgoCD` resource kind, like this:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
```
+1, I have the same issue
same here
In the end I added our root CAs to the argocd image. Maybe it's a dirty workaround, but it works:
```dockerfile
FROM docker.io/argoproj/argocd:v2.1.2

USER 0
RUN mkdir /usr/local/share/ca-certificates/mycorp
ADD ./*.crt /usr/local/share/ca-certificates/mycorp/
RUN update-ca-certificates && apt update && apt -y install curl
USER 999
```
After having the same issue and digging around, it appears to be an issue with the bundled helm. https://github.com/helm/helm/issues/9826 addresses this problem and it was fixed in v3.6.3. Looks like ArgoCD v2.2.0 updates to a fixed version of helm and should resolve this.
When are you planning to fix this? We are affected by this issue. Our internal setup is HTTPS with Harbor as the registry. Making the repo insecure also didn't work.
Fixed 🎉 🎉 for `kind: ArgoCD`! I mean using the ArgoCD Operator or the Red Hat OpenShift GitOps Operator.
So basically, if you run `k explain argocd.spec.repo`, you will find that it supports:
- `env`
- `volumes`
- `volumeMounts`

These 3 attributes are enough to implement the `SSL_CERT_DIR` approach mentioned above. Then:
```yaml
kind: ArgoCD
spec:
  ...
  repo:
    env:
      - name: SSL_CERT_DIR
        value: /tmp/sslcertdir
    volumeMounts:
      - name: ssl
        mountPath: /tmp/sslcertdir
    volumes:
      - name: ssl
        configMap:
          name: user-ca-bundle
```
For the configmap `user-ca-bundle`, I did not need to create it from scratch as I already had it in another namespace (openshift-config), so I just duplicated it:

```shell
k -n openshift-config get cm user-ca-bundle -o yaml \
  | sed "s@openshift-config@${NAMESPACE_WHERE_ARGOCD_DEPLOYED}@g" \
  | k apply -f -
```

FYI @jorioux @iyurev @aelbarkani
On OpenShift, simply create a ConfigMap with the following content:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-ca-bundle
  labels:
    config.openshift.io/inject-trusted-cabundle: "true"
```
This way, the user-ca-bundle content from `openshift-config` automatically gets injected into this ConfigMap, even merged with the system CA bundle.
Source: https://docs.openshift.com/container-platform/4.11/operators/admin/olm-configuring-proxy-support.html#olm-inject-custom-ca_olm-configuring-proxy-support
Hi @abdennour, I am getting the x509 error in the ApplicationSet controller pod. We are also using the OpenShift operator to install ArgoCD. I am not able to add an env or volume mount section to the ApplicationSet controller. Please help.
> `config.openshift.io/inject-trusted-cabundle`
Yes @bliemli, I intentionally didn't mention that so as not to overwhelm the audience, but it's good that you mentioned it in a separate comment. I also recommend that whoever operates OCP take the DO380 course to become aware of all these tips.
Hi @sarsatis, maybe you need to verify that your CA bundle is correct. Check: `oc get proxy cluster -o yaml | grep -i trust`
FYSA: I ran into this problem again when I switched to Harbor for my helm chart repository. I had added my custom CA file to ArgoCD but was still seeing certificate errors. It turned out that Harbor backed by an S3 bucket redirects requests to the S3 URL to pull the chart, so you need the CAs for AWS in ArgoCD as well. To solve this I just mounted /etc/pki into the ArgoCD repo deployment to pick up the AWS certs from the node the pods run on.
Hope this helps if anyone else is having issues.
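The /etc/pki hostPath mount described above might look roughly like this in the repo-server Deployment (a sketch; note that a hostPath ties certificate trust to the node's OS image):

```yaml
# Fragment of the argocd-repo-server Deployment spec:
spec:
  template:
    spec:
      containers:
        - name: argocd-repo-server
          volumeMounts:
            - name: host-pki
              mountPath: /etc/pki
              readOnly: true
      volumes:
        - name: host-pki
          hostPath:
            path: /etc/pki
```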
+1 Need a clean solution for this...
I think this has been fixed with https://github.com/argoproj/argo-cd/pull/16656. Self-signed certs + helm oci registry worked for me with Argo CD v2.10.11. I used https://argo-cd.readthedocs.io/en/stable/user-guide/private-repositories/#self-signed-untrusted-tls-certificates to add the certs into Argo CD.
Summary
When adding a helm repo in ArgoCD you have two options for TLS certs, but with Helm there is also a way of passing in a CA file (`helm repo add --ca-file ~/myCa.pem [repo url]`). ArgoCD should have that as another optional text field.
Motivation
I am trying to use Sonatype Nexus for a helm library .tgz repo outside of the k8s cluster. I have an httpd pod that hosts the index.yaml file, and the paths to the .tgz files are Nexus URLs.
Running helm commands myself, I can add the repo with the CA file, and manual `helm install` commands work correctly, pulling the charts from Nexus. I have tried to mount the CA file into the ArgoCD pod, but that doesn't work. Providing the TLS certs of that server to ArgoCD doesn't work either.
Proposal
Add another optional text field, when adding a helm repo, to pass in the value of the CA file.