khuedoan / homelab

Fully automated homelab from empty disk to running services with a single command.
https://homelab.khuedoan.com
GNU General Public License v3.0
7.9k stars 705 forks

Failed to install ArgoCD #138

Closed danghung-dev closed 4 months ago

danghung-dev commented 4 months ago

Describe the bug

Failed to install ArgoCD

To reproduce

Steps to reproduce the behavior:

  1. make tools
  2. make configure
  3. make

Additional context

Install log:

TASK [cilium : Apply Cilium resources] **********************************************************************************************************************************************
task path: /Users/hung/Documents/work/projects/bonbon/devops/homelab/homelab/metal/roles/cilium/tasks/main.yml:23
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: hung
<127.0.0.1> EXEC /bin/sh -c 'echo ~hung && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/hung/.ansible/tmp `"&& mkdir "` echo /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295 `" && echo ansible-tmp-1707408488.783521-98365-160817154280295="` echo /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/9.1.0/libexec/lib/python3.12/site-packages/ansible_collections/kubernetes/core/plugins/modules/k8s.py
<127.0.0.1> PUT /Users/hung/.ansible/tmp/ansible-local-614131iscbyqa/tmp9cugfht6 TO /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295/AnsiballZ_k8s.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295/ /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/9.1.0/libexec/bin/python /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/hung/.ansible/tmp/ansible-tmp-1707408488.783521-98365-160817154280295/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item=ciliuml2announcementpolicy.yaml) => {
    "ansible_loop_var": "item",
    "changed": true,
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "continue_on_error": false,
            "definition": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "kind": "CiliumL2AnnouncementPolicy",
                    "metadata": {
                        "name": "default"
                    },
                    "spec": {
                        "externalIPs": true,
                        "loadBalancerIPs": true
                    }
                }
            ],
            "delete_options": null,
            "force": false,
            "generate_name": null,
            "host": null,
            "impersonate_groups": null,
            "impersonate_user": null,
            "kind": null,
            "kubeconfig": null,
            "label_selectors": null,
            "merge_type": null,
            "name": null,
            "namespace": null,
            "no_proxy": null,
            "password": null,
            "persist_config": null,
            "proxy": null,
            "proxy_headers": null,
            "resource_definition": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "kind": "CiliumL2AnnouncementPolicy",
                    "metadata": {
                        "name": "default"
                    },
                    "spec": {
                        "externalIPs": true,
                        "loadBalancerIPs": true
                    }
                }
            ],
            "server_side_apply": null,
            "src": null,
            "state": "present",
            "template": null,
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "item": "ciliuml2announcementpolicy.yaml",
    "method": "create",
    "result": {
        "apiVersion": "cilium.io/v2alpha1",
        "kind": "CiliumL2AnnouncementPolicy",
        "metadata": {
            "creationTimestamp": "2024-02-08T16:08:09Z",
            "generation": 1,
            "managedFields": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:spec": {
                            ".": {},
                            "f:externalIPs": {},
                            "f:loadBalancerIPs": {}
                        }
                    },
                    "manager": "OpenAPI-Generator",
                    "operation": "Update",
                    "time": "2024-02-08T16:08:09Z"
                }
            ],
            "name": "default",
            "resourceVersion": "82313",
            "uid": "7577af92-a231-415b-86ab-6c214a9ea9d8"
        },
        "spec": {
            "externalIPs": true,
            "loadBalancerIPs": true
        }
    }
}
<127.0.0.1> EXEC /bin/sh -c 'echo ~hung && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /Users/hung/.ansible/tmp `"&& mkdir "` echo /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591 `" && echo ansible-tmp-1707408489.559798-98365-190811900672591="` echo /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591 `" ) && sleep 0'
Using module file /opt/homebrew/Cellar/ansible/9.1.0/libexec/lib/python3.12/site-packages/ansible_collections/kubernetes/core/plugins/modules/k8s.py
<127.0.0.1> PUT /Users/hung/.ansible/tmp/ansible-local-614131iscbyqa/tmplw976fmr TO /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591/AnsiballZ_k8s.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591/ /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/opt/homebrew/Cellar/ansible/9.1.0/libexec/bin/python /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591/AnsiballZ_k8s.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /Users/hung/.ansible/tmp/ansible-tmp-1707408489.559798-98365-190811900672591/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => (item=ciliumloadbalancerippool.yaml) => {
    "ansible_loop_var": "item",
    "changed": true,
    "invocation": {
        "module_args": {
            "api_key": null,
            "api_version": "v1",
            "append_hash": false,
            "apply": false,
            "ca_cert": null,
            "client_cert": null,
            "client_key": null,
            "context": null,
            "continue_on_error": false,
            "definition": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "kind": "CiliumLoadBalancerIPPool",
                    "metadata": {
                        "name": "default"
                    },
                    "spec": {
                        "cidrs": [
                            {
                                "cidr": "10.86.101.238/31"
                            }
                        ]
                    }
                }
            ],
            "delete_options": null,
            "force": false,
            "generate_name": null,
            "host": null,
            "impersonate_groups": null,
            "impersonate_user": null,
            "kind": null,
            "kubeconfig": null,
            "label_selectors": null,
            "merge_type": null,
            "name": null,
            "namespace": null,
            "no_proxy": null,
            "password": null,
            "persist_config": null,
            "proxy": null,
            "proxy_headers": null,
            "resource_definition": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "kind": "CiliumLoadBalancerIPPool",
                    "metadata": {
                        "name": "default"
                    },
                    "spec": {
                        "cidrs": [
                            {
                                "cidr": "10.86.101.238/31"
                            }
                        ]
                    }
                }
            ],
            "server_side_apply": null,
            "src": null,
            "state": "present",
            "template": null,
            "username": null,
            "validate": null,
            "validate_certs": null,
            "wait": false,
            "wait_condition": null,
            "wait_sleep": 5,
            "wait_timeout": 120
        }
    },
    "item": "ciliumloadbalancerippool.yaml",
    "method": "create",
    "result": {
        "apiVersion": "cilium.io/v2alpha1",
        "kind": "CiliumLoadBalancerIPPool",
        "metadata": {
            "creationTimestamp": "2024-02-08T16:08:10Z",
            "generation": 1,
            "managedFields": [
                {
                    "apiVersion": "cilium.io/v2alpha1",
                    "fieldsType": "FieldsV1",
                    "fieldsV1": {
                        "f:spec": {
                            ".": {},
                            "f:cidrs": {},
                            "f:disabled": {}
                        }
                    },
                    "manager": "OpenAPI-Generator",
                    "operation": "Update",
                    "time": "2024-02-08T16:08:10Z"
                }
            ],
            "name": "default",
            "resourceVersion": "82315",
            "uid": "e4bc011d-1a8c-4ae1-aff9-dfc522f737f9"
        },
        "spec": {
            "cidrs": [
                {
                    "cidr": "10.86.101.238/31"
                }
            ],
            "disabled": false
        }
    }
}

PLAY RECAP **************************************************************************************************************************************************************************
localhost                  : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
metal0                     : ok=16   changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0
metal3                     : ok=10   changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

make -C bootstrap
kubectl create namespace argocd --dry-run=client --output=yaml \
        | kubectl apply -f -
namespace/argocd created
cd argocd && ./apply.sh
Error from server (NotFound): ingresses.networking.k8s.io "argocd-server" not found
error: error validating "STDIN": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applications.argoproj.io" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applicationsets.argoproj.io" not found
make[1]: *** [argocd] Error 1
make: *** [bootstrap] Error 2
khuedoan commented 4 months ago

Friendly reminder to use Markdown formatting syntax, since it's very hard to read without it. I have updated the issue to make it more readable.

danghung-dev commented 4 months ago

Thank you, I will use Markdown next time. I have already fixed the above issue by just running `make` again.

Now I have another issue: I have scanned all the code but I cannot find where the Gitea ingress is created.


There is also an error when running `apply.sh` in the root directory (`cd root && ./apply.sh`): `Error from server (NotFound): namespaces "gitea" not found`

romelBen commented 4 months ago

I am also facing the same error: `Error from server (NotFound): namespaces "gitea" not found`

pandabear41 commented 4 months ago

> I am also facing the same error: `Error from server (NotFound): namespaces "gitea" not found`

The error is not a problem. It is just kubectl checking whether the Gitea ingress or namespace exists. If it doesn't exist, the script applies `values-seed.yaml` instead of `values.yaml`. If you run the configure script and set up the seed repo, Argo CD will do its first few application deployments using your github.com repo instead of the local Gitea instance.
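The fallback described above can be sketched like this (a simulation, not the repo's actual `apply.sh`; the `values.yaml`/`values-seed.yaml` names come from the comment above, and the cluster lookup is stubbed so the logic runs without kubectl):

```shell
#!/bin/sh
# Sketch of the seed-repo fallback. In the real script the check would be
# something like: kubectl get namespace gitea >/dev/null 2>&1
# (which prints the harmless NotFound error on a fresh cluster).
namespace_exists() {
    false  # simulate the first run: the gitea namespace does not exist yet
}

if namespace_exists gitea; then
    values_file="values.yaml"       # Gitea is up: use the in-cluster repo
else
    values_file="values-seed.yaml"  # first run: fall back to the seed repo
fi
echo "Using ${values_file}"
```

So on the first run the NotFound message is expected output from the existence check, not a failure.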

khuedoan commented 4 months ago

Yup, @pandabear41 is right: it just checks for the existence of Gitea and uses the seed repo instead.

The errors you see the first time look similar to https://github.com/khuedoan/homelab/issues/102:

Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applications.argoproj.io" not found
Error from server (NotFound): customresourcedefinitions.apiextensions.k8s.io "applicationsets.argoproj.io" not found
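A possible way to avoid these transient NotFound errors on a first run, instead of re-running `make`, is to wait explicitly for the Argo CD CRDs to be established before applying anything that depends on them. This is only a sketch and not part of the repo; the real invocation would be `kubectl wait --for condition=established --timeout=120s crd/<name>`, and `kubectl` is stubbed below so the loop itself runs anywhere:

```shell
#!/bin/sh
# Hypothetical workaround sketch: block until each Argo CD CRD is established.
# Stubbed kubectl so the sketch runs without a cluster; the stub echoes the
# CRD name ($4) the way a successful `kubectl wait` would report it.
kubectl() { echo "$4 condition met"; }

for crd in applications.argoproj.io applicationsets.argoproj.io; do
    kubectl wait --for condition=established "crd/${crd}"
done
```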

Let's continue the conversation there and mark this one as a duplicate.