Currently, even if the fluentd_daemon service is disabled, the chart still tries to create a ConfigMap for it.
Because these resources are created in the kube-system namespace, this causes a resource conflict when we want to install more than one orc8r on the same Kubernetes cluster.
Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ConfigMap "orc8r-fluentd-es-configs" in namespace "kube-system" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "orc8r": current value is "orc8r-stage" deploying "orc8r": install: exit status 1
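A straightforward fix would be to guard the ConfigMap template with the same enable/create flag as the fluentd_daemon service itself, so nothing is rendered into kube-system when the service is turned off. The sketch below assumes a values flag named fluentd_daemon.create; the actual value name, template path, and ConfigMap contents in the orc8r chart may differ:

```yaml
# templates/fluentd-es-configmap.yaml (illustrative path, not the chart's actual layout)
{{- if .Values.fluentd_daemon.create }}
apiVersion: v1
kind: ConfigMap
metadata:
  # Release-prefixed name matching the one seen in the error above
  name: {{ .Release.Name }}-fluentd-es-configs
  namespace: kube-system
data:
  fluent.conf: |
    # elided: the fluentd configuration shipped by the chart
{{- end }}
```

With a guard like this, setting the flag to false (e.g. `fluentd_daemon: { create: false }` in values.yaml, or whatever the chart's actual flag is) would make Helm skip rendering the ConfigMap entirely, avoiding the ownership clash in kube-system between releases.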