**Closed** · scubbo closed this issue 12 months ago
Generally, the control plane should not be self-hosting workloads. It might make more sense to remove the host cluster as a target from the application template. Alternatively, we could have multiple catalog entries: one for Backstage to reference its host cluster, and one for that cluster as a workload target. There's nothing to prevent these being the same thing.
This repository is moving to https://github.com/back-stack/showcase
I have copied the issue over there: https://github.com/back-stack/showcase/issues/1
In `backstage/catalog/resources/hostcluster.yaml`, the Host cluster is named `hostcluster`. However, when it's registered with Argo, it is named `in-cluster`. This leads to a discrepancy when creating an application to run on the cluster itself (i.e. without creating "spoke" clusters), since line 65 of `backstage/catalog/templates/application/template.yaml` extracts the `name` of the Catalog-resource cluster as the `cluster_id` - resulting in an Argo Application trying to deploy to `hostcluster` when Argo only knows about a cluster named `in-cluster`.
I'm not sure how to resolve this. I was able to make a successful application deployment by manually changing Argo's view of the name of the cluster to `hostcluster`, to match the name in the Backstage catalog, but I'd rather find a way to do this from GitOps configuration than by manual action. Some ideas:

- When I tried renaming the cluster in `hostcluster.yaml` to `in-cluster` (so that it would match what Argo calls the "self" cluster), Backstage became inaccessible - I got `502 Bad Gateway` when accessing https://backstage-7f000001.nip.io, despite the pods, service, ingress etc. all looking healthy.
- The `backstage/catalog/templates/application/template.yaml` file could be updated with some (pseudo-code) `if (.name == 'hostcluster') then {'in-cluster'} else {.name}` - but, as my pseudo-code probably makes clear, I have no idea how to accomplish that.
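For the second idea, the pseudo-code could be expressed as a Nunjucks inline `if`, which Backstage Scaffolder templates support inside `${{ }}` expressions. A minimal sketch, assuming the template exposes the cluster name as `parameters.cluster` and passes rendered values to a skeleton (the step id, action, and parameter names are assumptions, not taken from the repo):

```yaml
# Hypothetical fragment of backstage/catalog/templates/application/template.yaml
steps:
  - id: fetch
    name: Render application manifests
    action: fetch:template
    input:
      url: ./skeleton
      values:
        # Map the Backstage catalog name `hostcluster` to the name
        # Argo CD actually uses for the host cluster.
        cluster_id: ${{ 'in-cluster' if parameters.cluster == 'hostcluster' else parameters.cluster }}
```

Alternatively, Argo CD supports declarative cluster registration via a Secret labeled `argocd.argoproj.io/secret-type: cluster`, so renaming Argo's view of the host cluster to `hostcluster` could itself live in Git rather than be a manual action.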