Closed: HumairAK closed this 3 years ago
With this ADR, and assuming we adopt my suggestion for core resources, we end up with something like:
.
├── apiextensions.k8s.io
│   └── customresourcedefinitions
│       ├── applications.argoproj.io.yaml
│       ├── appprojects.argoproj.io.yaml
│       ├── kfdefs.kfdef.apps.kubeflow.org.yaml
│       ├── observatoria.core.observatorium.io.yaml
│       ├── prowjobs.prow.k8s.io.yaml
│       └── rayclusters.cluster.ray.io.yaml
├── config.openshift.io
│   ├── oauths
│   │   └── cluster.yaml
│   └── projects
│       └── cluster.yaml
├── core
│   ├── namespaces
│   │   ├── apicurio-apicurio-registry.yaml
│   │   ├── argocd.yaml
│   │   ├── as-pushgateway.yaml
│   │   ├── b4mad-minecraft.yaml
│   │   ├── cnv-testing.yaml
│   │   ├── codait-advo.yaml
│   │   ├── democratic-csi.yaml
│   │   ├── ds-black-flake.yaml
│   │   ├── ds-example-project.yaml
│   │   ├── ds-ml-workflows-ws.yaml
│   │   ├── fde-audio-decoder-demo.yaml
│   │   ├── hostpath-provisioner.yaml
│   │   ├── kubeflow.yaml
[...]
Some random thoughts on this topic from over the weekend; writing them here so as not to forget:
We will still need a kustomization.yaml in each directory, for several reasons:
- kustomize won't include files outside the local directory tree
- we need a kustomization.yaml to set the namespace for resources
- we need a kustomization.yaml to include components
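A minimal sketch of what such a per-directory kustomization.yaml could look like (the directory, resource names, namespace, and component path below are hypothetical, not taken from the repo):

# hypothetical base/<app>/kustomization.yaml -- illustrative only
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# kustomize only includes files from the local directory tree,
# so every resource listed here lives in or below this directory
resources:
  - resourcequota.yaml
  - limitrange.yaml

# this is also where the namespace for these resources is set
namespace: example-app

# and where shared components are pulled in
components:
  - ../../components/example-component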
Some Namespaces include custom resource quotas (e.g., base/opf-datacatalog/resourcequota.yaml). Currently, these are all named "custom". If we reorganize everything in base/ by <apiversion>/<kind>/<name>, all these resource quotas end up in the same place. With this ADR, should we:
1. Leave things as they are, with the custom resourcequota.yaml in base/core/namespaces/<name>/resourcequota.yaml?
2. Give the resource quotas unique names and put them in base/core/resourcequotas/<name>? I submitted a PR to update the names in https://github.com/operate-first/apps/pull/621 (see the sketch after this list).
3. Move these into overlays, on the assumption that different clusters may want to apply different resource quotas?
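As a sketch of option 2, a uniquely named quota under base/core/resourcequotas/ could look like this (only the opf-datacatalog name comes from the example above; the file name and quota values are illustrative):

# hypothetical base/core/resourcequotas/opf-datacatalog.yaml -- illustrative values
apiVersion: v1
kind: ResourceQuota
metadata:
  # unique name instead of the shared "custom", so quotas from different
  # namespaces can coexist under base/core/resourcequotas/
  name: opf-datacatalog-custom
  namespace: opf-datacatalog
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi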
Should we treat the image registry as a top-level application? Currently, we have a PersistentVolumeClaim in base/imageregistryconfigs/cluster, but this seems out of place (it's really not a cluster-wide resource).
> Give the resource quotas unique names and put them in base/core/resourcequotas/? I submitted a PR to update the names in operate-first/apps#621

I think I like this option over the others.
> Should we treat the image registry as a top-level application? ... (it's really not a cluster-wide resource).
Well, neither are operator groups. I think the cluster-scope app is not just cluster resources, but more like privileged resources, and resources in privileged locations. I personally don't mind having a PersistentVolumeClaim in cluster-scope for niche cases such as this. An app for 2 files seems like overkill, IMO.
I agree with @HumairAK. The cluster-scope or cluster-resources app is maybe named wrong: its purpose is to host all resources which regular project admins are not allowed to touch. OperatorGroups, Subscriptions, and ResourceQuotas are all namespaced resources we define here, because we don't want users to control those...
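For instance, a Subscription is namespaced, yet it belongs in cluster-scope precisely because it controls what gets installed on the cluster; a sketch with an entirely illustrative operator (the name, channel, source, and namespace below are assumptions, not from the repo):

# hypothetical Subscription kept in cluster-scope although it is namespaced
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: example-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: example-operator
  source: community-operators
  sourceNamespace: openshift-marketplace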
The image registry is a specific case: the Config resource is a cluster-scoped resource that is deployed on the cluster from the start, and we're just patching a few properties of it; some other spec properties are even managed by the cluster itself. It's a singleton in the whole cluster. And in our case, it requires a PVC.
If you want the image registry as a separate app, you would still need to split those 2 resources up and let the Config live in cluster-scope and the PVC in a separate application with a single resource only. That seems wrong to me. Hence I chose to include the PVC together with it in cluster-scope.
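For reference, the pairing looks roughly like this; a sketch assuming the usual openshift-image-registry namespace and a claim named image-registry-storage (the size and exact values are illustrative, not copied from the repo):

# the singleton, cluster-scoped image registry Config that we only patch
apiVersion: imageregistry.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  storage:
    pvc:
      claim: image-registry-storage   # points at the PVC below
---
# the namespaced PVC backing the registry
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-registry-storage
  namespace: openshift-image-registry
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi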
> Give the resource quotas unique names and put them in base/core/resourcequotas/? I submitted a PR to update the names in operate-first/apps#621

:+1: on the resource quotas. I think that's the best option we have.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: larsks, tumido
Resolve: https://github.com/operate-first/apps/issues/610