Closed: ccremer closed this issue 2 years ago
What I would like to see is that all AppCat service APIs have the same structure, so that you "feel at home" when working with one service or another. Some parts of a service are the same across all services: the "managed service" part.
I really like how Crossplane does this, see example in https://crossplane.io/docs/v1.7/concepts/managed-resources.html:
```yaml
[...]
spec:
  forProvider:
    region: us-east-1
    dbInstanceClass: db.t2.small
    masterUsername: masteruser
    allocatedStorage: 20
    engine: postgres
    engineVersion: "12"
    skipFinalSnapshotBeforeDeletion: true
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: aws-rdspostgresql-conn
```
Translating that to AppCat could look like this:
```yaml
apiVersion: postgresql.appcat.vshn.io/v1alpha1
kind: PostgresqlStandalone
metadata:
  name: standalone
  namespace: myNs
spec:
  backup:
    enabled: false
  monitoring:
    sla: BestEffort
  resources:
    memoryLimit: 2Gi
    size: 8Gi
  updatePolicy:
    maintenanceWindow:
      start: {}
  version:
    major: ""
  forService:
    enableSuperUser: true
    majorVersion: v14
    resources: {}
  writeConnectionSecretToRef:
    namespace: myNs
    name: postgres-connection
```
> What is `PostgresqlStandaloneConfig` meant for?
I agree with @tobru.
Also, from an implementation point of view, those common parts can be put into their own module and used as a library. That way, we can ensure that every service has exactly the same set of common fields.
> What is `PostgresqlStandaloneConfig` meant for?

I believe this is the CRD that configures the operator itself. But the naming doesn't make that completely clear. Maybe `PostgresqlStandaloneOperatorConfig` would be less confusing.
Regarding `PostgresqlStandaloneConfig`:
My idea is that the operator's configuration is defined somehow. This config is maintained by VSHNeers, most probably through Commodore, and I expect it to be different for each cloud provider.
Strictly speaking, this configuration could also be stored in a simple `ConfigMap` with a single YAML file that's read on startup, or each time an instance is reconciled (to allow changes at runtime at the cost of slightly slower performance).
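A minimal sketch of that `ConfigMap` variant could look as follows. The name, namespace, and file key are hypothetical placeholders (nothing was decided in this thread); the settings mirror the resource constraints proposed further below:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  # Hypothetical name and namespace, not agreed on in this discussion.
  name: postgresql-operator-config
  namespace: postgresql-operator-system
data:
  # Single YAML file read by the operator on startup or per reconcile.
  config.yaml: |
    resourceMaxima:
      memoryLimit: 6Gi
      storageCapacity: 500Gi
    resourceMinima:
      memoryLimit: 512Mi
      storageCapacity: 5Gi
```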
This config should be able to handle the following settings:
....and then double this config for each major PG version. For example, a new major chart version might have a different values schema, which is why I opted for a config that supports this case instead of configuring these things via CLI flags or env vars.
Do you have preferences whether these settings should be in a CRD or in a `ConfigMap`?
A CRD supports `kubectl explain` and offers some basic schema validation. I'd go for a CRD for the configuration. IMHO, the pros of the CRD outweigh the cons.
> ....and then double this config for each major PG version. For example, a new major chart version might have different values schema and this is why I opted for a config that supports this case and not configure these things via CLI flags or env vars.
Instead of duplicating it, I'd go for a map (although they are rather rare for CRDs):
```yaml
kind: PostgresqlStandaloneConfig
metadata:
  creationTimestamp: null
  name: platform-config
spec:
  resourceMaxima:
    memoryLimit: 6Gi
    storageCapacity: 500Gi
  resourceMinima:
    memoryLimit: 512Mi
    storageCapacity: 5Gi
  versionedConfig:
    v14:
      chartVersion: y.x.z
      helmValues:
        helmValue1: foo
        helmValue2: bar
    v15:
      chartVersion: a.b.c
      helmValues:
        helmValue1: bar
        helmValue2: foo
status: {}
```
If a map is too much of a hassle, we could also just use a list:
```yaml
versionedConfig:
  - version: v14
    chartVersion: y.x.z
    helmValues:
      helmValue1: foo
      helmValue2: bar
  - version: v15
    chartVersion: a.b.c
    helmValues:
      helmValue1: bar
      helmValue2: foo
```
Of course, the type of `VersionedConfig` can and will look very different for other services. But the skeleton could be re-used.
> (although they are rather rare for CRDs)

And there's a reason for that. As per the K8s API conventions, maps are discouraged and lists should be preferred: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/api-conventions.md#lists-of-named-subobjects-preferred-over-maps
> Instead of duplicating it, I'd go for a map

I had this idea in mind at first, too. But then I figured that this little duplication for the resource constraints is acceptable so that we have a simpler schema. I think it's easier to separate them.
I wouldn't think about reusability for other services just yet. We don't know what other settings other services require, also we don't even know if they're the same for Clustered instances for example.
I would also choose a CRD over a ConfigMap; it's just more sophisticated. Also, making Helm values available per version is a really great idea to avoid breaking changes and big hassles in the future. If a list is favoured by the K8s API conventions, then we should go that route. I also like @tobru's idea regarding the main CR definition. We just need to make sure we add everything necessary to the root of the spec object. For instance, we might add billing as well.
@ZuGao

> For instance, we might add billing as well.

I'm not quite sure what you mean by that, could you elaborate? Billing configuration should not be user-facing, IMHO. Although it would be great for them to simply turn things off, but not so great for us. :)
I thought about the type of billing service that the user has chosen, but yes, it was probably a wrong example from my side. Also @ccremer, I haven't seen the connection between PostgresqlStandalone and PostgresqlStandaloneConfig. Do we want a bidirectional reference, or just a unidirectional one from PostgresqlStandalone?
> I thought about the type of billing service that the user has chosen

There's nothing to choose from the user's perspective. The billing system is activated as soon as a namespace has a certain label, and that will be completely managed by the operator. All the operator has to do is copy the label from the `PostgresqlStandalone`'s namespace to the service's namespace, that's it.
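A sketch of that label copy could look like this. The label key and namespace names are hypothetical placeholders; the thread does not name the actual billing label:

```yaml
# Namespace containing the user's PostgresqlStandalone resource.
apiVersion: v1
kind: Namespace
metadata:
  name: myNs
  labels:
    # Hypothetical billing label key; the real one is managed by the platform.
    appcat.vshn.io/billing: "true"
---
# Namespace created by the operator for the service instance.
# The operator copies the billing label over verbatim.
apiVersion: v1
kind: Namespace
metadata:
  name: myns-postgresql-instance
  labels:
    appcat.vshn.io/billing: "true"
```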
> Do we want a bidirectional reference or just unidirectional from PostgresqlStandalone?

Neither. The user can't and shouldn't choose which config to use.
Implementation-wise, I thought that either we tell the operator which config to use per major version explicitly via CLI (e.g. `--v14-config=platform-config-v14`), or we use autodiscovery (list all `PostgresqlStandaloneConfig` in the operator namespace) and select the first config that matches the major version requested by the user. ~~The latter requires that the config has another major version field that sets the scope.~~ Probably a label is more efficient, since that allows us to list with a label selector.
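As a sketch of the label-based autodiscovery, a config scoped to one major version could carry a version label. The label key below is a hypothetical placeholder, no name was agreed on in this thread:

```yaml
apiVersion: postgresql.appcat.vshn.io/v1alpha1
kind: PostgresqlStandaloneConfig
metadata:
  name: platform-config-v14
  # Hypothetical operator namespace.
  namespace: postgresql-operator-system
  labels:
    # Hypothetical label key the operator would use as a list selector.
    postgresql.appcat.vshn.io/major-version: v14
```

The operator could then discover the matching config with a single labeled list call, roughly equivalent to `kubectl get postgresqlstandaloneconfigs -l postgresql.appcat.vshn.io/major-version=v14 -n postgresql-operator-system`.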
@ccremer Ah I see now, this way we don't need any lists or maps within the CRD. I'd also use a label for that.
But we could have the operator set the label from a major version field. But I guess that's an implementation detail.
Isn't the Config CR unique per customer? We will be saving Helm values, which should be different per customer/instance.
> Isn't the Config CR unique per customer?

No, the config CR is unique per running operator (aka cluster). Of course it's possible to run a different config on different clusters (or different Syn tenants, but that's outside the scope here).
> we will be saving helm values which should be different per each customer/instance.

Is it possible that we don't have the same understanding of an instance? We have defined an instance here: https://kb.vshn.ch/app-catalog/reference/glossary.html#_general (in this case, an instance is a single PostgreSQL pod). Every instance in a cluster that shares the same version will have the same set of Helm values applied.
For example: I order a service with 5G of storage and 2G of memory. The operator deploys a Helm release with the following values:

1. `helmReleaseTemplate` in `PostgresqlStandaloneConfig`
2. `helmReleases` in `PostgresqlStandaloneConfig`, if there is one defined that overrides `helmReleaseTemplate` for a specific chart version
3. `resources` in my `PostgresqlStandalone` spec (e.g. the 5G for disk and 2G for memory)
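A sketch of how those three layers could combine. The concrete value names are hypothetical (not a real chart's schema); only the 5G/2G figures come from the example above:

```yaml
# 1. Base values from helmReleaseTemplate in PostgresqlStandaloneConfig:
#      helmValue1: foo
#      persistence:
#        size: 1Gi
# 2. Version-specific overrides from helmReleases (if one matches the chart version):
#      helmValue1: bar
# 3. Values derived from the user's PostgresqlStandalone spec:
#      persistence:
#        size: 5Gi
#      resources:
#        limits:
#          memory: 2Gi
#
# Effective values passed to the Helm release after merging layers 1 -> 2 -> 3,
# where later layers win:
helmValue1: bar
persistence:
  size: 5Gi
resources:
  limits:
    memory: 2Gi
```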
Summary

- `PostgresqlStandalone` instances
- `PostgresqlStandaloneOperatorConfig` (which acts as a platform configuration)

Checklist
- Labels `bug`, `enhancement`, `documentation`, `change`, `breaking`, `dependency` as they show up in the changelog