We describe here some isolated scenarios that we might have for the AMP installation CRD (not the Apicast standalone one). In these sample scenarios, each one is independent of the others, so the structures might be incompatible between them. This should help us know what possibilities are available to organize the CRDs.
Scenario where a standard AMP deploy is desired, without any need for configurability
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
status:
```
The kind would be `AMP`, and the apiVersion would be `amp.3scale.net/v1alpha1`.
Having an "empty" AMP CRD would deploy a default AMP version of the product and would deploy a "standard" AMP scenario; that is, what's currently defined in the AMP template.
A standard scenario basically deploys all the AMP subsystems (apicast, backend, system, zync and their datastores). The resources that are currently deployed in the standard template are:
```
configmap/apicast-environment
configmap/backend-environment
configmap/mysql-extra-conf
configmap/mysql-main-conf
configmap/redis-config
configmap/smtp
configmap/system
configmap/system-environment
deploymentconfig.apps.openshift.io/apicast-production
deploymentconfig.apps.openshift.io/apicast-staging
deploymentconfig.apps.openshift.io/apicast-wildcard-router
deploymentconfig.apps.openshift.io/backend-cron
deploymentconfig.apps.openshift.io/backend-listener
deploymentconfig.apps.openshift.io/backend-redis
deploymentconfig.apps.openshift.io/backend-worker
deploymentconfig.apps.openshift.io/system-app
deploymentconfig.apps.openshift.io/system-memcache
deploymentconfig.apps.openshift.io/system-mysql
deploymentconfig.apps.openshift.io/system-redis
deploymentconfig.apps.openshift.io/system-sidekiq
deploymentconfig.apps.openshift.io/system-sphinx
deploymentconfig.apps.openshift.io/zync
deploymentconfig.apps.openshift.io/zync-database
imagestream.image.openshift.io/amp-apicast
imagestream.image.openshift.io/amp-backend
imagestream.image.openshift.io/amp-system
imagestream.image.openshift.io/amp-wildcard-router
imagestream.image.openshift.io/amp-zync
imagestream.image.openshift.io/postgresql
persistentvolumeclaim/backend-redis-storage
persistentvolumeclaim/mysql-storage
persistentvolumeclaim/system-redis-storage
persistentvolumeclaim/system-storage
route.route.openshift.io/api-apicast-production
route.route.openshift.io/api-apicast-staging
route.route.openshift.io/apicast-wildcard-router
route.route.openshift.io/backend
route.route.openshift.io/system-developer
route.route.openshift.io/system-master
route.route.openshift.io/system-provider-admin
secret/apicast-redis
secret/backend-internal-api
secret/backend-listener
secret/backend-redis
secret/system-app
secret/system-database
secret/system-events-hook
secret/system-master-apicast
secret/system-memcache
secret/system-recaptcha
secret/system-redis
secret/system-seed
secret/zync
service/apicast-production
service/apicast-staging
service/apicast-wildcard-router
service/backend-listener
service/backend-redis
service/system-developer
service/system-master
service/system-memcache
service/system-mysql
service/system-provider
service/system-redis
service/system-sphinx
service/zync
service/zync-database
```
Secrets would be automatically created, and the passwords in the secrets would be automatically generated too. The secrets would be automatically added to the `spec` section of the deployed CRD (TODO: to be defined where the secrets would appear).
In case some secret was already created before the deploy of the operator, the behaviour would be to create only the fields that do not exist in it. Existing secrets would not be recreated, nor would their existing fields be overwritten.
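A minimal sketch of that merge behaviour, assuming a user pre-creates part of the `system-seed` secret (the key name and value here are illustrative, not a defined contract):

```yaml
# Hypothetical pre-created secret: the operator would keep ADMIN_PASSWORD
# as-is and only generate the fields that are missing from the secret.
apiVersion: v1
kind: Secret
metadata:
  name: system-seed
stringData:
  ADMIN_PASSWORD: my-chosen-password   # illustrative key and value
```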
ImageStreams would gather images from a default docker registry. In this scenario it is assumed that the images already exist in the docker registry before the deploy.
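For reference, an ImageStream tag pointing at an external registry looks roughly like this (the registry URL and tag value are illustrative):

```yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: amp-system
spec:
  tags:
    - name: "2.4"   # illustrative version tag
      from:
        kind: DockerImage
        name: registry.example.com/3scale/system:2.4   # illustrative registry URL
```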
The elements to deploy would be all the elements that form a standard AMP deployment.
For Apicast standalone, the kind would be `Apicast` and the apiVersion would be `apicast.3scale.net/v1alpha1`. That is, we would separate the products by apiVersion and not by Kind. A product might have more than one CRD (more than one Kind).
Scenario where a standard AMP deploy is desired, having the ability to change the AMP version
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: <version-string>
status:
```
The `version` field would control the AMP release to deploy (maybe better to name it `release` to not cause confusion?). An AMP release number would NOT have any relationship with the docker image version numbers that form that release.
Changing the `version` field would trigger a redeploy of components in an ordered way.
TODO: there are lots of scenarios based on this that should be tackled (upgrade, downgrade, upgrade with breaking changes, upgrade without breaking changes, ...)
Scenario where a specific AMP version is specified in the CRD and, for some reason, one or more images want to be overridden. For example, to test a specific image for a subsystem, development images, etc.
The idea is that if an image field is specified, it would override the image originally implied by the `version` field.
Have a centralized map named `images` where the image URLs can be overridden:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: <version-string>
  images:
    system: <image-url-string>
    apicast: <image-url-string>
    backend: <image-url-string>
    memcached: <image-url-string>
    postgresql: <image-url-string>
    mysql: <image-url-string>
status:
```
Characteristics:
Having this has the consequence of a CRD structured around grouping concepts in generic maps, instead of having subsystem maps (`system`, `backend`, ... sections).
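As an illustration of this approach (the image URL is illustrative), overriding only the system image while the rest keep the images implied by `version`:

```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: "2.4"                           # illustrative release
  images:
    system: quay.io/example/system:my-dev  # only this image is overridden
```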
Have different maps, one for each subsystem, where the image URLs can be overridden:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: <version-string>
  apicast:
    image: <image-url-string>
  backend:
    image: <image-url-string>
  memcached:
    image: <image-url-string>
  mysql/oracle:
    image: <image-url-string>
  postgresql:
    image: <image-url-string>
  redis:
    image: <image-url-string>
  router:
    image: <image-url-string>
  system:
    image: <image-url-string>
  zync:
    image: <image-url-string>
status:
```
Characteristics:
Another alternative is to also nest the datastore images under the subsystems that use them:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: <version-string>
  apicast:
    image: <image-url-string>
  backend:
    redis-image: <image-url-string>
    backend-image: <image-url-string>
  memcached:
    image: <image-url-string>
  router:
    image: <image-url-string>
  system:
    redis-image: <image-url-string>
    mysql-image/oracle-image: <image-url-string>
    memcached-image: <image-url-string>
  zync:
    image: <image-url-string>
    postgresql-image: <image-url-string>
status:
```
Characteristics:
Currently the AMP template has ImageStreams, and each ImageStream has two tags: `latest` and `<version>`.
When builds from source code are desired, a BuildConfig is created for the ImageStream, and the BuildConfig is configured to output its results to the `latest` tag of the corresponding ImageStream.
The idea is that the Operator would not control this. If that is desired, we will require the user of the operator to manually configure the BuildConfig.
The problem with this approach is that the Operator logic might encounter unexpected changes in the DeploymentConfig status when the BuildConfigs are created, because this would trigger a redeploy of the DeploymentConfigs.
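For illustration, a manually created BuildConfig of this kind would look roughly like this (the repository URL and build strategy are illustrative):

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: amp-system-from-source   # illustrative name
spec:
  source:
    git:
      uri: https://github.com/example/system.git   # illustrative repository
  strategy:
    type: Docker
    dockerStrategy: {}
  output:
    to:
      kind: ImageStreamTag
      name: amp-system:latest   # outputs to the 'latest' tag the DeploymentConfigs track
```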
In some situations, like a low-resource environment or a local development environment, it might be desired not to have any kind of resource requirements. This is what the 'evaluation' version of the templates does.
Having a field that controls whether to have limits or not for all components:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
spec:
  disable-resource-limits: <boolean> # false by default
status:
```
Characteristics:
Allow each subsystem to control whether to have limits or not:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  version: <version-string>
  apicast:
    disable-resource-limits: <boolean> # false by default
  backend:
    disable-resource-limits: <boolean> # false by default
  memcached:
    disable-resource-limits: <boolean> # false by default
  router:
    disable-resource-limits: <boolean> # false by default
  system:
    disable-resource-limits: <boolean> # false by default
  zync:
    disable-resource-limits: <boolean> # false by default
status:
```
Characteristics:
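In either variant, what would effectively change is whether the operator renders a `resources` section into each DeploymentConfig container. A sketch of the two outcomes (the values are illustrative):

```yaml
# With resource limits enabled (illustrative values):
resources:
  limits:
    cpu: "1"
    memory: 800Mi
  requests:
    cpu: 500m
    memory: 600Mi

# With disable-resource-limits: true, the container would carry
# no resource requirements at all:
resources: {}
```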
It is desired to use S3 as system's shared storage.
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  system-shared-storage: # Only one of the two fields below can be written
    pvc:
      size:
    s3:
      secretReference:
status:
```
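The referenced secret would be pre-created by the user. A hypothetical shape for it (the secret name and keys are illustrative, not a defined contract):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: system-s3-credentials   # illustrative name, pointed at by secretReference
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
  AWS_BUCKET: <bucket-name>
  AWS_REGION: <region>
```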
An alternative is to nest the shared storage configuration inside the `system` subsystem section:

```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  apicast:
    ...
  system:
    shared-storage:
      pvc:
        size:
      s3:
        secretReference:
status:
```
It is desired to have highly-available versions of the critical databases, external to the OpenShift cluster, because OpenShift currently does not have productized versions of the databases.
Have a section of the CRD for the database locations:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  database-locations:
    system-redis:
      secretReference:
    system-mysql:
      secretReference:
    backend-redis:
      secretReference:
status:
```
This assumes the secrets are previously created by the user and NOT by the operator
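A hypothetical user-created secret for one of these external databases (the secret name and key are illustrative; the exact contract is not defined here):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: system-mysql-external   # illustrative name, pointed at by secretReference
stringData:
  DATABASE_URL: mysql2://user:password@mysql.example.com:3306/system   # illustrative URL
```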
Have subsystem sections where the database location information can be set on each subsystem:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  apicast:
    ...
  backend:
    redis:
      secretReference:
    ...
  system:
    mysql:
      secretReference:
    redis:
      secretReference:
    ...
  ...
status:
```
This assumes the secrets are previously created by the user and NOT by the operator
Have scenario sections where each scenario has a set of configurable options:
```yaml
apiVersion: amp.3scale.net/v1alpha1
kind: AMP
metadata:
  name: ApiManagementPlatform
spec:
  scenario:
    ha:
      database-locations: # another alternative is to have subsystems inside the 'ha' scenario
        system:
          redis:
        backend:
          ...
status:
```
Some scenarios might be incompatible between them, and it can be difficult for the user to know which ones are compatible or incompatible.
This assumes the secrets are previously created by the user and NOT by the operator
The same alternatives as in the previous scenario might appear, replacing `database-locations` with `replicas`.
After looking at a few examples, it seems that there are the following ways to organize the CRD:
There are the following tradeoffs depending on how they are organized:
TODO: In this text we have not analyzed the side effects of changing the values in each of the scenarios.
The installation part of the 3scale-operator is meant to deploy a functional AMP platform.
The operator should also be able to "maintain" the platform configuration, respecting the contents defined in the Operator, and will provide some configurability options to the users.
The initial idea I have is to have a single Controller that will manage the AMP platform. Doing this has the consequence that all changes to the AMP platform will go inside the same reconciliation loop.
In case the installation operator should also be able to deploy Apicast standalone, I would create another, different CRD.
CRDs
ThreeScale: Represents a 3scale AMP deployment
ApicastStandalone (in case we want the operator to manage this): Represents a 3scale Apicast standalone deployment
Each CRD will deploy all the elements that form it.
A "standard" ThreeScale AMP deployment will deploy what's currently defined in the AMP template
Requirements
Possible desired functionalities
Possible AMP CRD representations
Here are some rough ideas on what an AMP CRD might look like:
Specifying the scenario names as a key of the 'spec' of the CRD. Each scenario will have its own options. There will also be "general" options:
Another possible scenario is defining keys for each "subsystem" on the AMP CRD:
By looking at the previous ways of organizing the CRDs I see several levels of configurability that can exist (some of them might not exist depending on how we decide to organize the CRDs):
Current open questions
First steps