Closed arthurdm closed 4 years ago
This looks good - some initial thoughts:
- Should we expose `host`, `port` and `contextRoot` separately, as well as a combined `url`?
- Is `provides: metadata.name` needed, or can we infer that from the project name? (do we have name collisions, or scope to provide more than one endpoint that we need to cater for?)

thanks @seabaylea
I thought about this when driving home today. :) I think at a minimum we need a `path` field, for the context root. I have added that now. We can go through a few examples and see if we need to split it up.
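For illustration only, a `provides` block with the new `path` field might look something like this (the fields besides `path` are assumptions, not the final spec):

```yaml
service:
  provides:
    # context root of the exposed endpoint
    path: "/portfolio"
    # hypothetical companion fields, if we later decide to split things up
    port: 9080
    protocol: https
```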
For 2. and 3.: originally I had `provides: true` and just always used `metadata.name`. That would mean that you always know the name of the secret, since it's the same as the service name. But then you lose the flexibility of naming that secret arbitrarily. I am leaning towards going back to `provides: true`, especially if we need a way to always link to the secret.
For 4.: I think that's a complex scenario that I am not sure we want to support right now, as it would require the runtime code (app) itself to handle lazy resolution of a needed service. In the current design the pod initialization would eventually fail because it could not find the required k8s secret to bind.
For #4, we avoid any need to provide retry logic etc. in the app if we can delay the deployment of the microservice until its dependent app is available. This does, however, introduce a possible deadlock scenario should there, for whatever reason, be a cyclic dependency.
That's a good point @seabaylea - the operator could indeed hold things up until the secret is available. Something for us to consider. Perhaps if a deployment is waiting for the secret to be available, we can set its `reconcile` status to `waiting for dependency`.
updated the proposal to make the provider's secret always match `metadata.name` (which is already unique to the service), and added a section to talk about delayed activation.
added a section on an alternate binding method, via mounting a properties file instead of env vars.
updated the design to include namespaces - allowing consumers to specify services from other namespaces.
added section (at the end) about OpenShift 4.2's topology view
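As a rough sketch of the properties-file binding alternative, the consuming pod could mount a file like the following instead of receiving individual env vars (keys and values are illustrative only, not the final format):

```properties
# example contents of a mounted binding file, e.g. /opt/bindings/service1.properties
url=https://service1.lob1.svc.cluster.local:9080/portfolio
hostname=service1.lob1.svc.cluster.local
port=9080
context-root=/portfolio
```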
A couple of things:
- The service provides a DNS hostname record, not a URL; it can be any protocol - HTTP/HTTPS/TCP - or even a proprietary URL scheme for DB connections, like `mongodb://`.
- You can't use/mount secrets/config maps across namespaces, AFAIK.
- Most modern authentication methods like OAuth have per-client credentials, like `client_id`, `client_secret`, etc.
- Also, the YAML snippets are not syntactically correct, as you need another object key to hold indented values.
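To make the per-client credentials point concrete, an OAuth-style binding secret might look roughly like this (all names and values here are hypothetical):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: service1      # hypothetical: matches the providing service's metadata.name
type: Opaque
stringData:
  hostname: service1.lob1.svc.cluster.local
  port: "9080"
  client_id: my-consumer-app    # per-client OAuth credentials,
  client_secret: s3cr3t         # in addition to the connection info
```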
thanks for the feedback @arturdzm
updated design with latest feedback
I think that the spec of the service provided is missing a level of information or aggregation, since an exposed service could be, for example, an endpoint or a database:
Example for an endpoint:

```yaml
service:
  provides:
    endpoint:
      path: "/portfolio"
      type: ClusterIP
      port: 9080
      protocol: https
```
Example for a DB:

```yaml
service:
  provides:
    database:
      DB_NAME:
      DB_USER:
      DB_PWD:
```
With such info, the appsody (or hal - https://github.com/halkyonio/hal) client tool or a UI could filter the information when a user selects a service to bind, ...
WDYT ? @arthurdm
thanks for the feedback @cmoulliard
I think the category of the service is a great idea, but I am thinking that it fits much better on the `service.consumes` side of the `AppsodyApplication` CR, rather than the `service.provides` side - i.e. we wouldn't model a DB in the `AppsodyApplication` CR, as that is meant to deploy runtime apps.
So, focusing on the `service.consumes` object, that's an area where a type / category could be placed:
```yaml
service:
  - consumes: "service1"
    type: endpoint
    namespace: "lob1"
    mountPath: "/opt/endpoint"
  - consumes: "service2"
    type: database
    namespace: "couchdb"
    mountPath: "/opt/db"
```
In this scenario the Appsody Operator (& other tools) can understand that they should look for a k8s / Knative service that matches `service1` in namespace `lob1` - creating the mounted bindings for the consuming service from that.
In the case of a DB, it could either call a service broker to provision that (advanced case), or simply look for a k8s secret that matches that name (simple case) in that namespace - creating the mounted bindings for the consuming service from that.
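For the simple case, the Secret the operator would look up might be shaped like this (names taken from the consumes snippet above; all values are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: service2        # matches the "consumes" name
  namespace: couchdb    # matches the "namespace" field
type: Opaque
stringData:
  DB_NAME: portfolio
  DB_USER: admin
  DB_PWD: changeme
```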
As a side note: can you give an example of the integration you were thinking of for prometheus and jaeger? We have recently added support for connecting to a prometheus instance in our app CR, to have it scrape metrics from the runtime pod. Both prometheus and jaeger have operators, so that's what we would use for providing these services.
> can you give an example of the integration you were thinking for prometheus and jaeger? We have recently added support to connect to a prometheus instance
We are on the same page, as we will also use the `ServiceMonitor` CRD resource to ask the prometheus operator to monitor a microservice. BTW, such a YAML resource could be created OOTB using our Dekorate tool -> see: http://dekorate.io/dekorate/#prometheus-annotations
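For reference, a minimal `ServiceMonitor` asking the Prometheus Operator to scrape a microservice looks roughly like this (the label selector and port name depend on the actual deployment):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-microservice
spec:
  selector:
    matchLabels:
      app: my-microservice   # must match the Service's labels
  endpoints:
    - port: web              # named port on the Service
      path: /metrics
```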
For jaeger, here is the info about what we did: http://dekorate.io/dekorate/#jaeger-annotations
Remark: our current operator doesn't yet support handling a capability of type prometheus or jaeger, but this is on our todo list / roadmap.
> In the case of a DB, it could either call a service broker to provision that (advanced case), or simply look for a k8s secret that matches that name (simple case) in that namespace - creating the mounted bindings for the consuming service from that.
That will be the trickiest part, as the (Template/Ansible) ServiceBrokers are deprecated on OCP 4 and should be replaced by operators.
Adopting operators is certainly a great idea, but the problem is that currently no spec really exists for exposing the needed parameters of an operator, in order to delegate the process of creating a DB capability to a Capability Operator.
The OpenShift DevTools team is proposing to define such information using the ClusterServiceVersion StatusDescriptors -> see: https://github.com/redhat-developer/service-binding-operator#introduction - but of course that will only work if all the Capability OLM writers follow the spec ;-)
Attaching a small set of slides that illustrates the proposed design. I suggest that in this first iteration of the design we focus on `category: openapi`.
As this issue is already a long discussion thread, we should perhaps continue the discussion in a doc, to capture comments/questions and have the ability to review/discuss it, add diagrams, etc., and finally create a Tech Spec doc. WDYT? @arthurdm
that's a good idea @cmoulliard - do you have a suggestion for a collaboration tool? Google docs?
A Google Doc is perhaps a good idea, if all the participants have a Gmail address.
I moved the design into a google doc and updated the summary to point to it. Comments are welcome either in here or right at the doc. =)
Great job on the implementation @navidsh! I think we can close this issue now?
Yea, it's done. Closing
There's an opportunity for the Appsody Operator to help with basic service binding.
The design has been moved to the following document, to allow for collaborative comments, etc.
https://docs.google.com/document/d/1riOX0iTnBBJpTKAHcQShYVMlgkaTNKb4m8fY7W1GqMA/edit?usp=sharing