OSBAPI is a good idea that never really got a good chance to show off. I think it might be worth revisiting how to revive it.
I agree with @agracey that OSBAPI is a good idea that has not yet been fully realized. People are often confused by service mesh, and OSBAPI is a much simpler model.
If Carrier supports OSBAPI, it can use Minibroker, Metabroker, or any CF-compatible broker.
I'd like to build a terraform broker as well. I have some ideas on how to do it simply once we get there.
@agracey take note of https://github.com/cloudfoundry-incubator/cloud-service-broker which uses "Brokerpaks" to deploy services via Terraform.
A comparison with Operators: https://thenewstack.io/kubernetes-operators-and-the-open-service-broker-api-a-perfect-marriage/
SUSE has a fork: Sources
Based on OSBAPI.
Default installation requires Google's Service Catalog (shorthand: SVC).
Attention: Refer to the above for installation of the SVC charts and cli. The instructions in the minibroker README are out of date, i.e. broken, i.e. they refer to the wrong location for the SVC helm repository.
An alternative install without SVC is done using the override --set 'deployServiceCatalog=false'. Use this for CF and similar, i.e. when the PaaS talks directly to the brokers to manage them and their services.
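For illustration, installing without SVC would then look roughly like this (assuming the same suse/minibroker chart used further below):
helm install minibroker suse/minibroker --namespace minibroker --create-namespace \
  --set 'deployServiceCatalog=false'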
Can work with and without Google's Service Catalog.
Whether provisioning a service actually succeeds or fails depends very much on the service chart behind the chosen plan.
The thing here is: if something fails, and carrier is deeply involved, then carrier would be the first to be blamed. Possibly even if carrier is able to report which broker it talked to and the failure the broker reported.
At least minibroker does not report anything beyond "it failed".
Other OSBAPI based brokers may be more chatty.
Generic tooling exists for talking to all brokers exposing an OSBAPI interface.
See eden, and SVC.
SVC requires brokers to be written to it (brokers have to announce themselves with a new kind of resource to become known).
SVC is a kind of super-broker. Note however that SVC itself does not expose an OSBAPI (afaict at the moment). It expects requests to be made through the creation of resources understood by its CRD/operator, i.e. ServiceInstances and ServiceBindings. I.e. it is cloud-native / kubernetes-native.
The svcat cli simply does this, and reports on changes to these resources.
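For illustration, the svcat provision call used further below boils down to a ServiceInstance resource roughly like this (a sketch against the v1beta1 Service Catalog API; values taken from the provision example below):
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: amysqldatabase
  namespace: default
spec:
  clusterServiceClassExternalName: mysql
  clusterServicePlanExternalName: 5-7-14
  parameters:
    mysqlDatabase: mydb
    mysqlUser: admin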
helm repo add svc-cat https://kubernetes-sigs.github.io/service-catalog
helm install catalog svc-cat/catalog --namespace catalog --create-namespace
helm install minibroker --namespace minibroker suse/minibroker --create-namespace
svcat get classes
svcat describe class mysql
svcat provision amysqldatabase \
--class mysql \
--plan 5-7-14 -p mysqlDatabase=mydb -p mysqlUser=admin
The last command should provision a mysql database. This fails.
svcat describe instance amysqldatabase
Name: amysqldatabase
Namespace: default
Status: OrphanMitigation - Provision call failed: service instance "96f100a4-63c7-4a3b-8afc-eb5b7c597382" failed to provision @ 2021-02-16 09:51:49 +0000 UTC
Class: mysql
Plan: mysql-5-7-14
Ignore the SVC. Use a kubectl port forward to expose the broker to the host.
helm install minibroker --namespace minibroker suse/minibroker --create-namespace \
--set "defaultNamespace=minibroker"
kubectl port-forward -n minibroker pod/minibroker-minibroker-58f6bb95bc-qrrrw 9999:8080
Note: to use minibroker directly it has to be told which namespace to deploy services into. When it is used through SVC this is not required.
Use Stark & Wayne's eden cli to access and use the broker.
kubectl port-forward -n minibroker pod/minibroker-minibroker-58f6bb95bc-qrrrw 9999:8080
In a different terminal
go get -u github.com/starkandwayne/eden
export SB_BROKER_URL=http://localhost:9999
eden cat
eden cat|grep -i mysql
cat data.json
{"mysqlDatabase":"mydb","mysqlUser":"admin"}
eden p -s mysql -p 5-7-14 -P=@data.json
provision: mysql/5-7-14 - name: mysql-5-7-14-4ab14a5f-f2d3-4d19-a6cd-912fbe010a8d
provision: in-progress
provision: failed - service instance "4ab14a5f-f2d3-4d19-a6cd-912fbe010a8d" failed to provision
provision: done
Same essential error as with SVC.
Switched to mysql plan 5-7-28. Otherwise the same commands.
Service is provisioned successfully both times.
svcat describe instance foo1
Name: foo1
Namespace: default
Status: Ready - The instance was provisioned successfully @ 2021-02-16 10:21:27 +0000 UTC
Class: mysql
Plan: mysql-5-7-28
eden p -s mysql -p 5-7-28 -i foo2 -P=@data.json
provision: mysql/5-7-28 - name: foo2
provision: in-progress
provision: in progress - provisioning service instance "fd10a52b-f9ac-498f-ae1b-1b3697f14fc5"
[...]
provision: succeeded - service instance "fd10a52b-f9ac-498f-ae1b-1b3697f14fc5" provisioned
provision: done
Able to bind, and extract bind information (i.e. access location and credentials).
Note, each service had to be bound through the tool it was created with, i.e. svcat vs eden. While you can bind eden's foo2 with svcat, you cannot get the bind information out of it.
svcat bind foo1
svcat describe binding foo1 --show-secrets
eden bind -i foo2
eden credentials -i foo2 -b <id-returned-by-bind-above>
It might be possible to not extend carrier with service management at all. IOW let the user use svcat, eden, or similar.
The only point of contact would be a means of transferring service credentials, i.e. service binding data, into an application managed by carrier.
This has the advantage of keeping carrier simple and lean.
This has the disadvantage of pushing the majority of the responsibility for services onto the user.
On the other hand, the user has the most flexibility in setting up the services they need/want. Carrier is also not bound to any kind of framework.
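One way that single point of contact could look (a sketch; the deployment name and namespace are hypothetical): expose all keys of the binding secret to the app as environment variables.
# hypothetical deployment/namespace; uses the binding secret created via svcat/eden
kubectl set env deployment/myapp -n myapp-namespace --from=secret/foo1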
A second possibility: extend carrier with service management commands, but have them delegate to svcat, eden, or similar.
This has the advantage of keeping carrier the main point of contact for users.
As a disadvantage, carrier would be bound to the chosen tool, whether it be svcat, eden, or another.
Somewhere in between the two above, carrier is extended with basic management for OSB API brokers and then talks directly to the brokers made known to it, without an interposed external cli.
While carrier is bound to OSBAPI, there are many brokers supporting that interface.
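To illustrate what talking directly to a broker involves: per the OSBAPI spec the catalog is a plain HTTP call, e.g. against the port-forwarded minibroker from above (add credentials only if the broker requires them):
curl -s -H 'X-Broker-API-Version: 2.14' http://localhost:9999/v2/catalog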
How exactly do svcat and minibroker talk to each other?
How does svcat even find the minibroker?
Some kind of operator looks to be involved.
And minibroker likely posts a resource for that operator, describing itself, and thus making it known/available.
That makes it both similar to and different from eden.
Eden is directly and explicitly pointed at a specific broker, by its user, via cli arguments, and/or environment variables.
SVC on the other hand expects brokers to announce themselves through such a resource. IOW a broker has to be written to work with SVC.
Check from dumping the cluster which has SVC and minibroker installed:
grep 'servicecatalog.k8s.io\|NAME' r-cat/list-all | editor
NAME APIVERSION NAMESPACED KIND
clusterservicebrokers servicecatalog.k8s.io/v1beta1 false ClusterServiceBroker
clusterserviceclasses servicecatalog.k8s.io/v1beta1 false ClusterServiceClass
clusterserviceplans servicecatalog.k8s.io/v1beta1 false ClusterServicePlan
servicebindings servicecatalog.k8s.io/v1beta1 true ServiceBinding
servicebrokers servicecatalog.k8s.io/v1beta1 true ServiceBroker
serviceclasses servicecatalog.k8s.io/v1beta1 true ServiceClass
serviceinstances servicecatalog.k8s.io/v1beta1 true ServiceInstance
serviceplans servicecatalog.k8s.io/v1beta1 true ServicePlan
Minibroker announces itself as a ClusterServiceBroker.
It may also announce all the classes and plans it has. Or the SVC operator creates them for the minibroker after querying it.
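For illustration, the announcement boils down to a ClusterServiceBroker resource roughly like this (the URL is an assumption based on the chart's in-cluster service name):
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ClusterServiceBroker
metadata:
  name: minibroker
spec:
  url: http://minibroker-minibroker.minibroker.svc.cluster.local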
The resources for provisioned service instances, and for service bindings are certainly managed by the SVC operator. The actual credentials/information for bindings sit in associated secrets.
From my dump, for the foo1 service:
find r-cat/ | grep foo1 | sort
r-cat/secrets.d/default---foo1
r-cat/servicebindings.servicecatalog.k8s.io.d/default---foo1
r-cat/serviceinstances.servicecatalog.k8s.io.d/default---foo1
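To read the actual binding credentials out of such a secret one can go straight to kubectl; a minimal sketch, assuming the binding secret is simply named foo1 in the default namespace as the dump suggests:
kubectl get secret foo1 -n default -o yaml   # credential values are base64 encoded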
Backers/Contributors:
Carrier has to track service brokers, services, plans, and instances, i.e.:
The carrier team has to code and maintain the relevant commands, i.e. (un)register brokers, list services, plans, instances, create/destroy/(un)bind service instances, list instances/bindings per app, list apps/bindings per instance, etc.
Access to a (likely) large set of brokers exposing their functionality through this API.
Users (PaaS managers, app deployers) manage services through carrier as the central point.
Shorthand: SVC
Meta Broker and Manager
Frontend through k8s resources. IOW a set of custom CRDs plus associated operator managing them.
Backend (to brokers) is OSB API.
Participating brokers have to be OSB API compliant, and either announce themselves to SVC or be registered with it manually.
It is a Kubernetes Special Interest Group (SIG) project and an official incubator project.
Backers/Contributors
The documentation states that it is in beta.
The SVC keeps track of brokers, services, plans, instances.
Compared to using OSB API directly, carrier technically has to track only service bindings, and requires only commands to list instances, (un)bind instances, list instances/bindings per app, list apps/bindings per instance, etc.
Management of services (classes), plans, and instances is outside of carrier, through SVC tools (f.ex svcat).
Compared to OSB API, a lesser amount of code to develop and maintain by the carrier team.
Same set of brokers available as for OSB API, although those not written towards use with SVC, i.e. those not announcing themselves to SVC require manual work (posting of the necessary (C)SB resource).
Users (PaaS managers, app deployers) manage services mainly through SVC tooling, and through carrier for the binding to applications. IOW two tools, at least.
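A sketch of what that two-tool workflow could look like (carrier bind-service is a hypothetical command name, shown only for illustration):
svcat provision mydb --class mysql --plan 5-7-28   # service side, via SVC tooling
svcat bind mydb
carrier bind-service myapp mydb                     # hypothetical: carrier injects the binding secret into the app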
It is possible to integrate deeper by extending carrier with the code and commands to list services, plans, instances, create/destroy instances.
At that point the SVC tools become superfluous for most work and carrier is again the central point for everything services, except the brokers themselves. These still have to be announced to SVC instead of carrier, be it automatic on their deployment, or manually.
Another deeper integration here would be to make SVC a component of carrier, installed and removed as part of it.
Although I see it more in the same category as traefik, i.e. something carrier will deploy only if the cluster does not already contain it.
This is just a broker. Not a framework and/or API specification like OSB API or Service Catalog.
I base this classification on
This [...] service broker [...] adheres to the Open Service Broker API v2.13.
from the website linked above, and reading further down to the set of services the project provides access to. That all of these are specific to Google and GCP does not really matter here. It only means that this broker is especially convenient for users of that platform.
For carrier the important point is that users will be able to use that broker regardless of if we choose to support OSB API directly or indirectly (through Service Catalog).
Pinged @f0rmiga for metabroker information (SUSE/metabroker/issues/30) as I found the existing docs sparse.
OSBAPI is a good choice to go as it's well defined and has some implementations already. Metabroker has its own API via CRDs, but as I already did a demo for some folks, it can implement a shim that does the OSBAPI -> Metabroker CRDs conversion. I'll prepare some docs with a nice diagram to make it simple to understand (it'll resolve https://github.com/SUSE/metabroker/issues/30).
Carrier could speak Metabroker CRDs as well directly, which would reduce the number of layers of abstraction.
Ah, so metabroker is similar to Service Catalog (SVC) in that its API is CRDs + resources, and meta is an operator. It differs in how it knows what services it can deploy. That is also specified through some kube resources AFAICT, and they specify services (and the plans of each). SVC otoh looks for resources describing brokers, and then does regular OSBAPI calls to these for all the actions.
I am disinclined to use metabroker CRD as the API in carrier ...
Might I suggest an alternative way to approach this?
Rancher tends to be Kubernetes first and likes upstream projects. What about a catalog of services that works well with bare-bones Kubernetes as well as with Carrier, all in a Kubernetes-native way?
What came to mind was https://crossplane.io. This is a CNCF sandbox project that can work with varying providers (e.g., AWS, Azure, on-prem/hosted, etc).
@mattfarina does this provide an interface for provisioning persistent data services for apps like OSBAPI does? Does it have wider adoption than OSBAPI?
@troytop in their words: https://crossplane.io/docs/v0.2/related-projects.html (not that I fully understand what it means yet).
The feeling I get from https://crossplane.io/docs/v0.2/related-projects.html is that they are looking to be a kind of cross-platform IaaS ...
Maybe similar to terraform? Except maybe limited to the k8s ecosystem? ... Hm, their words about federation-v2 then contradict this idea somewhat by mentioning non-container workloads and managed services. So maybe more like terraform, in a kube way? (I.e. control via kube CRDs, i.e. custom, kube centric, whereas terraform config is wholly custom, and external to / outside of the things you are working with.)
Oh, and they mention it ... I should really read it completely before starting to write down the thoughts popping up as I read ... Ok, they see terraform as being in the same space, a different way of trying to solve the same problems, and themselves more as a superset with regard to provided functionality.
An important thing for them looks to be workload portability; we should follow up on that to see what they mean by it.
Overall I am still coming away with crossplane being an IaaS at the core, for multi-cloud/cluster, and reaching up into the PaaS level (entry page - unify app and infra config and deployment).
Where it comes to services they seem to consider OSBAPI / Service Catalog as a (very) limited subset of what they want to be ... I.e. they wish to cover apps also.
A broker, which uses OSBAPI.
Blog. Seems to be capable of being used with Service Catalog as well.
Installation is done through a Helm chart.
The chart is configured with information about the Azure subscription to be used for requests, i.e
As an OSBAPI compliant broker we can use it either directly, or through service catalog.
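A rough sketch of the direct install; the chart location and value names should be double-checked against the OSBA README, they are written from memory here:
helm repo add azure https://kubernetescharts.blob.core.windows.net/azure
helm install osba azure/open-service-broker-azure --namespace osba --create-namespace \
  --set azure.subscriptionId=<subscription-id> \
  --set azure.tenantId=<tenant-id> \
  --set azure.clientId=<client-id> \
  --set azure.clientSecret=<client-secret>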
@troytop It does provide an interface by using custom resources. It's CRD and controller based. There is no need to rely on anything from the vendors, either. For example, controllers run in the cluster to work with AWS RDS APIs directly.
When it comes to adoption, I'm not entirely sure about OSBAPI adoption compared to something like crossplane. How widely used is OSBAPI? How much is it used outside of Cloud Foundry? I know that OpenShift has moved to more of a controller/CRD based setup in recent years.
Not sure if what is shown on the front of https://www.openservicebrokerapi.org/compliant-service-brokers is all they know. The various public cloud vendors (AWS, GCP, Azure) all seem to offer an OSBAPI based broker for access to their services.
The Service Catalog looks to be all about having a controller/CRD interface to OSBAPI brokers.
The thing with the controller/CRD interfaces seems to be that there are quite a lot of them at the moment, all different. Service Catalog might be a head above the others, being handled by a kube SIG group. OSB API looks to be relatively standardized.
That was the original goal of Metabroker, to unify all those non-standardized CRDs under the same framework, and under the same API - OSBAPI.
It would also be very helpful if @satadruroy provided some input since I know he has some context with Crossplane and he helped me shape Metabroker.
@mattfarina we want to use OSBAPI so that we create ONE service interface in Carrier, rather than one for each provider. The broker can provide interfaces to the various CRDs.
I'd suggest also thinking through how to expose services to the applications. VCAP_SERVICES is clunky and more difficult to parse than it should be. I wouldn't be surprised if that was a different spike story.
This might be made more complicated by an assumption that Carrier is used in the dev environment but not in prod. Whatever environment/configmaps/etc we choose needs to be reproducible without the platform's help.
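For reference, the kind of nesting that makes VCAP_SERVICES awkward to parse looks roughly like this (shape only, abbreviated):
VCAP_SERVICES={"mysql":[{"name":"foo1","label":"mysql","plan":"5-7-28","credentials":{"uri":"mysql://admin:secret@host:3306/mydb"}}]}
A flatter alternative, e.g. one secret key per credential exposed as env vars or files, would be easier for kubernetes-native apps to consume.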
@troytop crossplane makes it one consistent API, I believe. From the docs... to request a mysql database...
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: demo
  namespace: default
spec:
  classReference:
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
crossplane handles dealing with the provider. To the requestor the API is the same.
@mattfarina beat me to it. But I also see crossplane has evolved their thinking on the notion of "one interface". In versions leading up to 0.12 they used to support a "provider-agnostic" abstraction, e.g. a portable MySQLInstance, as pointed out by @mattfarina:
https://crossplane.io/docs/v0.3/services-guide.html#overview
But you won't see that anymore in APIs v0.13 and above, e.g. compare:
https://doc.crds.dev/github.com/crossplane/crossplane@v0.13.0 (no MySQLInstance)
vs
https://doc.crds.dev/github.com/crossplane/crossplane@v0.12.0
From the deprecation note in v0.12:
#1600 deprecated claims in Crossplane. Claims are being phased out of Crossplane in favor of composition, a more flexible abstraction layer that allows users to define their own infrastructure and publish it to their organization. A detailed overview of composition can be found in the Crossplane documentation.
IMHO, the value of 'one interface' over such diverse backend implementations is questionable and leads to leaky abstractions; OSBAPI perhaps suffers from the same flaws.
FWIW, I find the crossplane composition construct a bit too complex. I would probably prefer to use just the backend specific CRDs from the aws/azure/gcp operators. But interestingly enough, even the provider operator authors think crossplane adds value:
Anyway, I'd leave the decision up to the experts @mattfarina and @jimmykarily.
Google is also working on this: https://cloud.google.com/config-connector/docs/overview (maybe replacing their service broker? That's what someone implies here: https://www.reddit.com/r/kubernetes/comments/j8671q/whats_the_deal_with_the_kubernetes_service/)
The 301 of this URL says something: https://cloud.google.com/kubernetes-engine/docs/concepts/google-cloud-platform-service-broker
Update: It's also stated at the top of this README: https://github.com/googlearchive/k8s-service-catalog
Here is what I think after reading various links and tracking progress or the various projects. App developers and PaaS implementers (like us) mostly like a unified interface (like the one provided by service catalog). Cloud providers don't seem to like it much. Maybe because by unifying the interface, their offerings seem less different. All those little options they are adding to their provisioning methods, are features of their products, they probably consider those as added value. So from a philosophical perspective, that's my explanation on why Google has dropped support for Service Catalog and why Service Catalog is not getting the adoption we all expected.
No matter what the actual reason is, if cloud providers are not adopting osbapi et al, then we don't benefit much from using it. On the other hand, if someone (e.g. crossplane) has taken on the challenge of abstracting this scattered world of Services for us, we could delegate to them and reap the fruits of their labor.
To summarize my thoughts, I prefer the concept of the Service Catalog but if it's losing momentum, we may offer a better experience to our users if we delegate to a tool that embraced the world of operators successfully.
Regarding Crossplane, it seems to be doing a lot more than abstracting service provisioning and management. For example, it implemented an "application abstraction" (KubernetesApplication) and ways to deploy full apps with their dependencies: https://github.com/crossplane/app-wordpress . That makes it more of a competitor than a component.
So what do we do? We don't need to write anything in stone obviously but if we verify that there is enough support for the service catalog by at least 2 big cloud providers I would like us to use that first. Let's not forget how much easier service catalog makes local development too by using metabroker/minibroker solutions. I don't see such an option in crossplane (if you found such a thing please correct me).
Update: It's also stated at the top of this README: https://github.com/googlearchive/k8s-service-catalog
This thing seems to be a cli combining management of both Service Catalog and the GCP service broker.
The service catalog itself is at https://github.com/kubernetes-sigs/service-catalog
Agreed on the differing perspectives of the cloud providers, and us/users (app devs). The CPs are trying to differentiate themselves, and as part of that pursue vendor lock-in. Users generally do not want to be locked in.
The service broker itself has no deprecation warnings: https://github.com/GoogleCloudPlatform/gcp-service-broker not sure what the k8s-service-catalog thing is but it seems to refer to the deprecation of the service broker (?!). Anyway, maybe they are simply trying to push people towards using their "Config connector" instead of the broker while keeping the broker in working state. I don't know.
Azure:
This project is no longer being maintained by Microsoft. Please refer to https://github.com/Azure/azure-service-operator for an alternate implementation.
from here: https://github.com/Azure/open-service-broker-azure
Makes you regret you ever blamed standards: https://xkcd.com/927/ :D
Regarding @satadruroy 's comment, yes, the yamls you have to create for crossplane to create services are provider specific (see here: https://crossplane.io/docs/v1.0/getting-started/provision-infrastructure.html). Which brings up the next question: how much harder is it then to support the specific cloud provider operators' CRDs directly?
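For a feel of what those provider-specific yamls look like, a sketch along the lines of the provider-gcp CloudSQLInstance of that era (exact fields may differ by provider and version):
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
  name: my-db
spec:
  forProvider:
    databaseVersion: MYSQL_5_7
    region: us-central1
    settings:
      tier: db-n1-standard-1
      dataDiskSizeGb: 20
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: my-db-conn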
Suggestion:
Let's decide what we want it to look like from the Carrier user's perspective. E.g.:
$ carrier enable-provider google
$ carrier list-services --provider=google
service | version
cloudsqlpostgresql | 1.2.3
...
$ carrier enable-provider local # (I'm looking at you minibroker)
$ carrier list-services --provider=local
service | version
MariaDB | 5.6.7
Each "provider" could be implemented as a "plugin" (semi-independent binary) and should conform to the same conventions (or provide the same API if you prefer). But each could be implemented differently. For example, we install the service catalog and minibroker for the "local" provider but we install the Google Config Connector for "google".
May sound like a lot of work but we don't need to do it all at once. Even if it proves to be the wrong thing to do, we can still maintain the same interface and exchange each backend for crossplane, service catalog or other.
This way we can simply start with "local" provider and work on the UX based on that. Public providers follow.
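A hypothetical sketch of the convention such a provider plugin binary could follow (all names are invented for illustration):
carrier-provider-google list-services                                        # print available services/plans, e.g. as JSON
carrier-provider-google create-instance mydb --service cloudsqlpostgresql --plan small
carrier-provider-google bind mydb --app myapp                                # print binding credentials for carrier to inject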
Hm ... My thoughts (for the OSBAPI choice) had been running more towards the CF cli command set, i.e.
In the above approach osbapi could be a plugin/provider instead ... The plugin would then deal with (de)registering brokers ... Or, if a plugin can be enabled multiple times, take arguments at enablement, and be given a name, then:
$ carrier enable-provider osbapi --name foo <foo-uri> <foo-credentials>?
$ carrier list-services --provider=foo
Should carrier come with a predefined set of plugins/providers? Shall we allow customers/users/other developers to register a provider of their own?
This goes a bit into: where are the plugin provider binaries stored? How are they found? (Stash anything predefined in the carrier binary?)
Note also
$ carrier list-providers
This would actually be the list of providers you can use with enable-provider, i.e. those for which carrier has found plugins (in some way).
Would also need something to list the enabled, i.e. active providers.
That said ... what is enable-provider actually doing?
Why do we have to enable a provider before using it to create services?
Either we have the plugin or not. If we have it, can we not simply use the plugin based on the --provider value of a carrier command?
Those are very good points, @andreas-kupries. No matter what solution we choose for the backend, there is going to be a set of objects that must be created on the cluster to support a specific cloud provider. E.g. in crossplane you have to install the controllers for the specific cloud; in service catalog you have to create a ClusterServiceBroker object, etc. That's what I imagined enable-provider doing. You don't want things installed on your cluster if you don't use them.
But the above was just a pseudo-API, I didn't think much about it. The actual API will be decided at implementation time. Let's do a spike on crossplane and get a feel for it. While doing so we can decide what commands we need to implement.
First take on Services on public cloud providers: #157
Closing this one.
Carrier should have some kind of support for services (databases, message buses, etc.). Let's collect ideas on this issue before we come up with concrete tasks.
Links
Goal of the spike