kubernetes-retired / service-catalog

Consume services in Kubernetes using the Open Service Broker API
https://svc-cat.io
Apache License 2.0

Create a layer of indirection for service and plan names #805

Closed arschles closed 2 years ago

arschles commented 7 years ago

#802 explains that service and plan names can change across broker catalog refreshes. Since we need a stable reference, that issue suggests using service and plan GUIDs to reference services and plans in the catalog. On the SIG call on 5/8/2017, many of us agreed that there is no other option.

I agree that using GUIDs to reference services and plans on the broker is the best and most reliable. I do, however, believe that doing so is:

  1. A poor user experience for the person creating new Instances
  2. Not portable across clusters, each of which may provide the same logical service with the same API (e.g. MySQL databases or S3-compatible object storage) but use brokers that specify different service & plan GUIDs

To solve these two problems, I propose adding a layer of indirection (i.e. the solution to all problems in CS 😄) to map service-catalog service and plan names to the GUIDs.

This layer of indirection could be represented as two new resources: ServiceName and PlanName. These resources would be namespace-less (just like ServiceClasses and Brokers are right now), and look something like the following:

ServiceName

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceName
metadata:
    name: cool-service
spec:
    serviceGUID: fab569db-2d06-4522-b4f6-4ff06f179e5e

PlanName

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: PlanName
metadata:
    name: cool-plan
spec:
    serviceName: cool-service # which service this is a plan of
    planGUID: 9220f523-7635-4361-86e9-f4af6a606d4f
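
For illustration, an Instance consuming this indirection might look something like the following. The serviceName and planName fields on Instance are only illustrative, not existing v1alpha1 fields; the idea is that the controller resolves them through the ServiceName and PlanName resources above to the broker GUIDs.

Instance (illustrative only)

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
    name: my-cool-instance
    namespace: default
spec:
    serviceName: cool-service # resolved to serviceGUID via the ServiceName resource above
    planName: cool-plan       # resolved to planGUID via the PlanName resource above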

Here are some considerations for these resources:

pmorie commented 7 years ago

A poor user experience for the person creating new Instances

Absolutely no disagreement there. My gut says that I would rather solve that problem with porcelain commands (kubectl service-catalog provision <service display name> <plan display name>) than a new type of naming indirection. See #806.

Not portable across clusters, each of which may provide the same logical service with the same API (i.e. MySQL databases or S3-compatible object storage), but uses brokers that specify different service & plan GUIDs

I do not dispute that this is a valid problem, but I hesitate to introduce a new type of naming indirection to deal with it. I think of this as being similar to StorageClass and label selectors on PVCs. I know that this is something that you and Gabe have been talking about for a while. Is that your primary interest? If so, it might pay to reframe the discussion around that. I have called this problem 'trait-based provisioning' in the past, and people seem to have thought that was a decent name for it.
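
To make the analogy concrete, this is the existing PVC pattern: the claim names an abstract StorageClass and matches volume traits with a label selector instead of pointing at a specific volume by ID. The class and label values below are made up for illustration:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
    name: my-claim
spec:
    accessModes:
        - ReadWriteOnce
    resources:
        requests:
            storage: 1Gi
    storageClassName: fast # abstract class name rather than a specific volume
    selector:
        matchLabels:
            zone: us-east-1a # trait-style matching via labels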

pmorie commented 7 years ago

A couple more thoughts on the 'trait-based provisioning' thing:

  1. This seems to be the actual purpose of 'tags' in the OSB API; see https://github.com/kubernetes-incubator/service-catalog/issues/497 and https://github.com/openservicebrokerapi/servicebroker/blob/master/spec.md#service-objects
  2. Since tags are not really a thing in k8s, I have discussed with @jwforres and a couple of others at RH the idea of a convention for brokers to communicate, via service metadata, the k8s labels their services should have -- we could then use label selectors for this concern (see the sketch after this list)
  3. I do not actually know how CF uses (1), just for the record
  4. I realized the other day that I have no idea how plans would work with a feature like this, since plans do not themselves have 'tags' in the OSB spec. Have you had any thoughts in that area?
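
As a sketch of (2), assuming a convention in which broker-provided tags or metadata are surfaced as labels on the ServiceClass, an Instance could then pick a class by trait with an ordinary label selector. The label convention and the serviceClassSelector field are both hypothetical:

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceClass
metadata:
    name: cool-service
    labels:
        servicecatalog.k8s.io/tag-mysql: "true" # hypothetical label derived from a broker tag
---
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
    name: my-db
    namespace: default
spec:
    serviceClassSelector: # hypothetical field: select any class carrying the desired trait
        matchLabels:
            servicecatalog.k8s.io/tag-mysql: "true"
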
arschles commented 7 years ago

I know that this is something that you and Gabe have been talking about for a while. Is that your primary interest?

Good eye 😄 - I did have that problem in mind. I suggested this particular solution because I believe it's more directly suited to solving the GUID problem, while still leaving open the possibility of later implementing the trait-based system that we've spoken about before.

If "trait based provisioning" via a label-selector and StorageClass-like solution is what the group prefers to solve this specific problem, then I am ok with that as well.

I do, however, want to remove GUIDs from most standard workflows as soon as possible.

duglin commented 7 years ago

You said:

If the service or plan GUID changed, then the corresponding GUID on the ServiceName or PlanName resource should be updated accordingly, and a new condition should be added to that ServiceName or PlanName to indicate that the update happened

Can you elaborate on this? GUIDs never change. If you mean that a service was deleted and a new service was created that "looks the same", can you elaborate on how we determine that they "look the same"?
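
For context, the 'condition' in the quoted text presumably refers to the standard Kubernetes status-condition pattern; on a ServiceName it might look like the following, with a hypothetical condition type and a hypothetical replacement GUID:

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceName
metadata:
    name: cool-service
spec:
    serviceGUID: 6b1f9c2d-0000-4000-8000-000000000000 # hypothetical new GUID after a catalog refresh
status:
    conditions:
        - type: GUIDChanged # hypothetical condition type
          status: "True"
          reason: BrokerCatalogRefresh
          message: serviceGUID was updated during a broker catalog refresh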

MHBauer commented 7 years ago

Is this something that can be done client side with kubectl extensions?

pmorie commented 7 years ago

@MHBauer check out: https://github.com/kubernetes-incubator/service-catalog/issues/837#issuecomment-300925516

nilebox commented 7 years ago

@arschles can we store those mappings inside ServiceClass (or Broker, not sure which one is more appropriate) instead of having separate resources?
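
For comparison, folding the mapping into ServiceClass might look roughly like the following, keeping a stable user-facing name alongside the broker GUID (the field names and their placement are hypothetical for v1alpha1):

apiVersion: servicecatalog.k8s.io/v1alpha1
kind: ServiceClass
metadata:
    name: cool-service
spec:
    externalName: CoolService # hypothetical stable, user-facing name
    serviceGUID: fab569db-2d06-4522-b4f6-4ff06f179e5e # broker-assigned GUID from the catalog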

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

mszostok commented 4 years ago

/remove-lifecycle rotten
/lifecycle frozen

mrbobbytables commented 2 years ago

This project is being archived, closing open issues and PRs. Please see this PR for more information: https://github.com/kubernetes/community/pull/6632