Closed: bparees closed this 7 years ago
@pmorie ptal.
The instructions talk about forking, so what will a broker project have to do when the sdk gets updated? Or is most of the work in the 3 areas identified?
How hard would it be to make the sdk vendorable so people don't have to fork it? The fact that there is generated code and a Makefile suggests it'll be difficult to make this vendorable.
@jmrodri yeah you're reading my mind, this model feels flawed but i'm not sure yet what would be better. vendoring isn't realistic unless we basically make the whole thing more of a pluggable model instead of the current "insert your code here" model.
We sort of did a plugin for our repositories. We have one to talk to dockerhub, one to talk to a mock server, and one that is sourced by a file. And we will probably have one for the RH registry. I'll look to see how "pluggable" it is.
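To make the idea concrete, a pluggable registry layer could look roughly like the sketch below. Every name here (Adapter, Register, AggregateCatalog, the Service struct) is hypothetical, not something from the SDK or from our broker; it is only meant to show the shape of a plugin model where the broker core never knows which backend it is talking to.

```go
// Hypothetical sketch of a pluggable registry layer; not the SDK's actual API.
package registry

// Adapter abstracts where service definitions come from, so the broker core
// never needs to know whether it is talking to dockerhub, a mock server,
// a file on disk, or the RH registry.
type Adapter interface {
	// Name identifies the adapter, e.g. "dockerhub", "mock", "file".
	Name() string
	// Catalog returns the services this adapter can offer.
	Catalog() ([]Service, error)
}

// Service is a deliberately minimal stand-in for an OSB service definition.
type Service struct {
	ID          string
	DisplayName string
	Plans       []string
}

// adapters registered at startup; a real implementation might use
// configuration or Go plugins instead of an in-memory slice.
var adapters []Adapter

// Register adds an adapter to the set the broker will aggregate.
func Register(a Adapter) { adapters = append(adapters, a) }

// AggregateCatalog merges the catalogs of every registered adapter.
func AggregateCatalog() ([]Service, error) {
	var all []Service
	for _, a := range adapters {
		svcs, err := a.Catalog()
		if err != nil {
			return nil, err
		}
		all = append(all, svcs...)
	}
	return all, nil
}
```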
Hey guys, as someone who discovered this project accidentally but is currently interested in and working on service brokers, I will attempt to offer my naive feedback on what I want as a developer of a service broker (I have created a couple of issues but will repeat some of it here) and also add some questions about this project.
Firstly, it is great to see effort being put into helping service broker developers get started with a broker running on OpenShift/Kubernetes, and I am eager to contribute.
This project was particularly interesting as it helped me understand the api server controller setup used by Kubernetes, but what isn't clear is why I need an api server controller setup to create a broker. Are there strong reasons why a developer would want to use this kind of setup over something more simple (closer to what is in the ups broker)?
What I believe to be really useful to authors, and something this project does, is the setup/generation of the OSB API endpoints, the type definitions encapsulating the different requests the service catalog can make to it, and the templates to get it up and running. Additionally, I think it would be useful to have a set of catalog objects and a script that could be used to test your new broker setup via the catalog rather than just hitting the broker directly.
My use case is that I want to run a broker in OpenShift that also creates and manages the services in OpenShift. Do you guys imagine this is a common use case, or am I an edge case? One of the things I have found tricky doing the above is using and vendoring the OpenShift client. I know this is on the cards https://trello.com/c/PTDrY0GF/794-13-provide-go-client-similar-to-kubernetes , so I won't say anything further on it.
what isn't clear is why I need an api server controller setup to create a broker. Are there strong reasons why a developer would want to use this kind of setup over something more simple (closer to what is in the ups broker)?
the motivation to have the controller as part of the design was twofold:
1) To make it easy to have async provision flows
2) To follow the standard "eventually consistent" pattern of k8s, which relies on controllers to reconcile desired state with current state
But strictly speaking from a broker api perspective, no, it is not necessary, and you are welcome to delete the controller entirely and have the provision.go implementation do all the work. You will also need to re-implement lastoperation.go to properly report the current state of the provision operation.
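As a rough illustration of that synchronous approach (the handler, response type, and createBackingService helper below are hypothetical placeholders, not the SDK's actual provision.go types), the work would happen inline and the response would only be written once provisioning has finished:

```go
// Hypothetical sketch of a synchronous provision handler. The types and
// function names are placeholders, not the SDK's actual provision.go API.
package broker

import (
	"encoding/json"
	"net/http"
)

type provisionResponse struct {
	DashboardURL string `json:"dashboard_url,omitempty"`
}

// provision does all the work inline instead of handing off to a controller.
// Per the OSB API, returning 201 Created marks the provision as synchronous,
// so the platform does not need to poll last_operation afterwards.
func provision(w http.ResponseWriter, r *http.Request) {
	if err := createBackingService(r); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(provisionResponse{})
}

// createBackingService stands in for whatever your broker actually provisions.
func createBackingService(r *http.Request) error { return nil }
```

With this shape, lastoperation.go becomes nearly trivial, since any instance that exists is by definition fully provisioned.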
Additionally, I think it would be useful to have a set of catalog objects and a script that could be used to test your new broker setup via the catalog rather than just hitting the broker directly.
The broker SDK does provide a set of catalog objects (hardcoded currently). As for a script to test it, well there are the existing test-scripts that curl the broker api endpoints (though there is certainly room for improvement in terms of making those scripts more useful as a true test framework).
Personally I think tests that go directly against the broker apis are more useful than forcing people to stand up/configure a service catalog to test their broker, though I can certainly see a desire for both (during iterative development you probably just want to hit the broker, but in the end you probably want some confirmation that the service catalog can properly communicate with your broker).
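For example, a direct test against a running broker can be as small as the sketch below; the URL, credentials, and API version header value are placeholder assumptions, not values taken from the SDK's test scripts.

```go
// Rough sketch of hitting a broker's OSB catalog endpoint directly.
// The URL, credentials, and version header value are assumptions.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://localhost:8080/v2/catalog", nil)
	if err != nil {
		panic(err)
	}
	// The Open Service Broker API expects a version header on every request.
	req.Header.Set("X-Broker-API-Version", "2.11")
	req.SetBasicAuth("user", "password") // placeholder credentials

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```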
I suggest you create a separate issue for "instructions/tests for running the service catalog in front of the broker SDK" and we'll track it there.
My use case is that I want to run a broker in OpenShift that also creates and manages the services in OpenShift. Do you guys imagine this is a common use case, or am I an edge case? One of the things I have found tricky doing the above is using and vendoring the OpenShift client.
I think it's a relatively common use case. We're implementing a template broker ourselves which will instantiate openshift templates on behalf of the user, so two things:
1) maybe your use case can just leverage that instead of creating your own broker (define your service offering in an openshift template, make it available via the template broker, done. This assumes you want your resources to be defined in the requesting user's project, and not some other project).
2) that would be an example of a broker that creates openshift/k8s resources as part of a provision.
However, as to your concern about vendoring the k8s client... the broker SDK already has the k8s client vendored in because it acts as a k8s client (against itself) to create the state resources it manages. So you should already have what you need to make calls to the master api server programmatically.
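A minimal sketch of what that looks like, assuming in-cluster config and a recent client-go (older vendored versions use a Create call without the context and options arguments):

```go
// Minimal sketch of calling the master API with the vendored client-go.
// Assumes in-cluster config and a recent client-go version.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Create a ConfigMap as a stand-in for whatever state the broker needs.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "broker-example"},
		Data:       map[string]string{"provisioned": "true"},
	}
	_, err = client.CoreV1().ConfigMaps("my-namespace").
		Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```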
@bparees Thank you for the response. This is really useful information, hopefully I am not annoying you with all these questions and comments.
I suggest you create a separate issue for "instructions/tests for running the service catalog in front of the broker SDK" and we'll track it there.
Will do
I think it's a relatively common use case. We're implementing a template broker ourselves which will instantiate openshift templates on behalf of the user.
Yes, I have seen this. In its current form it seems a little limited, particularly around binding and deprovisioning (I could be wrong here). It seemed to only support basic auth and had no support for multiple different bindings to the same service, e.g. creating a new database and user/password set in a mongo or mysql service. This makes sense, as it would mean knowing about the different implementation details and protocols of each service.
However as to your concern about vendoring the k8s client... the broker SDK already has the k8s client vendored in because it acts as a k8s client (against itself) to create the state resources it manages. So you should already have what you need to make calls to the master api server programatically.
Right, my concern was more about vendoring the OpenShift client that depends on Kubernetes. I don't think I can create something like a buildconfig, deploymentconfig or a route Object just using the Kubernetes client, right?
It seemed to only support basic auth and had no support for multiple different bindings to the same service, e.g. creating a new database and user/password set in a mongo or mysql service.
The basic auth bit is a limitation of the service catalog api today. The template broker is actually protected by openshift auth, but the service catalog doesn't speak that today, so it's a bit of a work in progress. But yes, it's not intended to allow for multiple bindings w/ unique credentials for each binding, so if that's part of your use case, it's not going to be a good fit.
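For what it's worth, per-binding credentials are something a custom broker can handle in its bind implementation. The sketch below is purely hypothetical (none of the names come from the SDK) and only illustrates minting a fresh user/password pair for each binding:

```go
// Hypothetical sketch of a bind handler that mints unique credentials per
// binding; illustrative only, not the SDK's bind implementation.
package broker

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"net/http"
)

type bindResponse struct {
	Credentials map[string]string `json:"credentials"`
}

// bind creates a new user/password pair for every binding, so each consumer
// of the same service instance gets its own credentials.
func bind(w http.ResponseWriter, r *http.Request) {
	user := "u_" + randomHex(8)
	pass := randomHex(16)

	// createDatabaseUser stands in for running CREATE USER / GRANT against
	// the backing mongo or mysql instance.
	if err := createDatabaseUser(user, pass); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	w.Header().Set("Content-Type", "application/json")
	w.WriteHeader(http.StatusCreated)
	json.NewEncoder(w).Encode(bindResponse{
		Credentials: map[string]string{"username": user, "password": pass},
	})
}

func randomHex(n int) string {
	b := make([]byte, n)
	rand.Read(b)
	return hex.EncodeToString(b)
}

func createDatabaseUser(user, pass string) error { return nil }
```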
I don't think I can create something like a buildconfig, deploymentconfig or a route Object just using the Kubernetes client, right?
Yes, that's true. I think we'll have to decide, as use cases evolve, what minimal set of libraries the broker SDK should provide out of the box.
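In the meantime, one possible workaround, assuming the vendored client-go ships the dynamic client and unstructured packages and the cluster serves the route.openshift.io/v1 API, is to create OpenShift resources as unstructured objects so the typed OpenShift client isn't needed at all; a sketch:

```go
// Sketch of creating an OpenShift Route without the typed OpenShift client,
// using client-go's dynamic client and an unstructured object. Assumes the
// vendored client-go provides these packages and the cluster exposes
// route.openshift.io/v1.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	routeGVR := schema.GroupVersionResource{
		Group: "route.openshift.io", Version: "v1", Resource: "routes",
	}
	route := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "route.openshift.io/v1",
		"kind":       "Route",
		"metadata":   map[string]interface{}{"name": "my-service"},
		"spec": map[string]interface{}{
			"to": map[string]interface{}{"kind": "Service", "name": "my-service"},
		},
	}}

	_, err = dyn.Resource(routeGVR).Namespace("my-namespace").
		Create(context.TODO(), route, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}
```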
One of the things I have found tricky doing the above is using and vendoring the OpenShift client. I know this is on the cards https://trello.com/c/PTDrY0GF/794-13-provide-go-client-similar-to-kubernetes , so I won't say anything further on it.
Yeah, we need an openshift/client-go. It's in our plan, for the reasons you stated.
@bparees travis failure:
hack/fork-rename.sh: appears to be missing "set -o errexit"
@pmorie fixed.
also minor cleanup of controller implementation
fixes https://github.com/openshift/open-service-broker-sdk/issues/17