kubernetes-retired / service-catalog

Consume services in Kubernetes using the Open Service Broker API
https://svc-cat.io
Apache License 2.0

Implement Custom Resource Definitions storage backend #1088

Closed arschles closed 4 years ago

arschles commented 7 years ago

As part of the transition from Third Party Resources (TPRs) to Custom Resource Definitions (CRDs) (see #987 for more detail), we'll need to implement a CRD storage backend, similar to the TPR storage backend. It will likely be possible to copy much of the code from the TPR implementation (at /pkg/storage/tpr) to implement CRD storage, but it's important that we don't overwrite the TPR implementation so that we can allow either to be configured until we decide to deprecate and remove TPR support.
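
A rough sketch of what keeping both backends selectable could look like is below: a factory that hands back a `storage.Interface` implementation for either backend. The package layout, `Backend` type, and constructor names are hypothetical illustrations, not the actual service-catalog code.

```go
// Hypothetical sketch only: the TPR constructor stands in for the real code
// under /pkg/storage/tpr, and the CRD constructor does not exist yet.
package storage

import (
	"fmt"

	"k8s.io/apiserver/pkg/storage"
)

// Backend names the storage implementation the API server should use.
type Backend string

const (
	BackendTPR Backend = "tpr" // existing Third Party Resource backend
	BackendCRD Backend = "crd" // proposed Custom Resource Definition backend
)

// NewStorage selects a backend without removing the TPR implementation, so
// either one can be configured until TPR support is deprecated and removed.
func NewStorage(backend Backend) (storage.Interface, error) {
	switch backend {
	case BackendTPR:
		return newTPRStorage()
	case BackendCRD:
		return newCRDStorage()
	default:
		return nil, fmt.Errorf("unsupported storage backend %q", backend)
	}
}

// Placeholders for the per-backend constructors; the real TPR one would wrap
// the existing code, and the CRD one would be largely copied from it.
func newTPRStorage() (storage.Interface, error) { return nil, fmt.Errorf("not implemented in this sketch") }
func newCRDStorage() (storage.Interface, error) { return nil, fmt.Errorf("not implemented in this sketch") }
```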

In addition to the work to implement this storage backend, the API server will need to gain some configuration (via command line flags) to turn on CRD storage.
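
On the flag side, something like the following is one way the API server options could expose the switch; the `--storage-type` flag name, values, and option struct are illustrative assumptions, not the existing command-line surface.

```go
// Illustrative sketch: flag name, default, and option struct are assumptions.
package options

import (
	"fmt"

	"github.com/spf13/pflag"
)

const (
	storageTypeEtcd = "etcd" // dedicated etcd behind the aggregated API server
	storageTypeTPR  = "tpr"  // existing TPR-backed storage
	storageTypeCRD  = "crd"  // proposed CRD-backed storage
)

// StorageOptions holds the storage-related API server configuration.
type StorageOptions struct {
	StorageType string
}

// AddFlags registers the storage selection flag on the server's flag set.
func (o *StorageOptions) AddFlags(fs *pflag.FlagSet) {
	fs.StringVar(&o.StorageType, "storage-type", storageTypeEtcd,
		"Storage backend to use: etcd, tpr, or crd.")
}

// Validate rejects unknown backends before the server starts.
func (o *StorageOptions) Validate() error {
	switch o.StorageType {
	case storageTypeEtcd, storageTypeTPR, storageTypeCRD:
		return nil
	default:
		return fmt.Errorf("unknown --storage-type %q", o.StorageType)
	}
}
```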

cc/ @nilebox @mengqiy

ash2k commented 7 years ago

See https://github.com/atlassian/smith/pull/113 and https://github.com/atlassian/smith/pull/114 for inspiration :)

vaikas commented 7 years ago

https://kubernetes.io/docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/

nilebox commented 6 years ago

Closing the https://github.com/kubernetes-incubator/service-catalog/pull/1105 PR, as I think we need to revisit the requirements and decide whether we want to support CRDs in Service Catalog.

kibbles-n-bytes commented 6 years ago

@nilebox Is there anything in particular that blocks using CRDs as a backing for our custom API server? Are the versioning issues still a problem even in this case?

carolynvs commented 6 years ago

They just added support for Status sub-resources to CRDs but versioning isn't in yet.
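
For context, enabling the status subresource on a CRD comes down to setting `spec.subresources.status` on the CustomResourceDefinition. A minimal sketch using the apiextensions v1beta1 Go types of that era follows; the service-catalog-style group and kind are used purely as an example.

```go
// Sketch of a CRD with the status subresource enabled (Kubernetes 1.10+,
// behind the CustomResourceSubresources feature gate at the time).
package main

import (
	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func exampleCRD() *apiextv1beta1.CustomResourceDefinition {
	return &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "serviceinstances.servicecatalog.k8s.io"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "servicecatalog.k8s.io",
			Version: "v1beta1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural: "serviceinstances",
				Kind:   "ServiceInstance",
			},
			// With this set, /status is served as a separate subresource, so
			// spec and status updates can be handled and authorized independently.
			Subresources: &apiextv1beta1.CustomResourceSubresources{
				Status: &apiextv1beta1.CustomResourceSubresourceStatus{},
			},
		},
	}
}

func main() { _ = exampleCRD() }
```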

pmorie commented 6 years ago

Let's be clear here: are we talking about using CRDs as a backing store for our existing API server, or about replacing the API server with CRDs?

carolynvs commented 6 years ago

Oops, I was referring to what's supported if we moved to CRDs (the latter). Never mind! 😊

duglin commented 6 years ago

Well, it's a good question. I can't remember who from the main k/k team recommended it, but moving to CRDs instead of our own API server is an option we may want to discuss at the f2f.

carolynvs commented 6 years ago

It was Eric Tune.

fabiand commented 6 years ago

For the record, we (KubeVirt) also faced this question; we actually wrote our own API server before going back to CRDs. The primary reason was data storage for our custom API server. Today CRDs look promising: with validation, initializers, and admission controllers you can do quite a lot.

pmorie commented 6 years ago

If someone wants to discuss this topic at the f2f, they should do a detailed gap analysis first. We do a lot of things in our API that I am not at all sure are on the roadmap for CRDs, let alone currently achievable.

n3wscott commented 6 years ago

@pmorie we are interested in bringing up CRDs as a backing store for our existing API server. We want this so we can remove the dependency on a separate etcd instance.

duglin commented 6 years ago

For reference: https://docs.google.com/presentation/d/1IiKOIBbw7oaD4uZ-kNE-mA3cliFjx9FuFLd7pj-dj1Y/edit?ts=5a78bb75#slide=id.g30d931056f_0_12

nilebox commented 6 years ago

@n3wscott you can take a look at #1105, which had most of this working but required quite a lot of code and still had some issues (IIRC, watches in kubectl were dropping from time to time for some reason). I am not sure this is a good idea. The better way would be to support such storage out of the box in k8s.io/apimachinery or k8s.io/apiserver; otherwise it's a pain to have to maintain this code ourselves.

nilebox commented 6 years ago

We do a lot of things in our API that I am not at all sure are on the roadmap for CRD, let alone currently achievable.

Another aspect is that, with so many features in use (admission controllers, validation, etc.), CRDs might require as much code as the current approach (if not more), and possibly more complicated code.

So the only real benefit here is not having to manage a dedicated etcd (or share the core etcd directly).

n3wscott commented 6 years ago

I would like to work on this issue if no one else is. #dibs?

nilebox commented 6 years ago

@n3wscott sure, but I think it would be better to first discuss whether we want to bring it back, and if we do, what the best approach is.

Also, it would be nice to talk to the API machinery folks and check whether they have any new recommendations for this problem (e.g. there might be some "blob store" coming in the future, or they might want to support this use case in k8s.io/apiserver out of the box).

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 5 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/service-catalog/issues/1088#issuecomment-544254863):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mszostok commented 4 years ago

done by: https://github.com/kubernetes-sigs/service-catalog/issues/2633