Closed: arschles closed this issue 4 years ago.
See https://github.com/atlassian/smith/pull/113 and https://github.com/atlassian/smith/pull/114 for inspiration :)
Closing the https://github.com/kubernetes-incubator/service-catalog/pull/1105 PR as I think that we need to revisit the requirements and decide whether we want to support CRDs in Service Catalog
@nilebox Is there anything in particular that blocks using CRDs as a backing for our custom API server? Are the versioning issues still a problem even in this case?
They just added support for Status sub-resources to CRDs but versioning isn't in yet.
Let's be clear here: are we talking about using CRDs as a backing store for our existing API server, or replacing the API server with CRDs?
Oops, I was referring to what's supported if we moved to CRDs (the latter). Never mind! 😊
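For context on what the status subresource mentioned above provides, here is a rough Go sketch of the relevant slice of a CRD spec. The hand-rolled structs only mirror the apiextensions.k8s.io/v1beta1 shape for illustration, and the `servicecatalog.k8s.io` group name is a placeholder; a real program would use the k8s.io/apiextensions-apiserver types instead.

```go
// Sketch: what enabling the status subresource on a CRD looks like.
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type crdSubresources struct {
	// A non-nil (empty) Status object turns on the /status subresource,
	// so spec and status are updated and validated independently, as
	// they are for built-in resources.
	Status *struct{} `json:"status,omitempty"`
}

type crdSpec struct {
	Group        string           `json:"group"`
	Version      string           `json:"version"`
	Scope        string           `json:"scope"`
	Subresources *crdSubresources `json:"subresources,omitempty"`
}

// specJSON renders a hypothetical Service Catalog CRD spec fragment.
func specJSON() string {
	spec := crdSpec{
		Group:        "servicecatalog.k8s.io", // hypothetical group name
		Version:      "v1beta1",
		Scope:        "Namespaced",
		Subresources: &crdSubresources{Status: &struct{}{}},
	}
	out, err := json.MarshalIndent(spec, "", "  ")
	if err != nil {
		panic(err)
	}
	return string(out)
}

// hasStatusSubresource reports whether the rendered spec enables /status.
func hasStatusSubresource(s string) bool {
	return strings.Contains(s, `"subresources"`) &&
		strings.Contains(s, `"status": {}`)
}

func main() {
	fmt.Println(specJSON())
}
```

The `status: {}` stanza is the whole switch: once present, writes to the main resource ignore status changes, and writes to `/status` ignore spec changes.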
Well, it's a good question. I can't remember who from the main k/k team recommended it, but moving to use CRDs instead of our own API server is an option we may want to discuss at the f2f.
It was Eric Tune.
For the record we - KubeVirt - were also on this question - we actually wrote our own API server before going back to CRDs. The primary reason was data storage for our custom API server. Today CRDs look promising, with validation, initializers and admission controllers you can do quite a lot.
If someone wants to discuss this topic at the f2f, they should do a detailed gap analysis first. We do a lot of things in our API that I am not at all sure are on the roadmap for CRDs, let alone currently achievable.
@pmorie we are interested in bringing up CRDs as a backing store for our existing API server. We want this so we can remove the dependency on a separate etcd instance.
@n3wscott you can take a look at #1105, which had most things working but required quite a lot of code and still had some issues (IIRC, watches in kubectl were intermittently dropping for some reason).
I am not sure this is a good idea. The better way would be to support such storage out of the box in k8s.io/apimachinery or k8s.io/apiserver; otherwise it's a pain to have to maintain this code ourselves.
We do a lot of things in our API that I am not at all sure are on the roadmap for CRD, let alone currently achievable.
Another aspect: with so many features in play (admission controllers, validation, etc.), using CRDs might require as much code as the current approach, if not more, and possibly more complicated code.
So the only real benefit here is not having to manage a dedicated etcd (or share the core etcd directly).
I would like to work on this issue if no one else is. #dibs?
@n3wscott sure, but I think it would be better to discuss first whether we do want to bring it back, and if we do what's the best approach.
Also it would be nice to talk to the API machinery folks to check whether they have any new recommendations for this problem (e.g. there might be some "blob store" coming in the future, or they might want to support this use case in k8s.io/apiserver out of the box).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
As part of the transition from Third Party Resources (TPRs) to Custom Resource Definitions (CRDs) (see #987 for more detail), we'll need to implement a CRD storage backend, similar to the TPR storage backend. It will likely be possible to copy much of the code from the TPR implementation (at /pkg/storage/tpr) to implement CRD storage, but it's important that we don't overwrite the TPR implementation, so that either can be configured until we decide to deprecate and remove TPR support. In addition to the work to implement this storage backend, the API server will need to gain some configuration (via command-line flags) to turn on CRD storage.
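The flag-driven backend selection described above could be sketched roughly as follows. This is a minimal illustration, not the actual Service Catalog code: `Interface` is a drastically reduced stand-in for the real storage interface the API server programs against, and the `NewStorage` function and `--storage-type` flag names are hypothetical.

```go
// Sketch: selecting between TPR and CRD storage backends via a flag.
package main

import (
	"flag"
	"fmt"
)

// Interface is a stand-in for the generic storage interface; a real
// backend would implement create/get/list/watch against etcd, TPRs,
// or CRDs.
type Interface interface {
	Name() string
}

type tprStorage struct{}

func (tprStorage) Name() string { return "tpr" }

type crdStorage struct{}

func (crdStorage) Name() string { return "crd" }

// NewStorage picks a backend by name, mirroring how a --storage-type
// flag could be wired into server startup without removing the
// existing TPR implementation.
func NewStorage(storageType string) (Interface, error) {
	switch storageType {
	case "tpr":
		return tprStorage{}, nil
	case "crd":
		return crdStorage{}, nil
	default:
		return nil, fmt.Errorf("unknown storage type %q", storageType)
	}
}

func main() {
	storageType := flag.String("storage-type", "tpr",
		"backing store for the API server: tpr or crd")
	flag.Parse()
	s, err := NewStorage(*storageType)
	if err != nil {
		panic(err)
	}
	fmt.Println("using storage backend:", s.Name())
}
```

Keeping both backends behind one constructor is what lets TPR remain the default while CRD storage is evaluated, and makes the eventual TPR removal a matter of deleting one case.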
cc/ @nilebox @mengqiy
Tasks