@samanthakem can you take this issue?
Yes @pwittrock. Thank you!
Excellent. Reach out if you need help.
@samanthakem Any updates?
@pwittrock sorry for the delay, yes... in fact there are some follow-up questions I should have asked earlier.
Where would the `certificates` folder be located within the k8s context?
Would you like `kubectl alpha create certificates` to abstract all three of the mentioned commands at once, or would it be something like `kubectl alpha create certificates X`, `kubectl alpha create certificates Y`, `kubectl alpha create certificates Z` for each?
Lemme know if you have any questions about what I just said.
what is the intended use here?
@liggitt from my understanding, it is supposed to make the process of creating certs more automatic... read this. Let me know if you can answer the questions I asked above. Thanks in advance!
I'm not sure cert-generating functionality belongs in kubectl...
@liggitt it will probably be in `kubernetes/common` as suggested by @pwittrock, but the functionality will be used by running `kubectl alpha create certificates`. Let's wait and see if he gets back to us! :smile: What do you have in mind?
@liggitt Generating certs is necessary for installing aggregated API servers and is something we need common infrastructure for. Apimachinery might be a better spot for these, but there is precedent for creating resource config in kubectl (e.g. `kubectl create deployment`).
cc @droot
> there is precedent for creating resource config in kubectl (e.g. `kubectl create deployment`)
API resources sure, but not non-API artifacts, right?
> API resources sure, but not non-API artifacts, right?

`apiregistration.k8s.io/v1beta1/APIService` is a resource, right?
sure... kubectl could consume a CA to produce that resource (like `create configmap --from-file=...`), but producing the CA seems out of scope
> producing the CA seems out of scope
Out of scope for the Kubernetes project or out of scope for this repo? Why wouldn't we do this for the user? It isn't terribly complicated for us to build but greatly improves the UX.
> kubectl could consume a CA to produce that resource (like `create configmap --from-file=...`)

`create configmap` also supports `--from-literal=key1=config1`
sure... in both cases, kubectl is not creating the value, it is consuming one provided to it. Embedding a set of CA tools into kubectl is extremely likely to creep in scope. If we make the supported options minimal, they won't support production use cases. If we only support toy use cases, there's no good ramp from "getting started" flows to "run in production" flows. Every case of this I've seen results in expansion of tooling to add more options, and I don't think we want a full-blown CA command subtree in kubectl proper.
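To make the consume-vs-produce distinction concrete, here is a minimal sketch (stdlib Go only; the APIService name, group, and `ca.crt` path are invented for illustration) of reading a pre-existing CA bundle and emitting an APIService manifest, analogous to how `create configmap --from-file` consumes a file it did not create:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

func main() {
	// Consume a CA produced elsewhere; kubectl never generates the value.
	caPEM, err := os.ReadFile("ca.crt") // hypothetical input path
	if err != nil {
		panic(err)
	}
	// APIService carries the CA base64-encoded in spec.caBundle.
	fmt.Printf(`apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.example.k8s.io
spec:
  group: example.k8s.io
  version: v1beta1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: example-apiserver
    namespace: default
  caBundle: %s
`, base64.StdEncoding.EncodeToString(caPEM))
}
```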
> I don't think we want a full-blown CA command subtree in kubectl proper.

Let's make it a plugin.
We already need to support this for production use cases, and had to build this for installing Service Catalog. We might as well solve it in one place, instead of making everyone come up with a bespoke solution. Folks can always choose not to use it if it doesn't work for them. That is the reality of developing porcelain.
I encountered the need to do this just now, also while trying to install Service Catalog. I'd much rather we fixed APIService to no longer require the `caBundle` inline in the actual k8s resource (i.e. make it possible to separate the declaration of intent from the secret material, so the latter can be generated on initial install by some sort of in-cluster controller).
I mention this here only because I suspect the real fix to this issue is to rephrase the question and remove the originating need. This of course assumes we don't have a whole list of other use cases that also want CA certs (and I believe that's the case).
Issues go stale after 90d of inactivity.
Mark the issue as fresh with `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with `/close`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
/remove-lifecycle rotten
This still looks relevant.
@pwittrock Where are we on this issue now?
Rotten issues close after 30d of inactivity.
Reopen the issue with `/reopen`.
Mark the issue as fresh with `/remove-lifecycle rotten`.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen /remove-lifecycle rotten
What's the conclusion here? Looks like it got ignored and closed. Can we close it with proper remarks if there is no plan to add this support? I guess a link to documentation would work.
@shnkr: You can't reopen an issue/PR unless you authored it or you are a collaborator.
Time commitment: 30-60 hours (assuming knowledge of Go and crypto libraries)
We need to be able to generate CAs and self-signed certificates and provide them to the Kubernetes apiserver for creating certain resources.
Currently this is done by users, who need to run shell commands to generate the certificates. Kubectl should support generating canonical certs for users.
We should write a Go library that generates the CA and certs for users, exposed as `kubectl alpha create certificates`. A minimal sketch of what such a library might look like follows below.
Required research:
The library should duplicate the functionality of the following commands
The library should probably be in kubernetes/common
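As a rough illustration only (not an existing kubectl or apimachinery API; the function name and CA common name are made up), here is a self-contained sketch of the CA-generation core such a library might have, using only Go's standard `crypto/x509` machinery:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

// newSelfSignedCA is a hypothetical name for the core helper: it generates
// an ECDSA key pair and a self-signed CA certificate.
func newSelfSignedCA(commonName string) ([]byte, *ecdsa.PrivateKey, error) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		return nil, nil, err
	}
	// Random 128-bit serial number, as CAs are expected to have.
	serial, err := rand.Int(rand.Reader, new(big.Int).Lsh(big.NewInt(1), 128))
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber:          serial,
		Subject:               pkix.Name{CommonName: commonName},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// The template doubles as the parent, making the cert self-signed.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	caPEM, _, err := newSelfSignedCA("example-aggregated-apiserver-ca")
	if err != nil {
		panic(err)
	}
	os.Stdout.Write(caPEM)
}
```

A real library would also need to issue serving certs signed by this CA and handle key persistence, but the same `x509.CreateCertificate` call covers that case with a different template and parent.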