kubernetes / kubectl

Issue tracker and mirror of kubectl code
Apache License 2.0

Support generating CA and self signed certs #86

Closed pwittrock closed 4 years ago

pwittrock commented 6 years ago

Time commitment: 30-60 hours (assuming knowledge of Go and crypto libraries)

We need to be able to generate CAs and self-signed certificates and provide them to the Kubernetes apiserver for creating certain resources.

Currently, users must run shell commands themselves to generate the certificates. Kubectl should support generating canonical certs for users.

We should write a Go library that generates the CA and certs for users:

  1. Create a library that takes the subject and days as arguments (in a Go struct) and returns the six results in a data structure (as []byte).
  2. Add functions to the result struct to get the fields as base64-encoded []byte or string.
  3. Write a library that can take the result and write out files.
  4. Expose the library as a cobra command under `kubectl alpha create certificates`.
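A minimal sketch of what steps 1 and 2 might look like in Go. All names here (CertConfig, CertResult, Base64CACert) are illustrative, not part of any existing library, and the six []byte fields assume the six artifacts the openssl commands below would produce (CA key and cert, server key, cert, CSR, and CA serial):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// CertConfig carries the inputs from step 1: the certificate subject
// and the validity period in days.
type CertConfig struct {
	Subject string
	Days    int
}

// CertResult holds the six generated artifacts as raw PEM bytes (step 1).
type CertResult struct {
	CACert, CAKey         []byte
	ServerCert, ServerKey []byte
	ServerCSR, CASerial   []byte
}

// Base64CACert returns the CA certificate base64-encoded, per step 2;
// the other fields would get analogous accessors.
func (r *CertResult) Base64CACert() string {
	return base64.StdEncoding.EncodeToString(r.CACert)
}

func main() {
	r := &CertResult{CACert: []byte("PEM bytes would go here")}
	fmt.Println(r.Base64CACert())
}
```

Step 3 would then be a thin wrapper that writes each field to a file, and step 4 a cobra command that calls the library.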

Required research:

The library should duplicate the functionality of the following commands:

openssl req -x509 -newkey rsa:2048 -keyout certificates/apiserver_ca.key -out certificates/apiserver_ca.crt -days 365 -nodes -subj /C=/ST=/L=/O=/OU=/CN=test-certificate-authority

openssl req -out certificates/apiserver.csr -new -newkey rsa:2048 -nodes -keyout certificates/apiserver.key -subj /C=/ST=/L=/O=/OU=/CN=test.default.svc

openssl x509 -req -days 365 -in certificates/apiserver.csr -CA certificates/apiserver_ca.crt -CAkey certificates/apiserver_ca.key -CAcreateserial -out certificates/apiserver.crt
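Go's standard library can already reproduce these three openssl invocations without shelling out. The sketch below uses crypto/x509 and crypto/rsa, signing the server certificate directly in-process so the intermediate CSR file is unnecessary; the subjects and 365-day validity are taken from the commands above, and error handling is abbreviated:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"time"
)

// generateCerts mirrors the three openssl invocations: it creates a
// self-signed CA, then a server key pair whose cert is signed by that CA.
func generateCerts() (caCertPEM, srvCertPEM, srvKeyPEM []byte, err error) {
	// openssl req -x509 -newkey rsa:2048 ... -days 365 (self-signed CA)
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, nil, err
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "test-certificate-authority"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(0, 0, 365),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, nil, err
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		return nil, nil, nil, err
	}

	// openssl req -new -newkey rsa:2048 ... plus openssl x509 -req -CA ...
	// (the CSR step is implicit because we sign directly in-process)
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, nil, err
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "test.default.svc"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(0, 0, 365),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		return nil, nil, nil, err
	}

	// PEM-encode, matching the contents openssl would write to -out files.
	caCertPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	srvCertPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	srvKeyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(srvKey)})
	return caCertPEM, srvCertPEM, srvKeyPEM, nil
}

func main() {
	ca, cert, key, err := generateCerts()
	if err != nil {
		panic(err)
	}
	fmt.Println(len(ca) > 0, len(cert) > 0, len(key) > 0)
}
```

One behavioral difference worth noting: openssl with the given flags emits certificates with no key-usage extensions, while the sketch sets the usages a Kubernetes serving cert typically needs.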

The library should probably live in kubernetes/common.

pwittrock commented 6 years ago

@samanthakem can you take this issue?

samanthakem commented 6 years ago

Yes @pwittrock. Thank you!

pwittrock commented 6 years ago

Excellent. Reach out if you need help.

pwittrock commented 6 years ago

@samanthakem Any updates?

samanthakem commented 6 years ago

@pwittrock sorry for the delay, yes... in fact, there are some follow-up questions I should have asked before.

liggitt commented 6 years ago

what is the intended use here?

samanthakem commented 6 years ago

@liggitt from my understanding, it is supposed to make the process of creating certs more automatic... read this. Let me know if you can answer the questions I asked above. Thanks in advance!

liggitt commented 6 years ago

I'm not sure cert-generating functionality belongs in kubectl...

samanthakem commented 6 years ago

@liggitt it will probably be in kubernetes/common as suggested by @pwittrock, but the functionality will be used by running `kubectl alpha create certificates`. Let's wait to see if he gets back to us! :smile: What do you have in mind?

pwittrock commented 6 years ago

@liggitt Generating certs is necessary for installing aggregated API servers and is something we need common infrastructure for. Apimachinery might be a better spot for these, but there is precedent for creating resource config in kubectl (e.g. kubectl create deployment)

pwittrock commented 6 years ago

cc @droot

liggitt commented 6 years ago

> there is precedent for creating resource config in kubectl (e.g. kubectl create deployment)

API resources sure, but not non-API artifacts, right?

pwittrock commented 6 years ago

> API resources sure, but not non-API artifacts, right?

apiregistration.k8s.io/v1beta1/APIService is a resource, right?

liggitt commented 6 years ago

sure... kubectl could consume a CA to produce that resource (like create configmap --from-file=...), but producing the CA seems out of scope

pwittrock commented 6 years ago

> producing the CA seems out of scope

Out of scope for the Kubernetes project or out of scope for this repo? Why wouldn't we do this for the user? It isn't terribly complicated for us to build but greatly improves the UX.

pwittrock commented 6 years ago

> kubectl could consume a CA to produce that resource (like create configmap --from-file=...)

create configmap also supports --from-literal=key1=config1

liggitt commented 6 years ago

sure... in both cases, kubectl is not creating the value, it is consuming one provided to it. Embedding a set of CA tools into kubectl is extremely likely to creep in scope. If we make the supported options minimal, they won't support production use cases. If we only support toy use cases, there's no good ramp from "getting started" flows to "run in production" flows. Every case of this I've seen results in expansion of tooling to add more options, and I don't think we want a full-blown CA command subtree in kubectl proper.

pwittrock commented 6 years ago

> I don't think we want a full-blown CA command subtree in kubectl proper.

Let's make it a plugin.

pwittrock commented 6 years ago

We already need to support this for production use cases, and had to build this for installing Service Catalog. We might as well solve it in one place, instead of making everyone come up with a bespoke solution. Folks can always choose not to use it if it doesn't work for them. That is the reality of developing porcelain.

anguslees commented 6 years ago

I encountered the need to do this just now, also while trying to install Service Catalog. I'd much rather we fixed APIService to no longer require the caBundle inline in the actual k8s resource (i.e. make it possible to separate the declaration of intent from the secret material, so the latter can be generated at initial install by some sort of in-cluster controller).

I mention this here only because I suspect the real fix to this issue is to rephrase the question and remove the originating need. This of course assumes we don't have a whole list of other use cases that also want CA certs (and I believe that's the case).

fejta-bot commented 6 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle stale

fejta-bot commented 6 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/lifecycle rotten
/remove-lifecycle stale

nikhita commented 6 years ago

/remove-lifecycle rotten

This still looks relevant.

seans3 commented 6 years ago

@pwittrock Where are we on this issue now?

fejta-bot commented 5 years ago

/lifecycle stale

fejta-bot commented 5 years ago

/lifecycle rotten

seans3 commented 5 years ago

/remove-lifecycle rotten

fejta-bot commented 5 years ago

/lifecycle stale

seans3 commented 5 years ago

/remove-lifecycle stale

fejta-bot commented 5 years ago

/lifecycle stale

fejta-bot commented 5 years ago

/lifecycle rotten

seans3 commented 5 years ago

/remove-lifecycle rotten

fejta-bot commented 4 years ago

/lifecycle stale

fejta-bot commented 4 years ago

/lifecycle rotten

fejta-bot commented 4 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close

k8s-ci-robot commented 4 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes/kubectl/issues/86#issuecomment-578437938):

> Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

shnkr commented 11 months ago

/reopen
/remove-lifecycle rotten

What's the conclusion here? Looks like it got ignored and closed. Can we close it with proper remarks if there is no plan to add the support?

I guess a link to a documentation would work.

k8s-ci-robot commented 11 months ago

@shnkr: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes/kubectl/issues/86#issuecomment-1798562302):

> /reopen
> /remove-lifecycle rotten
>
> What's the conclusion here? Looks like it got ignored and closed. Can we close it with proper remarks if there is no plan to add the support?
>
> I guess a link to a documentation would work.

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.