kubernetes-sigs / kubebuilder

Kubebuilder - SDK for building Kubernetes APIs using CRDs
http://book.kubebuilder.io
Apache License 2.0

Come up with webhook developer workflow to test it locally #400

mengqiy closed this issue 3 years ago

mengqiy commented 5 years ago

There are at least 2 possible solutions:

Document the dev workflow.

mengqiy commented 5 years ago

Example proxy services: https://serveo.net/ and https://ngrok.com/

mohnishkodnani commented 5 years ago

I have a minikube running locally, and I have a mutating webhook that runs before pod creation. I get the error `Internal error occurred: failed calling admission webhook https://webhook-server-service.default.svc:443/mutating-create-update-pods?timeout=30s: dial tcp 10.104.245.128:443: getsockopt: connection refused`. The IP listed is the clusterIP of the service. I have the HTTP server running in the GoLand IDE.

mengqiy commented 5 years ago

I guess this is how it happened: you already have a webhook configuration installed which mutates pods. You can run `kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io` to check. When you bring up the webhook server, it tries to create pods, which will be blocked by the admission webhook if the `failurePolicy` field is `Fail` in the mutatingwebhookconfigurations.

One thing you can do to fix this is to use a `namespaceSelector` so the webhook does not apply to the namespace where the webhook server runs.
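A hedged sketch of what that exclusion can look like (the label key, namespace name, and webhook name are illustrative, not anything kubebuilder generates):

```yaml
# First, label the namespace the webhook server runs in, e.g.:
#   kubectl label namespace webhook-system webhook-exempt=true
# Then have the webhook skip any namespace carrying that label:
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
webhooks:
- name: mutating-create-update-pods.example.com  # illustrative name
  namespaceSelector:
    matchExpressions:
    - key: webhook-exempt
      operator: DoesNotExist
  # ... rest of the webhook entry (clientConfig, rules, failurePolicy, ...)
```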

mohnishkodnani commented 5 years ago

Can you tell me where the hooks are for the MutatingWebhookConfiguration object?

mengqiy commented 5 years ago

> Can you tell me where the hooks are for the MutatingWebhookConfiguration object?

https://github.com/kubernetes/api/blob/d04500c8c3dda9c980b668c57abc2ca61efcf5c4/admissionregistration/v1beta1/types.go#L113-L124

ukclivecox commented 5 years ago

Is there an initial workflow for running with Minikube for this yet? It would be good to clarify what is required and which steps are manual. Happy to help.

fejta-bot commented 5 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

alexeldeib commented 4 years ago

/reopen

k8s-ci-robot commented 4 years ago

@alexeldeib: Reopened this issue.

In response to [this](https://github.com/kubernetes-sigs/kubebuilder/issues/400#issuecomment-616208504):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

mengqiy commented 4 years ago

With https://github.com/kubernetes-sigs/controller-runtime/pull/787 released, it should be easier to achieve this now.

alexeldeib commented 4 years ago

I tried option 2 you suggested above, using inlets. It worked pretty well, but I haven't had a chance yet to try to generalize/automate an example.

fejta-bot commented 4 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

luisdavim commented 4 years ago

Is there an example how to use https://github.com/kubernetes-sigs/controller-runtime/pull/787 ?

luisdavim commented 4 years ago

/remove-lifecycle rotten

kensipe commented 4 years ago

@mengqiy I just wrote up what we are doing for our webhook development workflow. perhaps it helps: https://kudo.dev/blog/blog-2020-07-10-webhook-development.html

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

camilamacedo86 commented 3 years ago

It seems that PR https://github.com/kubernetes-sigs/kubebuilder/pull/1710 would also solve this. @mengqiy, @alexeldeib, @luisdavim, could you please help with the review of this PR and let us know if it can close this issue as well? If not, what should be done here in your POV?

camilamacedo86 commented 3 years ago

I hope it is fine that we close this one, since #1710 was merged. However, please feel free to re-open this one or raise a new issue; if you do, could you please provide a detailed description of what is still expected to achieve this goal?

AmFlint commented 2 years ago

Hello everyone.

For people looking to set up such a workflow in the future, here are the steps I went through to set it up successfully (Controller + Webhooks running locally, and API Server able to reach the webhook for mutation/validation).

First, let me explain how it works:

When using a webhook to mutate/validate a custom resource, the API server needs to send HTTPS requests to the webhook's endpoints when a resource is created/updated/deleted. This means:

  1. We need network connectivity from API Server to the Webhook endpoint.
  2. We need to provide TLS certificates to the webhook for the HTTPS communication.

For point 1, I suggest using KinD (Kubernetes in Docker) for local operator development. With KinD, the K8s control plane (and thus the API server) runs inside a Docker container (kind-control-plane), so it can reach your local development environment (where your operator + webhooks are running when you develop locally):

So, we'll be able to configure the API Server to send requests to https://host.docker.internal:9443/<webhook-endpoint> on MacOS and https://172.17.0.1:9443/<webhook-endpoint> on Linux.
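The right address can be picked automatically; a small sketch (the `WEBHOOK_HOST` variable name is mine, not anything kubebuilder defines):

```shell
# Choose the address a KinD container can use to reach the host machine
case "$(uname -s)" in
  Darwin) WEBHOOK_HOST=host.docker.internal ;;  # Docker Desktop on macOS
  *)      WEBHOOK_HOST=172.17.0.1 ;;            # default docker0 bridge IP on Linux
esac
echo "webhook URL: https://${WEBHOOK_HOST}:9443/<webhook-endpoint>"
```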

Now, for point 2, it is possible to use self-signed certificates, which are ideal for local development since they're easy to generate and set up. In order for the API server to accept self-signed certificates (signed by an unknown authority), we'll need to provide the CA that signed the certificate/key pair to the API server.

So, here are the steps to do it:

  1. Generate a CA with a local CLI tool, then generate the self-signed certificate/key pair
  2. Configure the K8s WebhookConfiguration resources for your Operator to use your local webhook endpoint
  3. Add a few lines of code in the operator/webhook setup to use the certificates we generated

1. Generate a CA + TLS certificate/key pair

For this, you can use whichever tool you prefer, but for simplicity and a zero-configuration experience, I like to use mkcert, so my examples will use mkcert.

# run these commands inside your operator project directory

# first, create a directory in which we'll create the certificates
mkdir -p certs

# export the CAROOT env variable, used by the mkcert tool to generate CA and certs
export CAROOT=$(pwd)/certs

# then, install a new CA
# the following command will create 2 files rootCA.pem and rootCA-key.pem
mkcert -install

# then, generate SSL certificates
# here, we're creating certificates valid for both "host.docker.internal" for MacOS and "172.17.0.1" for Linux
# and put them inside the certs/tls.crt and certs/tls.key files (by default the operator/webhook will look for certificates with this naming convention)
mkcert -cert-file=$CAROOT/tls.crt -key-file=$CAROOT/tls.key host.docker.internal 172.17.0.1

Now, you should have a directory certs with rootCA.pem (the PEM-encoded CA that we'll need later on), rootCA-key.pem, tls.crt and tls.key (the key pair needed for the webhook locally).
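If mkcert isn't available, a roughly equivalent set of files can be produced with plain openssl. This is a sketch following the same file layout as above; the subject names are arbitrary placeholders:

```shell
mkdir -p certs

# Create a local CA (stand-ins for mkcert's rootCA.pem / rootCA-key.pem)
openssl req -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
  -keyout certs/rootCA-key.pem -out certs/rootCA.pem \
  -subj "/CN=local-webhook-ca"

# Issue a serving certificate with the names/IP the API server will dial
printf "subjectAltName=DNS:host.docker.internal,IP:172.17.0.1\n" > certs/san.ext
openssl req -newkey rsa:2048 -nodes \
  -keyout certs/tls.key -out certs/tls.csr -subj "/CN=local-webhook"
openssl x509 -req -sha256 -days 365 -in certs/tls.csr \
  -CA certs/rootCA.pem -CAkey certs/rootCA-key.pem -CAcreateserial \
  -extfile certs/san.ext -out certs/tls.crt

# Sanity check: the serving cert should chain to the CA
openssl verify -CAfile certs/rootCA.pem certs/tls.crt
```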

2. Configure the WebhookConfiguration resources in K8s

Now, we want to create the WebhookConfiguration K8s resources, to tell the API Server how to communicate with the Webhook running locally.

You'll need to copy the rootCA.pem file and encode it to base64; you can use the following commands:

# for MacOS
cat certs/rootCA.pem | base64

# for Linux
cat certs/rootCA.pem | base64 -w 0

Copy the output of this command, then generate your webhook configuration resources. Here's an example with the mutating webhook, you'll have to do the same for other webhooks.

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: mutating-webhook-configuration
webhooks:
- admissionReviewVersions:
  - v1
  - v1beta1
  # -- THIS IS THE IMPORTANT PART --
  # Remove clientConfig.service and instead set clientConfig.caBundle
  # (paste your base64-encoded CA) and clientConfig.url
  clientConfig:
    caBundle: <base64-encoded-CA>
    # on macOS use host.docker.internal; on Linux use 172.17.0.1
    url: https://host.docker.internal:9443/<webhook-endpoint>
  # ...... rest of the config .......

As you can see here, we're removing clientConfig.service (we're not using a k8s Service, since the webhook runs locally, not in the cluster) and instead providing a url that points to our local endpoint. We also need to provide the base64-encoded CA file in the clientConfig.caBundle property.

Be careful: if you write this config to config/webhook, it will get overwritten every time you run make. In my project, I created another config directory, config/local, with a custom kustomization.yaml file that imports the webhook and CRD manifests and applies patches to the webhook manifests. Then I added a make target that runs `kustomize build config/local | kubectl apply -f -`.

3. Configure the Operator/webhook code to use the certificates

Now, we need to run our webhook locally, but tell it to use the certificates we've generated. To do this we can simply add the following lines of code:

// In the function where your webhooks are set up, you need to configure the
// webhook server on the manager.
//
// Here, the certDirectory variable is set from an environment variable in my
// project (os.Getenv("WEBHOOK_CERT_DIRECTORY")); this allows us to provide the
// environment variable in dev, but not in production, where the webhook runs
// in the cluster.
func SetupWebhookWithManager(mgr ctrl.Manager) error {
	// certDirectory is read from the environment, as explained above
	if certDirectory != "" {
		whs := mgr.GetWebhookServer()
		whs.CertDir = certDirectory
	}

	// ... rest of the code to register the webhook with the manager
	return ctrl.NewWebhookManagedBy(mgr).
		For(...).
		Complete()
}

Now, if you've followed these steps correctly, you can just run:

make run

and you should be able to apply resources to your KinD cluster, and receive the webhook requests locally.

Conclusion

I think it's important that when developing locally, we emulate the whole environment. The kubebuilder book (tutorial) advises disabling the webhook for local development, which I don't think is a good practice.

I would be happy to update the docs with an explanation of how to set this up locally, and to update the kubebuilder tutorial or the project in general to include make targets that set everything up cleanly.

zalsader commented 1 year ago

I added the following to my Makefile, which allows me to run the webhooks without complaints about missing certs.

CERTSDIR=/tmp/k8s-webhook-server/serving-certs
.PHONY: generate-certs
generate-certs: ## Generates the certs required to run webhooks locally
    mkdir -p $(CERTSDIR)
    cd $(CERTSDIR) && \
        openssl genrsa 2048 > tls.key && \
        openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt -subj "/C=XX"

This allows you to do

make generate-certs

Before running

make run
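To sanity-check what that target produced, the key and certificate can be compared. This is a sketch that regenerates the same self-signed pair (using the paths from the Makefile above) and then checks that the two files carry the same public key:

```shell
CERTSDIR=/tmp/k8s-webhook-server/serving-certs
mkdir -p "$CERTSDIR"

# Same commands as the make target above
openssl genrsa 2048 > "$CERTSDIR/tls.key"
openssl req -new -x509 -nodes -sha256 -days 365 \
  -key "$CERTSDIR/tls.key" -out "$CERTSDIR/tls.crt" -subj "/C=XX"

# The certificate and key should report the same RSA modulus
openssl x509 -noout -modulus -in "$CERTSDIR/tls.crt"
openssl rsa  -noout -modulus -in "$CERTSDIR/tls.key"
```

If the two moduli match, the webhook server will find a usable cert/key pair in its default serving-certs directory.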

tpoxa commented 1 year ago

@AmFlint Thanks for the tips. It's the only useful information I could find regarding the subject.

> In my project, I created another config directory config/local, where I've created a custom kustomization.yaml file

Any chance you could publish that kustomization file? I need to solve the problem with local webhooks quickly. Thank you.

AmFlint commented 1 year ago

Hello @tpoxa, here is the kustomization.yaml:

# Adds namespace to all resources.
namespace: <namespace>

# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
namePrefix: <name-prefix>

bases:
  - ../crd
  # crd/kustomization.yaml
  - ../webhook
  # you might not need certmanager, that's what I use in my local config
  - ../certmanager

patchesJson6902:
  - target:
      group: admissionregistration.k8s.io
      version: v1
      kind: MutatingWebhookConfiguration
      name: mutating-webhook-configuration
    path: mutating_webhook_patch.yaml
  - target:
      group: admissionregistration.k8s.io
      version: v1
      kind: ValidatingWebhookConfiguration
      name: validating-webhook-configuration
    path: validating_webhook_patch.yaml
  # We also have conversion webhooks for CRDs when using multi-version APIs if you need, I removed it from this snippet since you might not need it

validating_webhook_patch.yaml:

---
- op: replace
  path: /webhooks/0/clientConfig
  value:
    caBundle: <your-ca-bundle>
    # here, I'm assuming you're on macOS and using KinD, so the host where your
    # webhook runs is reachable via host.docker.internal; if you're on Linux,
    # replace it with 172.17.0.1
    url: https://host.docker.internal:9443/<your-endpoint>

mutating_webhook_patch.yaml:

---
# THIS FILE IS AUTOMATICALLY GENERATED BY THE MAKE TARGET "make configure-local-webhook"
- op: replace
  path: /webhooks/0/clientConfig
  value:
    caBundle: <ca-bundle>
    # same as above, replace host.docker.internal if on Linux
    url: https://host.docker.internal:9443/<your-endpoint>

To make this work, the kube API server needs to be able to reach the host running the webhook, so it works if you run your dev environment in KinD and run your webhook on your laptop with `go run`, for example.

On my side, these *_patch.yaml files are generated by a make command that checks the OS and adds the right hostname (host.docker.internal or the Linux bridge IP), plus the CA bundle from the previously generated CA/certs.
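A sketch of what such a generation step can look like. The bootstrap lines that create a dummy CA are only there so the snippet runs standalone; in the real workflow the CA already exists from the earlier steps, and the endpoint path is a placeholder:

```shell
#!/bin/sh
set -e
mkdir -p config/local certs

# Standalone bootstrap only: reuse the real rootCA.pem if it already exists
[ -f certs/rootCA.pem ] || openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=dev-ca" -keyout certs/rootCA-key.pem -out certs/rootCA.pem

# Pick the host the API server can dial, depending on the OS
case "$(uname -s)" in
  Darwin) WEBHOOK_HOST=host.docker.internal ;;
  *)      WEBHOOK_HOST=172.17.0.1 ;;
esac

# Base64-encode the CA without line wrapping (portable across GNU/BSD base64)
CA_BUNDLE=$(base64 < certs/rootCA.pem | tr -d '\n')

# Emit the JSON6902 patch consumed by config/local/kustomization.yaml
cat > config/local/mutating_webhook_patch.yaml <<EOF
---
- op: replace
  path: /webhooks/0/clientConfig
  value:
    caBundle: ${CA_BUNDLE}
    url: https://${WEBHOOK_HOST}:9443/<your-endpoint>
EOF
```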

Hope this helps!

tpoxa commented 1 year ago

Thanks @AmFlint, it helped. I finally got webhooks working locally! BTW, I am using minikube on Mac, so my URLs look something like this: https://host.minikube.internal:9443/<endpoint>

weidaolee commented 8 months ago

Thank you @zalsader so much! This is the easiest way to test webhooks locally. I believe it's the best approach for minimal testing.

I added the following to my MAKEFILE, which allows me to run the webhooks without it complaining about missing certs.

CERTSDIR=/tmp/k8s-webhook-server/serving-certs
.PHONY: generate-certs
generate-certs: ## Generates the certs required to run webhooks locally
  mkdir -p $(CERTSDIR)
  cd $(CERTSDIR) && \
      openssl genrsa 2048 > tls.key && \
      openssl req -new -x509 -nodes -sha256 -days 365 -key tls.key -out tls.crt -subj "/C=XX"

This allows you to do

make generate-certs

Before running

make run