Closed: raz-bn closed this issue 7 months ago
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-contributor-experience at kubernetes/community.
/close
@fejta-bot: Closing this issue.
/reopen
@rikatz: Reopened this issue.
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
/remove-lifecycle rotten
/lifecycle active
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
This was fixed in https://github.com/kubernetes-sigs/controller-tools/pull/824
/close
@sbueringer: Closing this issue.
Original issue description (raz-bn): Currently, it is only possible to generate a webhook manifest for a service running inside the cluster. However, I think it should also be possible to generate manifests for a webhook running outside the cluster by providing the Validating/MutatingWebhookConfiguration with a URL instead of a service. This use case can be handy when trying to run a local webhook while developing one. In order to achieve this with the current controller-gen, you need to make the following manual changes:
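For illustration, the requested output would use the standard `clientConfig.url` field of `admissionregistration.k8s.io` in place of the `service` reference. A minimal sketch (the webhook name, host, port, path, and target resources here are hypothetical, not taken from the issue):

```yaml
# Sketch of a URL-based webhook configuration for an out-of-cluster webhook.
# All names, the URL, and the rules below are placeholder assumptions.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-validating-webhook    # hypothetical name
webhooks:
- name: validate.example.com          # hypothetical name
  clientConfig:
    # url replaces the usual service: {name, namespace, path} reference;
    # it must be an https URL reachable from the API server.
    url: https://192.168.1.10:9443/validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE"]
    resources: ["pods"]
  admissionReviewVersions: ["v1"]
  sideEffects: None
```

In practice a `caBundle` trusted by the API server is also needed under `clientConfig`, since URL-based webhooks must serve TLS.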
```yaml
configurations:
- kustomizeconfig.yaml
```

```yaml
# The following manifests contain a self-signed issuer CR and a certificate CR.
# More document can be found at https://docs.cert-manager.io
# WARNING: Targets CertManager 0.11 check https://docs.cert-manager.io/en/latest/tasks/upgrading/index.html for
# breaking changes
apiVersion: cert-manager.io/v1alpha2
kind: Issuer
metadata:
  name: selfsigned-issuer
  namespace: system
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1alpha2
kind: Certificate
metadata:
  name: serving-cert  # this name should match the one appeared in kustomizeconfig.yaml
  namespace: system
spec:
  # $(SERVICE_NAME) and $(SERVICE_NAMESPACE) will be substituted by kustomize
  dnsNames:
```
```yaml
# Adds namespace to all resources.
namespace: sns-system

# Value of this field is prepended to the
# names of all resources, e.g. a deployment named
# "wordpress" becomes "alices-wordpress".
# Note that it should also match with the prefix (text before '-') of the namespace
# field above.
namePrefix: sns-

# Labels to add to all resources and selectors.
commonLabels:
  someName: someValue

bases:
# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'. 'WEBHOOK' components are required.
# [PROMETHEUS] To enable prometheus monitor, uncomment all sections with 'PROMETHEUS'.
- ../prometheus

patchesStrategicMerge:
# Protect the /metrics endpoint by putting it behind auth.
# If you want your controller-manager to expose the /metrics
# endpoint w/o any authn/z, please comment the following line.

# [WEBHOOK] To enable webhook, uncomment all the sections with [WEBHOOK] prefix including the one in
# crd/kustomization.yaml

# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER'.
# Uncomment 'CERTMANAGER' sections in crd/kustomization.yaml to enable the CA injection in the admission webhooks.
# 'CERTMANAGER' needs to be enabled to use ca injection

# the following config is for teaching kustomize how to do var substitution
vars:
# [CERTMANAGER] To enable cert-manager, uncomment all sections with 'CERTMANAGER' prefix.
- name: SERVICE_NAMESPACE # namespace of the service
  objref:
    kind: Service
    version: v1
    name: webhook-service
  fieldref:
    fieldpath: metadata.namespace
- name: SERVICE_NAME
  objref:
    kind: Service
    version: v1
    name: webhook-service
```
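These vars only take effect where kustomize is told substitution may occur. A `kustomizeconfig.yaml` of roughly this shape wires `$(SERVICE_NAME)` and `$(SERVICE_NAMESPACE)` into the Certificate fields; the exact paths below follow the usual kubebuilder cert-manager scaffold and are an assumption, not something stated in this thread:

```yaml
# Teaches kustomize where the $(SERVICE_NAME)/$(SERVICE_NAMESPACE) vars
# may be substituted. Paths are assumed from the standard kubebuilder
# certmanager scaffold; adjust if your Certificate uses different fields.
varReference:
- kind: Certificate
  group: cert-manager.io
  path: spec/commonName
- kind: Certificate
  group: cert-manager.io
  path: spec/dnsNames
```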