kubernetes / enhancements

contextual logging #3077

Open · pohly opened this issue 2 years ago

pohly commented 2 years ago

Enhancement Description


Current configuration

https://github.com/kubernetes/kubernetes/blob/master/hack/logcheck.conf

Status

The following table counts log calls that need to be converted. The numbers for contextual logging include those for structured logging.

At this point, controllers could get converted to contextual logging or one of the components that was already converted to structured logging. If you want to pick one, ping @pohly on the #wg-structured-logging Slack channel. See structured and contextual logging migration instructions for guidance.
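
For orientation, here is a minimal sketch (illustrative only, not code from any of the listed components; the function and logger names are made up) of the three call styles that the table distinguishes, using k8s.io/klog/v2:

```go
package example

import (
	"context"

	"k8s.io/klog/v2"
)

// syncNode is a made-up helper; only the log calls matter here.
func syncNode(ctx context.Context, nodeName string) {
	// Unstructured call: counted under "Non-Structured Logging".
	klog.Infof("syncing node %s", nodeName)

	// Structured call: fine for structured logging, but still counted
	// under "Non-Contextual Logging" because it uses the global logger.
	klog.InfoS("Syncing node", "node", nodeName)

	// Contextual call: the logger is taken from the context, so the caller
	// can attach names and key/value pairs that appear in every message.
	logger := klog.FromContext(ctx)
	logger.Info("Syncing node", "node", nodeName)
}
```

A call only counts as contextual once the logger is obtained from the caller (via the context or an explicit parameter) instead of the global klog state.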

Besides migrating log calls, we also might have to migrate from APIs which don't support contextual logging to APIs which do:
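
As one illustration of that kind of API migration (this example is mine, not taken from the issue; the APIs shown are from k8s.io/apimachinery and k8s.io/klog/v2), a wait loop started via the context-aware variant can pick up the contextual logger, while the plain stop-channel variant cannot:

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog/v2"
)

func main() {
	logger := klog.Background().WithName("worker")
	ctx, cancel := context.WithTimeout(klog.NewContext(context.Background(), logger), time.Second)
	defer cancel()

	// Before: wait.Until(tick, 200*time.Millisecond, ctx.Done()) only accepts a
	// func() and therefore forces the callback to use the global logger.

	// After: the context-aware variant hands the context to the callback,
	// so it can retrieve the contextual logger.
	wait.UntilWithContext(ctx, func(ctx context.Context) {
		klog.FromContext(ctx).Info("tick")
	}, 200*time.Millisecond)
}
```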

From 2022-10-27 ~= Kubernetes 1.26

The focus was on converting kube-controller-manager. Of 1944 unstructured and/or non-contextual logging calls in pkg/controller and cmd/kube-controller-manager, 82% were converted to structured, contextual logging in Kubernetes 1.27.

Component | Non-Structured Logging | Non-Contextual Logging | Owner
------ | ------- | ------ | ----------------
pkg/controller/bootstrap | 15 | 28 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/113464
pkg/controller/certificates | 22 | 31 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/113994
pkg/controller/clusterroleaggregation | 2 | 2 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/113910
pkg/controller/cronjob | 1 | 44 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/113428
pkg/controller/daemon | 45 | 85 | @249043822, https://github.com/kubernetes/kubernetes/pull/113622
pkg/controller/deployment | 23 | 79 | @249043822, https://github.com/kubernetes/kubernetes/pull/113525
pkg/controller/disruption | 29 | 56 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/116021
pkg/controller/endpoint | 12 | 24 | lunhuijie (Slack)
pkg/controller/endpointslice | 22 | 36 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/115295
pkg/controller/endpointslicemirroring | 18 | 28 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/114982
pkg/controller/garbagecollector | 55 | 105 | @ncdc, https://github.com/kubernetes/kubernetes/pull/113471
pkg/controller/job | 12 | 36 | was: @sanwishe, https://github.com/kubernetes/kubernetes/pull/113576, now: @mengjiao-liu
pkg/controller/namespace | 30 | 55 | @yangjunmyfm192085, https://github.com/kubernetes/kubernetes/pull/113443
pkg/controller/nodeipam | 135 | 210 | @yangjunmyfm192085, https://github.com/kubernetes/kubernetes/pull/112670
pkg/controller/nodelifecycle | 60 | 106 | @yangjunmyfm192085, https://github.com/kubernetes/kubernetes/pull/112670
pkg/controller/podautoscaler | 9 | 13 | @freddie400, https://github.com/kubernetes/kubernetes/pull/114687
pkg/controller/podgc | 10 | 24 | @pravarag, https://github.com/kubernetes/kubernetes/pull/114689
pkg/controller/replicaset | 20 | 49 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/114871
pkg/controller/resourcequota | 24 | 37 | @ncdc, https://github.com/kubernetes/kubernetes/pull/113315
pkg/controller/serviceaccount | 22 | 31 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/114918
pkg/controller/statefulset | 19 | 59 | @249043822, https://github.com/kubernetes/kubernetes/pull/113840
pkg/controller/storageversiongc | 4 | 6 | @songxiao-wang87, https://github.com/kubernetes/kubernetes/pull/113986
pkg/controller/testutil | 9 | 9 | @Octopusjust, https://github.com/kubernetes/kubernetes/pull/114061
pkg/controller/ttl | 4 | 8 | wxs (Slack) = @songxiao-wang87, https://github.com/kubernetes/kubernetes/pull/113916
pkg/controller/ttlafterfinished | 9 | 15 | @obaranov1, https://github.com/kubernetes/kubernetes/pull/115332
pkg/controller/util | 0 | 19 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/115049
pkg/controller/volume | 351 | 673 | @yangjunmyfm192085, https://github.com/kubernetes/kubernetes/pull/113584
pkg/kubelet | 1 | 1805 | @fmuyassarov
pkg/scheduler | 0 | 348 | @knelasevero, https://github.com/kubernetes/kubernetes/pull/111155
staging/src/k8s.io/apiextensions-apiserver | 57 | 81
staging/src/k8s.io/apimachinery | 73 | 114 | @yanjing1104
staging/src/k8s.io/apiserver | 262 | 543
staging/src/k8s.io/client-go | 161 | 267
staging/src/k8s.io/cloud-provider | 108 | 146
staging/src/k8s.io/cluster-bootstrap | 2 | 4
staging/src/k8s.io/code-generator | 108 | 168
staging/src/k8s.io/component-base | 32 | 63
staging/src/k8s.io/component-helpers | 7 | 8
staging/src/k8s.io/controller-manager | 10 | 10
staging/src/k8s.io/csi-translation-lib | 3 | 4
staging/src/k8s.io/kube-aggregator | 52 | 76
staging/src/k8s.io/kube-controller-manager | 0 | 0
staging/src/k8s.io/kubectl | 89 | 147 | @yanjing1104
staging/src/k8s.io/legacy-cloud-providers | 1445 | 2238
staging/src/k8s.io/mount-utils | 54 | 92
staging/src/k8s.io/pod-security-admission | 1 | 34 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/114471
staging/src/k8s.io/sample-controller | 16 | 22 | @pchan, https://github.com/kubernetes/kubernetes/pull/113879

From 2023-03-17 = Kubernetes v1.27.0-beta.0

All of kube-controller-manager got converted.

Tables created with:

```
go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && \
echo "------ | ------- | ------" && \
for i in $(find pkg/controller/* pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet -maxdepth 0 -type d | sort); do \
  echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; \
done
```

Component | Non-Structured Logging | Non-Contextual Logging | Owner
------ | ------- | ------ | ------
cmd/kube-apiserver | 7 | 8 | on hold
cmd/kubelet | 0 | 47 | @fmuyassarov
cmd/kube-proxy | 0 | 46 | on hold
pkg/controller/certificates | 22 | 31 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/113994
pkg/controller/deployment | 2 | 5 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/controller/disruption | 29 | 54 | ~~@obaranov1, https://github.com/kubernetes/kubernetes/pull/116021~~, @mengjiao-liu, https://github.com/kubernetes/kubernetes/pull/119147
pkg/controller/endpoint | 12 | 24 | @my-git9, https://github.com/kubernetes/kubernetes/pull/116755
pkg/controller/endpointslice | 20 | 35 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/115295
pkg/controller/endpointslicemirroring | 18 | 28 | @Namanl2001, https://github.com/kubernetes/kubernetes/pull/114982
pkg/controller/garbagecollector | 3 | 3 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/controller/job | 12 | 35 | @sanwishe, https://github.com/kubernetes/kubernetes/pull/113576 (needs new owner?)
pkg/controller/nodeipam | 8 | 13 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/controller/podgc | 10 | 24 | ~~@pravarag, https://github.com/kubernetes/kubernetes/pull/114689~~, @pohly, https://github.com/kubernetes/kubernetes/pull/119250
pkg/controller/replicaset | 9 | 18 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/controller/statefulset | 3 | 5 | @kerthcet, https://github.com/kubernetes/kubernetes/pull/118071
pkg/controller/testutil | 9 | 9 | @Octopusjust, https://github.com/kubernetes/kubernetes/pull/114061
pkg/controller/util | 0 | 4 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/controller/volume | 5 | 20 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/116930
pkg/kubelet | 2 | 1923 | @fmuyassarov, https://github.com/kubernetes/kubernetes/pull/114352
pkg/scheduler | 2 | 349 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/issues/91633

From 2023-09-18 =~ Kubernetes v1.28

Component | Non-Structured Logging | Non-Contextual Logging | Owner
------ | ------- | ------- | ------
cmd/kube-apiserver | 6 | 7 | on hold
cmd/kubelet | 0 | 52 | @fmuyassarov (?), https://github.com/kubernetes/kubernetes/pull/114352
cmd/kube-proxy | 0 | 41 | on hold
pkg/kubelet | 2 | 1942 | @fmuyassarov (?)
pkg/scheduler | 1 | 137 | @mengjiao-liu, https://github.com/kubernetes/kubernetes/pulls/mengjiao-liu
staging/src/k8s.io/apiserver | ? | ? | @tallclair, https://github.com/kubernetes/kubernetes/pull/114198
staging/src/k8s.io/client-go/discovery | 11 | 21 | on hold
staging/src/k8s.io/client-go/examples | 14 | 14 | on hold
staging/src/k8s.io/client-go/metadata | 2 | 4 | on hold
staging/src/k8s.io/client-go/plugin | 5 | 8 | on hold
staging/src/k8s.io/client-go/rest | 16 | 37 | on hold
staging/src/k8s.io/client-go/restmapper | 3 | 6 | on hold
staging/src/k8s.io/client-go/tools | 104 | 171 | @pohly, https://github.com/kubernetes/kubernetes/pull/120729
staging/src/k8s.io/client-go/transport | 17 | 31 | on hold
staging/src/k8s.io/client-go/util | 12 | 19 | on hold

Table created manually and with:

```
go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && \
echo "------ | ------- | ------- | ------" && \
for i in $(find pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet staging/src/k8s.io/client-go/* -maxdepth 0 -type d | sort); do \
  echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; \
done
```

From 2023-11-20 =~ Kubernetes v1.29

Component | Non-Structured Logging | Non-Contextual Logging | Owner
------ | ------- | ------- | ------
cmd/kube-apiserver | 6 | 7 | @tallclair
cmd/kubelet | 0 | 52 | @fmuyassarov (?), https://github.com/kubernetes/kubernetes/pull/114352
pkg/kubelet | 2 | 1983 | @fmuyassarov
cmd/kube-proxy | 0 | 42 | @fatsheep9146, https://github.com/kubernetes/kubernetes/pull/122197
pkg/proxy | 0 | 360 | @fatsheep9146, see above
staging/src/k8s.io/apiserver | 285 | 655 | @tallclair, https://github.com/kubernetes/kubernetes/pull/114198
staging/src/k8s.io/client-go/discovery | 11 | 21
staging/src/k8s.io/client-go/examples | 14 | 14
staging/src/k8s.io/client-go/metadata | 2 | 4
staging/src/k8s.io/client-go/plugin | 5 | 8
staging/src/k8s.io/client-go/rest | 16 | 37
staging/src/k8s.io/client-go/restmapper | 3 | 6
staging/src/k8s.io/client-go/tools | 83 | 143 | @pohly
staging/src/k8s.io/client-go/transport | 17 | 31
staging/src/k8s.io/client-go/util | 12 | 19

Table created with:

```
go install sigs.k8s.io/logtools/logcheck@latest

echo "Component | Non-Structured Logging | Non-Contextual Logging | Owner " && \
echo "------ | ------- | ------- | ------" && \
for i in $(find pkg/scheduler pkg/kubelet pkg/apis pkg/api cmd/kube-* cmd/kubelet staging/src/k8s.io/client-go/* staging/src/k8s.io/apiserver -maxdepth 0 -type d | sort); do \
  echo "$i | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false 2>&1 ./... | wc -l ) | $(cd $i; ${GOPATH}/bin/logcheck -check-structured -check-deprecations=false -check-contextual ./... 2>&1 | wc -l ) |" | grep -v '| 0 | 0 |'; \
done
```

pohly commented 2 years ago

/sig instrumentation
/wg structured-logging

hosseinsalahi commented 2 years ago

Hello @pohly

v1.24 Enhancements team here.

Just checking in as we approach the enhancements freeze at 18:00 PT on Thursday, Feb 3rd, 2022. This enhancement is targeting alpha for v1.24.

Here’s where this enhancement currently stands:

The status of this enhancement is marked as tracked. Please keep the issue description and the targeted stage up-to-date for release v1.24. Thanks!

pohly commented 2 years ago

@encodeflush: the KEP PR was merged; all criteria for alpha in 1.24 should be met now.

chrisnegus commented 2 years ago

Hi @pohly 👋 1.24 Docs shadow here.

This enhancement is marked as 'Needs Docs' for the 1.24 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thu March 31, 11:59 PM PDT.

Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.

Thanks!

valaparthvi commented 2 years ago

Hi @pohly :wave: 1.24 Release Comms team here.

We have an opt-in process for the feature blog delivery. If you would like to publish a feature blog for this issue in this cycle, then please opt in on this tracking sheet.

The deadline for submissions and the feature blog freeze is scheduled for 01:00 UTC Wednesday 23rd March 2022 / 18:00 PDT Tuesday 22nd March 2022. Other important dates for delivery and review are listed here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24#timeline.

For reference, here is the blog for 1.23.

Please feel free to reach out any time to me or on the #release-comms channel with questions or comments.

Thanks!

hosseinsalahi commented 2 years ago

Hello @pohly

I'm just checking in once more as we approach the 1.24 Code Freeze at 18:00 PDT on Tuesday, March 29th, 2022.

Please ensure the following items are completed:

Note that the status of this enhancement is currently marked as tracked.

Thank you!

pohly commented 2 years ago

/assign

pohly commented 2 years ago

I have added two doc PRs to the description.

Priyankasaggu11929 commented 2 years ago

/milestone clear

logicalhan commented 2 years ago

@pohly can we close this?

logicalhan commented 2 years ago

/assign @serathius

pohly commented 2 years ago

We still need to move this feature through beta to GA, so it has to stay open.

No work is planned for 1.25. I suggest we try to go for beta in 1.26.

k8s-triage-robot commented 2 years ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

You can:

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

pohly commented 2 years ago

/remove-lifecycle stale

logicalhan commented 2 years ago

Are you guys still planning on beta for 1.26?

pohly commented 2 years ago

The instrumentation of kube-scheduler with contextual logging is still in progress. That would have been the first real confirmation of the concept.

So no, let's delay at least until 1.27.

shivanshuraj1333 commented 2 years ago

/assign

atiratree commented 1 year ago

KCM / CCM controller aliases should help with consistent component names (eventually once we do the wiring)

https://github.com/kubernetes/kubernetes/pull/115813

gurpreet-legend commented 1 year ago

I would really like to work on the component pkg/controller/deployment. Can someone assign this to me?

pohly commented 1 year ago

Can you also do pkg/controller/endpoint, pkg/controller/garbagecollector, pkg/controller/nodeipam, pkg/controller/replicaset, pkg/controller/statefulset, pkg/controller/util, pkg/controller/volume?

That can be a single cleanup PR because those components were already converted earlier.

gurpreet-legend commented 1 year ago

> Can you also do pkg/controller/endpoint, pkg/controller/garbagecollector, pkg/controller/nodeipam, pkg/controller/replicaset, pkg/controller/statefulset, pkg/controller/util, pkg/controller/volume?
>
> That can be a single cleanup PR because those components were already converted earlier.

@pohly, will be happy to work on those as well :)

pohly commented 1 year ago

@gurpreet-legend: I added you to the table.

It looks like pkg/controller/endpoint hasn't been touched at all yet. Then do that in one PR and the cleanup changes in another.

sadityakumar9211 commented 1 year ago

I am really interested in working on cmd/kube-proxy component. Can someone please assign this to me?

GLVSKiriti commented 1 year ago

I would like to work on this component: cmd/kube-apiserver. Can someone assign this to me?

esthejas commented 1 year ago

I would really like to work on this component: cmd/kube-apiserver. Can someone please assign this to me?

gurpreet-legend commented 1 year ago

> @gurpreet-legend: I added you to the table.
>
> It looks like pkg/controller/endpoint hasn't been touched at all yet. Then do that in one PR and the cleanup changes in another.

Sure, will work on that first and then will work on the other components :)

ricardoapl commented 1 year ago

Hello @pohly

Can I convert staging/src/k8s.io/client-go/metadata?

Perhaps I can do a couple more afterwards

Thank you

pohly commented 12 months ago

@ricardoapl: that seems like an easy package to get started with. Please go ahead.

freddie400 commented 12 months ago

Hi @pohly, Can I work on migrating staging/src/k8s.io/client-go/transport & staging/src/k8s.io/client-go/util? Thanks.

Rei1010 commented 11 months ago

hi @pohly , can I work on staging/src/k8s.io/client-go/plugin ?

wlq1212 commented 11 months ago

hi @pohly , can I work on staging/src/k8s.io/client-go/restmapper ?

ricardoapl commented 11 months ago

I can migrate staging/src/k8s.io/client-go/discovery next if that's OK

WillardHu commented 11 months ago

Hi @pohly, can I work on migrating staging/src/k8s.io/client-go/rest, thanks

yanjing1104 commented 11 months ago

Hi @pohly, sorry for the inconvenience, but I won't be able to follow up on these two PRs and have closed them. Could you help re-assign/release these two packages: staging/src/k8s.io/apimachinery (https://github.com/kubernetes/kubernetes/pull/115317) and staging/src/k8s.io/kubectl (https://github.com/kubernetes/kubernetes/pull/115087)?

pohly commented 11 months ago

Please hold off on submitting PRs. I need more time to actually look at some of the packages before I can provide guidance on how to proceed.

WillardHu commented 11 months ago

Some design questions about migrating the client-go/rest component to contextual logging:

  1. Usually client-go returns rest.Request and rest.Result for the caller to use, and the creation of their instances is controlled by client-go. Can we define the logger as a struct field, like:

    type Request struct {
        ...
        logger *klog.Logger
        ...
    }
    
    func (r *Request) Fun() {
        r.log().Info(...)
    }
    
    // If the caller does not define it, a default one is returned
    func (r *Request) log() klog.Logger {
        if r.logger == nil {
            return klog.Background().WithName("rest_request")
        }
        return *r.logger
    }
  2. The creation of rest.Config is controlled by the caller, so we could add a context parameter to some of the functions used to build rest.Config, like:

    • Directly modify:
      func InClusterConfig(ctx context.Context) (*Config, error) {
          logger := klog.FromContext(ctx).WithName("rest_config")
          ...
      }
    • Consider compatibility:

      func InClusterConfig() (*Config, error) {
          return InClusterConfigWithContext(context.TODO())
      }
      
      func InClusterConfigWithContext(ctx context.Context) (*Config, error) {
          logger := klog.FromContext(ctx).WithName("rest_config")
          ...
      }

      Should we consider compatibility?

  3. Can other structs used internally follow the approach from question 1, and can we add a context parameter to some internal functions?

pohly commented 11 months ago

Let me elaborate further... the problem with changing client-go or any other package under staging is that we cannot simply change an API: it breaks too much existing code. Instead, we have to extend the API in a backwards-compatible way. Adding a klog.TODO or some other TODO remark doesn't help because it doesn't solve the API problem.

But our work isn't done at that point. Adding a new API is pointless if it doesn't get used by the Kubernetes components that we maintain. Out-of-tree components may also want to know that they should switch to the new API. Adding a "Deprecated" remark is too strong; the existing APIs are fine.

What I came up with is //logcheck:context as a special remark that tells logcheck to complain about an API, but only in code which cares about contextual logging. This isn't a solution in all cases, but at least when adding a WithContext variant it works.

So whoever now starts converting some package first has to look at existing usage of an API and then figure out how to change the API and that code - this is not easy, so beware before signing up to do this!
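
As a rough sketch of that pattern (the names below are hypothetical, not an actual client-go API): keep the existing entry point untouched, add a WithContext variant, and place the //logcheck:context marker on the old one so that only code bases which opt into contextual logging are asked to migrate.

```go
package example

import (
	"context"

	"k8s.io/klog/v2"
)

// DoSomething is the pre-existing API; its signature must not change.
//
//logcheck:context // DoSomethingWithContext should be used instead of DoSomething in code which supports contextual logging.
func DoSomething(name string) error {
	return DoSomethingWithContext(context.Background(), name)
}

// DoSomethingWithContext is the new, backwards-compatible addition: it retrieves
// the logger from the context instead of relying on the global klog logger.
func DoSomethingWithContext(ctx context.Context, name string) error {
	klog.FromContext(ctx).Info("Doing something", "name", name)
	return nil
}
```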

WillardHu commented 11 months ago

Thanks for your guidance. I combed through the structs and function call relationships in the rest package.

  1. I added a logger field to the Request{} and Result{} structs and use methods to control their behavior:

    type Request struct {
        ...
        // logger can be set by the caller via the SetLoggerFromContext(...) method.
        logger *logr.Logger
    }
    
    // SetLoggerFromContext retrieves the logger set by the caller and creates a new logger
    // with a constant name for the Request's logger field.
    func (r *Request) SetLoggerFromContext(ctx context.Context) {
        inst := klog.FromContext(ctx).WithName(loggerNameRequest)
        r.logger = &inst
    }
    
    // log returns a non-nil logger to be used by the methods. If the logger field is not set,
    // it assigns a default logger.
    func (r *Request) log() logr.Logger {
        if r.logger == nil {
            def := klog.Background().WithName(loggerNameRequest)
            r.logger = &def
        }
        return *r.logger
    }

    Request's public methods can use r.log() to write structured logs, and private methods can use it to build a contextual logger when calling internal functions and helper methods.

  2. The functions in urlbackoff.go, warnings.go and with_retry.go are internal helpers for Request{}, so they can be converted to contextual logging.
  3. The functions in plugin.go are called from the init() of AuthProvider implementations, so I added the //logcheck:context comment tag to them.
  4. Some of Config's creation functions are used infrequently, perhaps only once per project, so I figured callers wouldn't care much about contextual logging and added the //logcheck:context comment tag there too.

I have revised my PR again; please check whether it matches what you expected. Thank you.

pohly commented 11 months ago

@WillardHu: This issue is not a good place to discuss API design aspects. Let's do that on Slack.

WillardHu commented 11 months ago

> @WillardHu: This issue is not a good place to discuss API design aspects. Let's do that on Slack.

OK, thanks

dashpole commented 10 months ago

/label lead-opted-in

tjons commented 10 months ago

Hello @pohly 👋, Enhancements team here.

Just checking in as we approach enhancements freeze on Friday, February 9th, 2024 at 02:00 UTC.

This enhancement is targeting stage beta for 1.30 (correct me if otherwise).

Here's where this enhancement currently stands:

For this KEP, we would just need to complete the following:

The status of this enhancement is marked as at risk for enhancement freeze. Please keep the issue description up-to-date with appropriate stages as well. Thank you!

pohly commented 10 months ago

KEP PR for 1.30 got merged.

tjons commented 9 months ago

Hey @pohly, with all the requirements fulfilled this enhancement is now marked as tracked for the upcoming enhancements freeze 🚀! Thanks for your hard work!

tjons commented 9 months ago

Hello 👋, 1.30 Enhancements team here.

Unfortunately, this enhancement did not meet requirements for enhancements freeze.

I made an error; this question under scalability is now required in the KEP: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template#can-enabling--using-this-feature-result-in-resource-exhaustion-of-some-node-resources-pids-sockets-inodes-etc

If you still wish to progress this enhancement in 1.30, please file an exception request. Thanks!

k8s-ci-robot commented 9 months ago

@tjons: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility.

In response to [this](https://github.com/kubernetes/enhancements/issues/3077#issuecomment-1935240305):

> /milestone clear

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

salehsedghpour commented 9 months ago

/milestone clear

drewhagen commented 9 months ago

Hello @pohly 👋, 1.30 Docs Lead here.

Does the enhancement work planned for 1.30 require any new docs or modifications to existing docs? If so, please follow the steps here to open a PR against the dev-1.30 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, February 22nd 2024, 18:00 PDT.

Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release. Thank you!

natalisucks commented 9 months ago

Hi @pohly, @shivanshu1333, and @serathius,

👋 from the v1.30 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement!

We encourage blogs for features including, but not limited to: breaking changes, features and changes important to our users, and features that have been in progress for a long time and are graduating.

To opt in, you need to open a Feature Blog placeholder PR against the website repository. The placeholder PR deadline is 27th February, 2024.

Here's the 1.30 Release Calendar

pohly commented 9 months ago

Doc PR for 1.30 created and linked to in the description