Open pohly opened 2 years ago
/sig instrumentation /wg structured-logging
Hello @pohly
v1.24 Enhancements team here.
Just checking in as we approach enhancements freeze at 18:00 PT on Thursday, February 3rd, 2022. This enhancement is targeting alpha for v1.24.
Here's where this enhancement currently stands:
The status of this enhancement is marked as `tracked`. Please keep the issue description and the targeted stage up-to-date for release v1.24.
Thanks!
@encodeflush : the KEP PR was merged, all criteria for alpha in 1.24 should be met now.
Hi @pohly 👋 1.24 Docs shadow here.
This enhancement is marked as 'Needs Docs' for the 1.24 release.
Please follow the steps detailed in the documentation to open a PR against the dev-1.24 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thu March 31, 11:59 PM PDT.
Also, if needed, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.
Thanks!
Hi @pohly :wave: 1.24 Release Comms team here.
We have an opt-in process for the feature blog delivery. If you would like to publish a feature blog for this issue in this cycle, then please opt in on this tracking sheet.
The deadline for submissions and the feature blog freeze is scheduled for 01:00 UTC Wednesday 23rd March 2022 / 18:00 PDT Tuesday 22nd March 2022. Other important dates for delivery and review are listed here: https://github.com/kubernetes/sig-release/tree/master/releases/release-1.24#timeline.
For reference, here is the blog for 1.23.
Please feel free to reach out any time to me or on the #release-comms channel with questions or comments.
Thanks!
Hello @pohly
I'm just checking in once more as we approach the 1.24 Code Freeze at 18:00 PDT on Tuesday, March 29th, 2022.
Please ensure the following items are completed:
Note: the status of this enhancement is currently marked as `tracked`.
Thank you!
/assign
I have added two doc PRs to the description.
/milestone clear
@pohly can we close this?
/assign @serathius
We still need to move this feature through beta to GA, so it has to stay open.
No work is planned for 1.25. I suggest we try to go for beta in 1.26.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Mark this issue as rotten with `/lifecycle rotten`
- Close this issue with `/close`

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Are you guys still planning on beta for 1.26?
The instrumentation of kube-scheduler with contextual logging is still in progress. That would have been the first real confirmation of the concept.
So no, let's delay at least until 1.27.
/assign
KCM / CCM controller aliases should help with consistent component names (eventually once we do the wiring)
I would really like to work on the component `pkg/controller/deployment`, can someone assign this to me?
Can you also do pkg/controller/endpoint, pkg/controller/garbagecollector, pkg/controller/nodeipam, pkg/controller/replicaset, pkg/controller/statefulset, pkg/controller/util, pkg/controller/volume?
That can be a single cleanup PR because those components were already converted earlier.
> Can you also do pkg/controller/endpoint, pkg/controller/garbagecollector, pkg/controller/nodeipam, pkg/controller/replicaset, pkg/controller/statefulset, pkg/controller/util, pkg/controller/volume? That can be a single cleanup PR because those components were already converted earlier.
@pohly, will be happy to work on those as well :)
@gurpreet-legend: I added you to the table.
It looks like pkg/controller/endpoint hasn't been touched at all yet. Then do that in one PR and the cleanup changes in another.
I am really interested in working on the `cmd/kube-proxy` component. Can someone please assign this to me?
I would like to work on this component: `cmd/kube-apiserver`. Can someone assign this to me?
I would really like to work on this component: `cmd/kube-apiserver`. Can someone please assign this to me?
> @gurpreet-legend: I added you to the table. It looks like pkg/controller/endpoint hasn't been touched at all yet. Then do that in one PR and the cleanup changes in another.

Sure, will work on that first and then will work on the other components :)
Hello @pohly
Can I convert `staging/src/k8s.io/client-go/metadata`? Perhaps I can do a couple more afterwards. Thank you!
@ricardoapl: that seems like an easy package to get started with. Please go ahead.
Hi @pohly,
Can I work on migrating `staging/src/k8s.io/client-go/transport` and `staging/src/k8s.io/client-go/util`? Thanks.
Hi @pohly, can I work on `staging/src/k8s.io/client-go/plugin`?
Hi @pohly, can I work on `staging/src/k8s.io/client-go/restmapper`?
I can migrate `staging/src/k8s.io/client-go/discovery` next, if that's OK.
Hi @pohly, can I work on migrating `staging/src/k8s.io/client-go/rest`? Thanks.
Hi @pohly, sorry for the inconvenience, but I won't be able to follow up on these two PRs and have closed them. Could you re-assign/release these two packages? `staging/src/k8s.io/apimachinery` (https://github.com/kubernetes/kubernetes/pull/115317) and `staging/src/k8s.io/kubectl` (https://github.com/kubernetes/kubernetes/pull/115087)
Please hold off on submitting PRs. I need more time to actually look at some of the packages before I can provide guidance on how to proceed.
Some design discussion on migrating the `client-go/rest` component to contextual logging.

Usually client-go returns the `rest.Request` and the `rest.Result` for the caller to use. The creation of their instances is controlled by client-go. Can we define the logger as a struct field? Like:
```go
type Request struct {
	...
	logger *klog.Logger
	...
}

func (r *Request) Fun() {
	r.log().Info(...)
}

// If the caller does not define it, a default one is returned.
func (r *Request) log() klog.Logger {
	if r.logger == nil {
		return klog.Background().WithName("rest_request")
	}
	return *r.logger
}
```
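To illustrate how this struct-field fallback behaves, here is a self-contained sketch of the same pattern. It uses a trivial stand-in `Logger` instead of klog so it runs on its own, and the `WithLogger` setter is illustrative, not actual client-go API:

```go
package main

import "fmt"

// Logger is a minimal stand-in for klog.Logger; only what the sketch needs.
type Logger struct{ name string }

func (l Logger) Info(msg string) { fmt.Printf("%s: %s\n", l.name, msg) }

// WithName mimics klog's WithName: it extends the logger's name.
func (l Logger) WithName(n string) Logger {
	if l.name == "" {
		return Logger{name: n}
	}
	return Logger{name: l.name + "/" + n}
}

// Background mimics klog.Background(): the process-global fallback logger.
func Background() Logger { return Logger{} }

// Request carries an optional caller-provided logger, as proposed above.
type Request struct {
	logger *Logger // nil unless the caller injected one
}

// WithLogger is a hypothetical setter a caller could use to inject a logger.
func (r *Request) WithLogger(l Logger) *Request {
	r.logger = &l
	return r
}

// log returns the injected logger, or a named default if none was set.
func (r *Request) log() Logger {
	if r.logger == nil {
		return Background().WithName("rest_request")
	}
	return *r.logger
}

func main() {
	var r Request
	r.log().Info("no logger injected") // prints "rest_request: no logger injected"
	r.WithLogger(Background().WithName("my_caller"))
	r.log().Info("logger injected") // prints "my_caller: logger injected"
}
```

The design point is that callers who don't care about logging pay nothing, while callers who do care get full control over output, verbosity, and names.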
The creation of the `rest.Config` is controlled by the caller, so we add a context parameter to some of the functions used to build a `rest.Config`, like:
```go
func InClusterConfig(ctx context.Context) (*Config, error) {
	logger := klog.FromContext(ctx).WithName("rest_config")
	...
}
```
Or, to preserve compatibility:
```go
func InClusterConfig() (*Config, error) {
	return InClusterConfigWithContext(context.TODO())
}

func InClusterConfigWithContext(ctx context.Context) (*Config, error) {
	logger := klog.FromContext(ctx).WithName("rest_config")
	...
}
```
Should we consider compatibility?
Let me elaborate further... the problem with changing client-go or any other package under staging is that we cannot simply change an API. That breaks too much existing code. Instead, we have to extend the API in a backwards-compatible way. Adding a `klog.TODO` or some other TODO remark doesn't help because it doesn't solve the API problem.
But our work isn't done at that point. Adding a new API is pointless if it doesn't get used by the Kubernetes components that we maintain. Out-of-tree components may also want to know that they should switch to the new API. Adding a "Deprecated" remark is too strong, the existing APIs are fine.
What I came up with is `//logcheck:context` as a special remark that tells logcheck to complain about an API, but only in code which cares about contextual logging. This isn't a solution in all cases, but it works at least when adding a `WithContext` variant.
So whoever starts converting a package now first has to look at existing usage of an API and then figure out how to change both the API and that code. This is not easy, so beware before signing up to do this!
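To make the `//logcheck:context` plus `WithContext`-variant pattern concrete, here is a self-contained sketch. It is not the actual client-go code: `Config` is a stand-in for `rest.Config`, the function bodies and host value are made up, and the real `InClusterConfigWithContext` would pull its logger from `klog.FromContext(ctx)`.

```go
package main

import (
	"context"
	"fmt"
)

// Config stands in for rest.Config; the real type has many more fields.
type Config struct{ Host string }

// InClusterConfig is the existing, context-free entry point. It keeps
// working for old callers, but the marker below tells the logcheck
// linter to flag calls to it, and only in code bases that opted into
// contextual logging.
//
//logcheck:context // use InClusterConfigWithContext instead
func InClusterConfig() (*Config, error) {
	return InClusterConfigWithContext(context.Background())
}

// InClusterConfigWithContext is the new, backwards-compatible addition.
// A real implementation would retrieve the caller's logger via
// klog.FromContext(ctx).WithName("rest_config").
func InClusterConfigWithContext(ctx context.Context) (*Config, error) {
	_ = ctx // placeholder for klog.FromContext(ctx) usage
	return &Config{Host: "https://in-cluster.example"}, nil
}

func main() {
	cfg, err := InClusterConfig() // old callers compile unchanged
	fmt.Println(cfg.Host, err)
}
```

The key point is that the old function becomes a thin wrapper, so behavior stays identical while the linter nudges opted-in code toward the new variant.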
Thanks for your guidance. I combed through the struct and function call relationships in the `rest` package. I added a logger field to the `Request{}` and `Result{}` structs and use methods to control their behavior:
```go
type Request struct {
	...
	// logger can be set by the caller via SetLoggerFromContext.
	logger *logr.Logger
}

// SetLoggerFromContext retrieves the logger set by the caller and stores
// a new logger with a constant name in the Request's logger field.
func (r *Request) SetLoggerFromContext(ctx context.Context) {
	inst := klog.FromContext(ctx).WithName(loggerNameRequest)
	r.logger = &inst
}

// log returns a non-nil logger to be used by the methods. If the logger
// field is not set, it sets a default logger.
func (r *Request) log() logr.Logger {
	if r.logger == nil {
		def := klog.Background().WithName(loggerNameRequest)
		r.logger = &def
	}
	return *r.logger
}
```
Public `Request` methods can use `r.log()` to write structured logs, and some private methods can use it to build contextual logging when calling internal functions and helper methods.

- … `Request{}` in urlbackoff.go, warnings.go and with_retry.go, so they can be converted to contextual logging.
- … `//logcheck:context` to these functions.
- … `//logcheck:context` too.

I tried to revise my PR again, please check whether it is what you expected. Thank you!
@WillardHu: This issue is not a good place to discuss API design aspects. Let's do that on Slack.
OK, thanks
/label lead-opted-in
Hello @pohly 👋, Enhancements team here.
Just checking in as we approach enhancements freeze on Friday, February 9th, 2024 at 02:00 UTC.
This enhancement is targeting stage beta for 1.30 (correct me if otherwise).
Here's where this enhancement currently stands:
- KEP status is marked as `implementable` for `latest-milestone: 1.30`. KEPs targeting `stable` will need to be marked as `implemented` after code PRs are merged and the feature gates are removed.

For this KEP, we would just need to complete the following:

- Mark the KEP status as `implementable` for `latest-milestone: 1.30`.

The status of this enhancement is marked as `at risk for enhancement freeze`. Please keep the issue description up-to-date with appropriate stages as well. Thank you!
KEP PR for 1.30 got merged.
Hey @pohly - with all the requirements fulfilled, this enhancement is now marked as tracked for the upcoming enhancements freeze 🎉! Thanks for your hard work!
Hello 👋, 1.30 Enhancements team here.
Unfortunately, this enhancement did not meet requirements for enhancements freeze.
I made an error: this question under `scalability` is now required in the KEP: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template#can-enabling--using-this-feature-result-in-resource-exhaustion-of-some-node-resources-pids-sockets-inodes-etc
If you still wish to progress this enhancement in 1.30, please file an exception request. Thanks!
@tjons: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility.
/milestone clear
Hello @pohly 👋, 1.30 Docs Lead here.
Does this enhancement work planned for 1.30 require any new docs or modification to existing docs?
If so, please follow the steps here to open a PR against the `dev-1.30` branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thursday, February 22nd, 2024, 18:00 PDT.
Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release. Thank you!
Hi @pohly, @shivanshu1333, and @serathius,
👋 from the v1.30 Communications Team! We'd love for you to opt in to write a feature blog about your enhancement!
We encourage blogs for features including, but not limited to: breaking changes, features and changes important to our users, and features that have been in progress for a long time and are graduating.
To opt in, you need to open a Feature Blog placeholder PR against the website repository. The placeholder PR deadline is 27th February, 2024.
Doc PR for 1.30 created and linked to in the description
Enhancement Description
One-line enhancement description (can be used as a release note): Contextual logging enables the caller of a function to control all aspects of logging (output formatting, verbosity, additional values and names).
Kubernetes Enhancement Proposal: https://github.com/kubernetes/enhancements/tree/master/keps/sig-instrumentation/3077-contextual-logging
Discussion Link: WG Structured Logging
Primary contact (assignee): @pohly
Responsible SIGs: SIG Instrumentation, WG Structured Logging
Enhancement target (which target equals to which milestone):

- [x] Alpha
  - KEP (`k/enhancements`) update PR(s):
  - Code (`k/k`) update PR(s):
  - Docs (`k/website`) update PR(s):
- [ ] Beta
  - KEP (`k/enhancements`) update PR(s):
  - Code (`k/k`) update PR(s):
  - Docs (`k/website`) update(s): https://github.com/kubernetes/website/pull/45288

Current configuration
https://github.com/kubernetes/kubernetes/blob/master/hack/logcheck.conf
Status
The following table counts log calls that need to be converted. The numbers for contextual logging include those for structured logging.
At this point, controllers could get converted to contextual logging or one of the components that was already converted to structured logging. If you want to pick one, ping @pohly on the #wg-structured-logging Slack channel. See structured and contextual logging migration instructions for guidance.
Besides migrating log calls, we also might have to migrate from APIs which don't support contextual logging to APIs which do:
From 2022-10-27 ~= Kubernetes 1.26
The focus was on converting kube-controller-manager. Of 1944 unstructured and/or non-contextual logging calls in `pkg/controller` and `cmd/kube-controller-manager`, 82% were converted to structured, contextual logging in Kubernetes 1.27.

From 2023-03-17 = Kubernetes v1.27.0-beta.0
All of kube-controller-manager got converted.
From 2023-09-18 =~ Kubernetes v1.28
From 2023-11-20 =~ Kubernetes v1.29
Table created with: