Closed: tosi3k closed this PR 1 month ago.
Welcome @tosi3k!
It looks like this is your first PR to kubernetes-sigs/apiserver-network-proxy 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.
You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.
You can also check if kubernetes-sigs/apiserver-network-proxy has its own contribution guidelines.
You may want to refer to our testing guide if you run into trouble with your tests not passing.
If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!
Thank you, and welcome to Kubernetes. :smiley:
Hi @tosi3k. Thanks for your PR.
I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
/assign @cheftako
/ok-to-test
The only failure is with lint, and this PR is not likely the culprit:
```
level=info msg="[runner] linters took 15.706254297s with stages: goanalysis_metalinter: 15.075948509s"
cmd/agent/app/server.go:104:2: undefined: klog (typecheck)
	klog.V(1).Infoln("Shutting down agent.")
	^
cmd/agent/app/server.go:126:2: undefined: klog (typecheck)
	klog.V(2).InfoS("Received first signal", "signal", s)
	^
cmd/agent/app/server.go:129:2: undefined: klog (typecheck)
	klog.V(2).InfoS("Received second signal", "signal", s)
	^
pkg/server/backend_manager.go:64:1: missing return (typecheck)
}
^
```
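Looking at the typecheck errors, this is almost certainly a missing klog import in cmd/agent/app/server.go rather than anything introduced by this PR. A minimal sketch of the likely fix, assuming the repo uses k8s.io/klog/v2:

```go
package app

import (
	// Restoring this import should resolve the "undefined: klog" errors.
	"k8s.io/klog/v2"
)

func logShutdown() {
	// Mirrors the call flagged by the linter at server.go:104.
	klog.V(1).Infoln("Shutting down agent.")
}
```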
I like the change, but I'm concerned there may be edge cases where it causes problems. I don't know of any, but I wonder, for instance, whether we have adequately tested this with the system running in HTTP-CONNECT mode rather than GRPC. I think it would be good to at least have a flag that lets someone revert to the legacy behavior if they suspect a problem.
+1 to a flag, but the default should be protobuf (I don't see much risk here).
Thanks - added the appropriate flag to both the agent and the server, defaulting to protobuf.
PTAL :)
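For reference, a hypothetical sketch of what such a flag could look like; the actual flag and option names in the PR may differ:

```go
package options

import (
	"github.com/spf13/pflag"
)

// Options is a hypothetical option struct for illustration only.
type Options struct {
	// APIContentType selects the wire format used when talking to
	// the kube-apiserver.
	APIContentType string
}

// AddFlags registers the content-type flag, defaulting to protobuf
// while still allowing a fallback to the legacy JSON behavior.
func (o *Options) AddFlags(fs *pflag.FlagSet) {
	fs.StringVar(&o.APIContentType, "kube-api-content-type",
		"application/vnd.kubernetes.protobuf",
		"Content type to use when talking to the kube-apiserver. "+
			"Set to application/json to restore the legacy behavior.")
}
```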
@tosi3k: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
| Test name | Commit | Details | Required | Rerun command |
|---|---|---|---|---|
| pull-apiserver-network-proxy-make-lint-master | e68f77bad89ca47e79366f6c651fef28905aa2cb | link | true | /test pull-apiserver-network-proxy-make-lint-master |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
/lgtm
/approve
(The lint error should hopefully be resolved soon, and this PR will become mergeable.)
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: jkh52, tosi3k
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
@jkh52 given it's an unrelated linter failure, maybe we can force submit the PR? :)
Although I suppose it could wait until Monday, given that Friday is the worst day for releasing or submitting anything, especially in controversial ways like force submitting.
@cheftako @jkh52 friendly ping :)
For core K8s API objects like Pods, Nodes, etc., we can use protobuf encoding. Compared to the default JSON encoding, it reduces the CPU consumption related to (de)serialization, lowers the overall latency of API calls, shrinks the memory footprint and the work performed by the GC, and results in quicker propagation of objects to the event handlers of shared informers.
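As an illustration, a minimal sketch of how a client-go rest.Config can opt into protobuf; the fields are standard client-go, but this is not the exact diff in this PR:

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newProtobufClient builds a clientset that talks protobuf to the
// kube-apiserver. This only works for built-in types (Pods, Nodes,
// ...), not for CRDs, which are always served as JSON.
func newProtobufClient(config *rest.Config) (*kubernetes.Clientset, error) {
	// Send request bodies as protobuf.
	config.ContentType = "application/vnd.kubernetes.protobuf"
	// Prefer protobuf responses, falling back to JSON where needed.
	config.AcceptContentTypes = "application/vnd.kubernetes.protobuf,application/json"
	return kubernetes.NewForConfig(config)
}
```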
Core system components of K8s have defaulted their serialization method to protobuf for 8 years already: https://github.com/kubernetes/kubernetes/pull/25738.
Some benchmarks comparing JSON vs. protobuf, showcasing how the latter (de)serializes faster and uses less memory:
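As a rough illustration of how such a comparison can be written, here is a minimal Go benchmark sketch against a built-in type; the setup is an assumption for illustration, not one of the benchmarks referenced above:

```go
package bench

import (
	"encoding/json"
	"testing"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// samplePod builds a small Pod object to serialize in both formats.
func samplePod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bench", Namespace: "default"},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{Name: "c", Image: "nginx"}},
		},
	}
}

func BenchmarkJSONMarshal(b *testing.B) {
	pod := samplePod()
	for i := 0; i < b.N; i++ {
		if _, err := json.Marshal(pod); err != nil {
			b.Fatal(err)
		}
	}
}

func BenchmarkProtobufMarshal(b *testing.B) {
	pod := samplePod()
	for i := 0; i < b.N; i++ {
		// Built-in API types ship generated Marshal() methods for
		// their protobuf wire format.
		if _, err := pod.Marshal(); err != nil {
			b.Fatal(err)
		}
	}
}
```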