Closed: jayunit100 closed this issue 1 year ago
i.e. "services should not source NAT unnecessarily" - mc
We should be able to test this with an extra "remote" container
@astoycos yes, I was thinking of a DaemonSet returning its nodeName, i.e. over HTTP
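As a minimal sketch of what each DaemonSet pod could run (illustrative only, not kpng code), assuming `NODE_NAME` is injected via the downward API (`fieldRef: spec.nodeName`) in the pod template:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	// NODE_NAME is assumed to be injected by the downward API
	// (env.valueFrom.fieldRef: spec.nodeName) in the DaemonSet pod template.
	nodeName := os.Getenv("NODE_NAME")

	http.HandleFunc("/hostname", func(w http.ResponseWriter, r *http.Request) {
		// Reply with the name of the node this pod is scheduled on.
		fmt.Fprintln(w, nodeName)
	})

	if err := http.ListenAndServe(":8080", nil); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```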
@mcluseau I can take a look at this :) we do something similar in ovn-kubernetes
/assign
Opening this up for folks
/unassign
I can still help here though, so please reach out if you need help!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
Hi @astoycos, can you help/guide me with this issue? Where should I start?
Hey @emabota, that'd be great, thanks for the help!
I think what we can do is
For the backend pod, using https://pkg.go.dev/k8s.io/kubernetes/test/images/agnhost/netexec is often super helpful here since it has a /hostname endpoint.
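For illustration, a rough client-go sketch of such a backend DaemonSet running agnhost netexec; the image tag, names, labels, and namespace here are assumptions, not anything kpng defines:

```go
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	labels := map[string]string{"app": "kpng-local-policy-test"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "kpng-local-policy-test", Namespace: "default"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "netexec",
						Image: "registry.k8s.io/e2e-test-images/agnhost:2.43", // tag is an assumption
						// agnhost's netexec subcommand serves /hostname on the given port.
						Args:  []string{"netexec", "--http-port=8080"},
						Ports: []corev1.ContainerPort{{ContainerPort: 8080}},
					}},
				},
			},
		},
	}

	if _, err := client.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```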
/assign
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Looking at https://github.com/kubernetes-sigs/kpng/pull/287/
possibly add new tests upstream (or see if they're there already)
From talking to @mcluseau: he's trying to add support in the kpng proxy to make sure that it only uses local endpoints when the Service's traffic policy is Local (i.e. externalTrafficPolicy=Local).
To test this, we can make a Service w/ an externalIP or NodePort and the Local traffic policy, where EACH ENDPOINT serves its nodeName ... and check that you NEVER get more than one value. That means there's zero node-bounce forwarding occurring.
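A hedged client-go sketch of that Service: NodePort plus externalTrafficPolicy=Local, selecting the backend DaemonSet pods. The names, label selector, ports, and fixed nodePort are illustrative assumptions:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "kpng-local-policy-test", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": "kpng-local-policy-test"},
			Type:     corev1.ServiceTypeNodePort,
			// Only deliver external (NodePort/externalIP) traffic to endpoints
			// that are local to the node receiving it.
			ExternalTrafficPolicy: corev1.ServiceExternalTrafficPolicyTypeLocal,
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(8080),
				NodePort:   30080, // fixed for the test; an assumption
			}},
		},
	}

	if _, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```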
Similar to https://github.com/kubernetes/kubernetes/pull/110967, we might need a new upstream e2e test; not sure yet.
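Whatever form the e2e takes, the assertion itself would look roughly like this sketch: hammer one node's NodePort and verify every reply names the same backend, i.e. traffic was never bounced to an endpoint on another node. The node IP, port, and request count are assumptions:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// One node's IP plus the Service's NodePort (both assumed for illustration).
	const target = "http://192.0.2.10:30080/hostname"

	seen := map[string]bool{}
	for i := 0; i < 100; i++ {
		resp, err := http.Get(target)
		if err != nil {
			// With externalTrafficPolicy=Local and no local endpoint, failures are
			// expected; here we assume the node runs one of the DaemonSet pods.
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		seen[strings.TrimSpace(string(body))] = true
	}

	if len(seen) != 1 {
		fmt.Printf("FAIL: replies came from %d distinct backends: %v\n", len(seen), seen)
		return
	}
	fmt.Printf("OK: all replies came from %v (no node-bounce forwarding)\n", seen)
}
```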
Let's also ask the question generally to the upstream k8s SIG.
(DON'T DO THIS, but just as a note, there's another way to test this: set up a cluster where nodes can't forward traffic somehow, and watch for connection failures when using the service.)