HannesBBR opened this issue 5 months ago (status: Open)
Could you try the following and let me know whether it works for you? (If it doesn't, could you paste some controller error logs?) Thank you! You could create one HTTPRoute and one GRPCRoute:
```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: testapp-service-rest
          kind: Service
          port: 80
```
```yaml
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: GRPCRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: grpc
      namespace: aws-application-networking-system
  rules:
    - backendRefs:
        - name: test-app-service-grpc
          kind: Service
          port: 50051
```
Hi, thanks for your response! I forgot to mention that I had already tried this option, but this is what seems to happen: when describing the `HTTPRoute` resource, the message says:

```
Message: error during service synthesis failed ServiceManager.Upsert test-app-test-app due to service test-app/test-app had a conflict: Found existing resource with conflicting service name: arn:aws:vpc-lattice:eu-central-1:123456789:service/svc-12345789
```

So it looks like the controller tries to create a new Lattice service for the `HTTPRoute` resource and stops because there already is a service with that name. Ideally, for my use case, it would instead acknowledge that a service with that name already exists and add a new listener to that service, based on the `backendRef` and `parentRef` that are configured in the `HTTPRoute`. It's hard, though, to assess what kind of impact such a change would have on the whole flow.
Hi @HannesBBR, thanks for your reply. The current controller version doesn't support your use case of translating k8s resources into two listeners in the same Lattice service, with the two listeners routing traffic to different target groups. However, VPC Lattice itself does support that setup.

As for your suggestion (`parent: grpc --> some kind of ref to a parent above`): I don't think it's in the Gateway API HTTPRoute spec, and the Gateway API HTTPRoute doesn't seem to have a proper way to represent it.

To immediately unblock your use case, you could try this workaround: use the k8s `ServiceExport` and `TargetGroupPolicy` resources only. Don't create any k8s `Gateway`, `HTTPRoute`, or `ServiceImport`, and don't use the controller to manage the VPC Lattice service. Instead, manage the VPC Lattice service outside of k8s, i.e., use the AWS console, CloudFormation, or Terraform to create the VPC Lattice service network and service.
For example, follow these steps. Make sure the AWS Gateway API Controller, `testapp-service-rest`, and `testapp-service-grpc` are running in your k8s cluster. Then create the k8s `ServiceExport` and `TargetGroupPolicy` resources:
```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: testapp-service-rest
  annotations:
    application-networking.k8s.aws/federation: "amazon-vpc-lattice"
    application-networking.k8s.aws/port: "80"
```

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  # This TargetGroupPolicy is optional if you just use protocol: HTTP
  # and protocolVersion: HTTP1.
  name: tg-policy-for-testapp-service-rest
spec:
  targetRef:
    group: application-networking.k8s.aws
    kind: ServiceExport
    name: testapp-service-rest
  protocol: HTTP
  protocolVersion: HTTP1
```

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: ServiceExport
metadata:
  name: testapp-service-grpc
  annotations:
    application-networking.k8s.aws/federation: "amazon-vpc-lattice"
    application-networking.k8s.aws/port: "50051"
```

```yaml
apiVersion: application-networking.k8s.aws/v1alpha1
kind: TargetGroupPolicy
metadata:
  name: tg-policy-for-testapp-service-grpc
spec:
  targetRef:
    group: application-networking.k8s.aws
    kind: ServiceExport
    name: testapp-service-grpc
  protocol: HTTP
  protocolVersion: GRPC
```
Then verify in the VPC Lattice console that the target groups `k8s-<namespace>-testapp-service-http-<randomstring>` and `k8s-<namespace>-testapp-service-grpc-<randomstring>` have been created.

And we are open to discussing how to represent this Lattice setup in the k8s resources in the long term.
(My personal suggestion is that we go with the approach in https://github.com/aws/aws-application-networking-k8s/issues/644#issuecomment-2138176573 but fix the controller's `Found existing resource with conflicting service name` issue.)
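To illustrate the out-of-band part of the workaround, here is a rough AWS CLI sketch of creating the Lattice service and its two listeners by hand; all names, IDs, and ARNs below are placeholders, and CloudFormation, Terraform, or CDK would work equally well:

```shell
# Create the Lattice service outside the controller (placeholder names/IDs).
aws vpc-lattice create-service \
  --name test-app \
  --custom-domain-name test-app.test.com

# Listener on 443, forwarding everything to the REST target group
# that the ServiceExport created.
aws vpc-lattice create-listener \
  --service-identifier svc-12345789 \
  --name rest \
  --protocol HTTPS \
  --port 443 \
  --default-action 'forward={targetGroups=[{targetGroupIdentifier=rest-tg-id}]}'

# Second listener on 50051, forwarding everything to the gRPC target group.
aws vpc-lattice create-listener \
  --service-identifier svc-12345789 \
  --name grpc \
  --protocol HTTPS \
  --port 50051 \
  --default-action 'forward={targetGroups=[{targetGroupIdentifier=grpc-tg-id}]}'

# Associate the service with the service network.
aws vpc-lattice create-service-network-service-association \
  --service-network-identifier sn-identifier \
  --service-identifier svc-12345789
```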
Thanks a lot for the example! I didn't know that having only the `ServiceExport` (and no `HTTPRoute`, for example) would still create the target groups, but that seems to be the case! So deploying your suggestion indeed creates the two target groups, which can then be used as targets for a service and listeners created in e.g. CDK 🙇
For reference, one thing to keep in mind is to not yet apply the `application-networking.k8s.aws/pod-readiness-gate-inject: enabled` label to the namespace, as there will be a point where you have already created the target groups (through the `ServiceExport` and `TargetGroupPolicy` resources) but they are not yet used by a service listener (since that is created asynchronously in the second step). That causes the readiness gate to be `False`, making the pods temporarily unavailable until the service is created and linked to the target groups. So it's best to first fully onboard the application, and only afterwards set the readiness-gate label on the namespace.
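Applying the label afterwards is a one-liner; the namespace name below is a placeholder:

```shell
# Enable readiness-gate injection only after the target groups are
# attached to a listener; "my-app-ns" is a placeholder namespace.
kubectl label namespace my-app-ns \
  application-networking.k8s.aws/pod-readiness-gate-inject=enabled
```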
> That causes the readiness gate to be 'False', making the pods temporarily unavailable

The controller does set the `application-networking.k8s.aws/pod-readiness-gate` condition to `false` for Lattice targets in `UNUSED` status.

> So best to first fully onboard the application, and only afterwards set the readiness gate label on the namespace.

Yes, you need to set up your resources this way. I think this user experience is fine, but changing it to:

```go
case vpclattice.TargetStatusUnused:
	newCond.Status = corev1.ConditionTrue
	newCond.Reason = ReadinessReasonUnused
```

also makes sense to me. https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-readiness-status
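For context, a minimal self-contained sketch of that mapping; the constant values, reason strings, and function here are stand-ins for the controller's actual definitions, not the real code. With `UNUSED` mapped to `True`, pods stay Ready while their target group is not yet attached to any listener:

```go
package main

import "fmt"

// Hypothetical stand-ins for the vpclattice SDK target-status constants;
// the names and values are assumptions for illustration.
const (
	TargetStatusHealthy = "HEALTHY"
	TargetStatusUnused  = "UNUSED"
)

type ConditionStatus string

const (
	ConditionTrue  ConditionStatus = "True"
	ConditionFalse ConditionStatus = "False"
)

// readinessFor maps a Lattice target status to a readiness-gate condition.
// Mapping UNUSED to True (the change discussed above) keeps pods Ready
// while their target group awaits a listener.
func readinessFor(status string) (ConditionStatus, string) {
	switch status {
	case TargetStatusHealthy:
		return ConditionTrue, "Healthy"
	case TargetStatusUnused:
		return ConditionTrue, "Unused"
	default:
		return ConditionFalse, "NotReady"
	}
}

func main() {
	s, r := readinessFor(TargetStatusUnused)
	fmt.Println(s, r) // UNUSED targets now report Ready
}
```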
Yeah I think it's fine like this as well, just wanted to mention it for others, as it might not be obvious. Thanks again for your help!
Hi,

One of our applications running in k8s accepts both gRPC and HTTP traffic (its pods have different containers with different ports). To handle traffic for this application, we have two k8s services:

- `service-rest` => forwards traffic to the port of the rest-container
- `service-grpc` => forwards traffic to the port of the grpc-container

Our current ingress into this k8s application is an ALB with two listeners that forward traffic to the respective k8s service, based on the port:

- `443` listener => forwards to target group `tg-rest`, which sends traffic to the `service-rest` in k8s (targets are managed by the AWS load-balancer controller)
- `50051` listener => forwards to target group `tg-grpc`, which sends traffic to the `service-grpc` in k8s (targets are managed by the AWS load-balancer controller)

The domain name used by the clients is the same for both REST and gRPC traffic, and they choose the respective ALB listener port depending on whether they want to talk REST or gRPC.
As we are onboarding services into VPC Lattice, we'd now like to achieve the same kind of setup with VPC Lattice for this application:

- `443` listener => forward all traffic on this port to target group `vpc-lattice-tg-rest`
- `50051` listener => forward all traffic on this port to target group `vpc-lattice-tg-grpc`
I have tried some things to achieve this setup using the Gateway API controller, but I didn't find a way to do it:

1. Two different `HTTPRoute` resources with the same name, but a different `sectionName` in the `parentRefs` property and a different `backendRef` service: => this creates a single VPC Lattice service with one listener and two target groups. However, it causes the controller to periodically flip-flop/overwrite the (single) listener of the service between the two ports related to the two `sectionNames`, instead of adding a second listener to the service that would then forward traffic to its respective target group.

2. A single `HTTPRoute` resource, with two `parentRefs` and two `backendRefs`: => the result is a single VPC Lattice service with two listeners and two target groups. However, the rules in each of the two listeners basically split traffic evenly between the two created target groups, while we'd like to have all traffic for a listener be forwarded to the 'correct' target group only.
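The second attempt could be sketched like this (a hypothetical manifest following the names used earlier); because both `backendRefs` sit in a single rule, Gateway API semantics weight-split traffic between them on every listener:

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: test-application
spec:
  hostnames:
    - test-app.test.com
  parentRefs:
    - name: test-service-network
      sectionName: rest
    - name: test-service-network
      sectionName: grpc
  rules:
    - backendRefs:
        # Both backends in one rule: each listener splits traffic
        # roughly 50/50 between the two target groups.
        - name: service-rest
          kind: Service
          port: 80
        - name: service-grpc
          kind: Service
          port: 50051
```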
Ideally, I would hope something along these lines would be possible, where you explicitly define which `backendRef` should be used by which listener/parent. It's hard though to assess what kind of impact such a change would have.

But perhaps there already is another way of achieving the situation described above with the existing controller? Many thanks!