inlets / inlets-operator

Get public TCP LoadBalancers for local Kubernetes clusters
https://docs.inlets.dev/reference/inlets-operator

AWS security group lacks inbound rule for custom TCP port #162

Closed: fedenusy closed this issue 1 year ago

fedenusy commented 1 year ago

Expected Behaviour

Given I have the operator running with provider: "ec2", When I create a LoadBalancer service with port: 27017, Then the EC2 instance's security group should include an inbound rule allowing traffic via port 27017.

Current Behaviour

The EC2 instance's security group only allows inbound traffic via ports 80, 443, and 8123.

Possible Solutions

Three I can think of:

  1. Automatic inbound rule creation; see "Expected Behaviour" above.
  2. Configurable AWS inbound ports. For example, if I create an nginx-ingress controller with a TCP routing config, I wouldn't expect the operator to pick that up. In this case I'd want to annotate some k8s resource, e.g. the ingress controller's LoadBalancer service, and say which ports to open up for the EC2 instance.
  3. Insecure mode: allow inbound traffic through all ports (probably a bad idea).

Steps to Reproduce (for bugs)

  1. Install operator with ec2 provider
  2. Install mongodb bitnami chart
  3. Create a LoadBalancer service with { port: 27017, targetPort: 27017 } (a minimal manifest is sketched after this list)
  4. Observe: the EC2 instance's security group denies inbound traffic via port 27017
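
For reference, a minimal manifest for step 3 might look like the sketch below. The Service name and selector are illustrative; the selector would need to match the labels on the chart's MongoDB pods.

apiVersion: v1
kind: Service
metadata:
  name: mongodb-tunnel   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: mongodb   # must match the MongoDB pods
  ports:
  - name: mongodb
    port: 27017
    targetPort: 27017
    protocol: TCP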

Context

I have MongoDB running in a Docker Desktop cluster, and I want to make it accessible to hosted tools like Grafana.

Your Environment

alexellis commented 1 year ago

Hi, thanks for trying out inlets.

As far as I knew, the additional ports were already being added as part of the security group configuration. You can check the code at https://github.com/inlets/cloud-provision/blob/master/provision/ec2.go#L250 - it may need a tweak. If you want this working sooner, you can of course edit the security group manually too.
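
For anyone following along, opening a single inbound port with the AWS SDK for Go looks roughly like the sketch below. This is a simplified illustration rather than the cloud-provision code itself; the region, security group ID, and port are placeholder values.

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Placeholder region; the operator would use whatever region
	// the tunnel VM was provisioned in.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := ec2.New(sess)

	// Authorize inbound TCP traffic on port 27017 from anywhere.
	// The security group ID is a placeholder.
	_, err := svc.AuthorizeSecurityGroupIngress(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-0123456789abcdef0"),
		IpPermissions: []*ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(27017),
			ToPort:     aws.Int64(27017),
			IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
		}},
	})
	if err != nil {
		log.Fatalf("authorize ingress: %v", err)
	}
	log.Println("inbound rule added for port 27017")
}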

Can you share the following output please?

Run kubectl get svc -o wide in the namespace where the service exists

And also kubectl get svc/NAME -n NAMESPACE -o yaml

Quoting your earlier comment:

"For example, if I create an nginx-ingress controller with a TCP routing config, I wouldn't expect the operator to pick that up."

Any LoadBalancer will be picked up - that's the design.

However, you can change this behaviour with the annotated-only feature - then you just annotate the LoadBalancer services that you want inlets-operator to cater to.
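
From memory, that looks something like the line below; the annotation key is as I recall it from the docs, so please verify it there, and the service name is illustrative.

# With the operator running in annotated-only mode (see the docs for
# the exact flag/chart value), annotate the Services to be tunnelled:
kubectl annotate svc/mongodb-tunnel operator.inlets.dev/manage=1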

Alex

fedenusy commented 1 year ago

The output you asked for is below, with some pieces redacted. Anywhere you see XX.XX.XX.XX, that's the EC2 instance's IP address.

kubectl get svc -o wide

NAME                                TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)                                      AGE   SELECTOR
ingress-nginx-bd37f03e-controller   LoadBalancer   10.103.54.90   XX.XX.XX.XX,XX.XX.XX.XX   80:32264/TCP,443:30611/TCP,27017:32396/TCP   33m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx-bd37f03e,app.kubernetes.io/name=ingress-nginx

kubectl get svc/NAME -n NAMESPACE -o yaml

apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: ingress-nginx-bd37f03e
    meta.helm.sh/release-namespace: REDACTED
  creationTimestamp: "2023-05-02T21:57:50Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx-bd37f03e
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.7.0
    helm.sh/chart: ingress-nginx-4.6.0
  name: ingress-nginx-bd37f03e-controller
  namespace: REDACTED
  resourceVersion: "1578992"
  uid: ffe92ee5-3815-43bf-b170-821e07adc684
spec:
  allocateLoadBalancerNodePorts: true
  clusterIP: 10.103.54.90
  clusterIPs:
  - 10.103.54.90
  externalIPs:
  - XX.XX.XX.XX
  externalTrafficPolicy: Cluster
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    nodePort: 32264
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    nodePort: 30611
    port: 443
    protocol: TCP
    targetPort: https
  - name: 27017-tcp
    nodePort: 32396
    port: 27017
    protocol: TCP
    targetPort: 27017-tcp
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx-bd37f03e
    app.kubernetes.io/name: ingress-nginx
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: XX.XX.XX.XX

alexellis commented 1 year ago

Thanks for sharing this output.

So I had a look at the provisioning code again.

If we pass an extra flag to the library, it will open up the security group for ports 1024 to 65535. If nothing is listening on those ports, it's probably not as "insecure" as you suggest.

Alternatively, we could update the library to take in a number of ports from the LB.

The reason for the wider range is that when the inletsctl tool is used to create tunnel VMs outside of the operator, you don't know which ports the user will need, so all of them are made available - and then, as and when the client connects, the ports are opened on the server.

Both inletsctl and inlets-operator use the same library.

So we could either trigger the existing code to open up ports 1024 to 65535, or do some additional work to pass in a list of ports that is only used when the library is called by inlets-operator.
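
As a rough sketch of that second option: the provisioner could build one inbound rule per port taken from the LoadBalancer spec. The package and function below are hypothetical and do not match the real cloud-provision API; they only illustrate the shape of the change.

package provision

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// ingressRules is a hypothetical helper: given the exact TCP ports
// from the LoadBalancer spec (e.g. 80, 443, 27017), it builds one
// permission per port instead of opening the whole 1024-65535 range.
func ingressRules(ports []int) []*ec2.IpPermission {
	rules := make([]*ec2.IpPermission, 0, len(ports))
	for _, p := range ports {
		rules = append(rules, &ec2.IpPermission{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(int64(p)),
			ToPort:     aws.Int64(int64(p)),
			IpRanges:   []*ec2.IpRange{{CidrIp: aws.String("0.0.0.0/0")}},
		})
	}
	return rules
}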

alexellis commented 1 year ago

This has been fixed in 0.17.1 - thanks for your feedback and for using inlets.