projectcontour / contour-operator

Experimental repository to explore an operator for deploying Contour

Contour operator using Kind doesn't expose HTTPProxy to the Host (Docker Desktop Mac) #406

Open gaelleacas opened 3 years ago

gaelleacas commented 3 years ago

Hi 😃

First of all I would like to say congrats to the Contour team for all the amazing work done on the Contour project 👏 🚀

So, I'm trying Contour using Kind on macOS and I'm a little bit lost ([this issue helped me](https://github.com/projectcontour/contour-operator/issues/191)).

What steps did you take and what happened:

I created a new Kind cluster with this config (exposing ports 80/443):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.20.7
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  image: kindest/node:v1.20.7

I installed Contour following the documentation, applying these steps:

# Installing the operator
kubectl apply -f https://raw.githubusercontent.com/projectcontour/contour-operator/release-1.17/examples/operator/operator.yaml

# Contour CRD (placing the Envoy pods on the node that exposes ports to the host)
cat <<EOF | kubectl apply -f -
apiVersion: operator.projectcontour.io/v1alpha1
kind: Contour
metadata:
  name: contour-sample
spec:
  nodePlacement:
    envoy:
      nodeSelector:
        ingress-ready: 'true'
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Equal
          effect: NoSchedule
EOF
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web-app
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
  labels:
    app: web-app
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: ClusterIP
  selector:
    app: web-app
---
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata: 
  labels:
    app: web-app
  name: web-app
spec: 
  virtualhost:
    fqdn: foo.bar.com
  routes: 
    - conditions:
      - prefix: /
      services:
        - name: web-app-svc
          port: 80
EOF

After pointing foo.bar.com at localhost in my /etc/hosts, my app is not reachable when I run curl.
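The /etc/hosts entry is just the hostname pointed at the loopback address:

127.0.0.1 foo.bar.com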

▶ curl http://foo.bar.com
curl: (52) Empty reply from server

What did you expect to happen:

curl http://foo.bar.com

Should return a 200 status code with the "Welcome to nginx!" HTML content.

Anything else you would like to add:

It works well when using the quickstart Contour installation instead (at step 2), following the same steps:

kubectl apply -f https://projectcontour.io/quickstart/v1.17.0/contour.yaml

# Patch the Envoy daemonset to use the ingress-ready node:
kubectl patch daemonsets -n projectcontour envoy -p '{"spec":{"template":{"spec":{"nodeSelector":{"ingress-ready":"true"},"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Equal","effect":"NoSchedule"}]}}}}'

I found a difference between the two installations:

The quickstart Contour install exposes the Envoy pods with hostPort:

# Envoy daemonset (excerpt):
 [...]
name: envoy
ports:
- containerPort: 8080
  hostPort: 80 <---
  name: http
  protocol: TCP
- containerPort: 8443
  hostPort: 443 <---
  name: https
  protocol: TCP
  [...]

The Contour Operator doesn't implement this, and we cannot add it through the Contour CRD. Is this the desired behaviour?

I ask because I see in the Kubernetes docs:

Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.

If you only need access to the port for debugging purposes, you can use the apiserver proxy or kubectl port-forward.
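For quick local testing, that port-forward route does work against the Envoy service the operator creates (a sketch; the projectcontour namespace and envoy service name are assumptions based on the default install):

kubectl -n projectcontour port-forward svc/envoy 8888:80
# in another terminal:
curl -H "Host: foo.bar.com" http://127.0.0.1:8888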

I tried testing with MetalLB as well, but it's complex on macOS 😫. It would be nice if we could set this in the Contour CRD for cases where this configuration is needed :)
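As an interim workaround, the operator-managed Envoy daemonset can be patched directly, like the quickstart patch above. This is only a sketch: it assumes the operator created the daemonset as envoy in the projectcontour namespace with containerPorts 8080/8443, and the operator may reconcile the change away.

# Strategic merge patch: add hostPorts to the envoy container's existing ports
kubectl patch daemonset -n projectcontour envoy --patch '
spec:
  template:
    spec:
      containers:
      - name: envoy
        ports:
        - containerPort: 8080
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 8443
          hostPort: 443
          name: https
          protocol: TCP
'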

Environment:

skriss commented 2 years ago

I believe we don't expose the host port option through the Operator because if you had >1 Contour, the host ports would conflict between them, given that Envoy is deployed as a daemon set.

It might be nice to support host ports as an option, particularly for folks who are only creating 1 Contour instance, but we'd have to figure out the UX if there were >1 Contours.

The other option here is to use specific NodePort values for the Envoy service (https://github.com/projectcontour/contour-operator/blob/main/api/v1alpha1/contour_types.go#L282-L302), and change your KinD config to map those predetermined NodePort values to the host (your Mac) on ports 80/443. Same idea, just using high port values instead of 80/443 inside the cluster.
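For example, a minimal sketch of that approach (the networkPublishing field names follow the linked contour_types.go and should be checked against your operator version; 30080/30443 are arbitrary NodePort choices within the default 30000-32767 range):

# Contour with fixed NodePorts for the Envoy service
cat <<EOF | kubectl apply -f -
apiVersion: operator.projectcontour.io/v1alpha1
kind: Contour
metadata:
  name: contour-sample
spec:
  networkPublishing:
    envoy:
      type: NodePortService
      nodePorts:
      - name: http
        portNumber: 30080
      - name: https
        portNumber: 30443
EOF

# Matching KinD config: map those NodePorts to 80/443 on the host
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  image: kindest/node:v1.20.7
  extraPortMappings:
  - containerPort: 30080
    hostPort: 80
    protocol: TCP
  - containerPort: 30443
    hostPort: 443
    protocol: TCP

Since kube-proxy routes NodePort traffic arriving at any node to the Envoy pods, the nodePlacement pinning to the ingress-ready node isn't strictly required with this approach.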