cloudflare / helm-charts

https://developers.cloudflare.com
Apache License 2.0
83 stars 66 forks

Public hostnames aren't propagated automatically #41

Open bhvk0 opened 1 year ago

bhvk0 commented 1 year ago

Public hostnames aren't propagated automatically. In the example below you can see our configuration:

cloudflare:
  ingress:
    - hostname: prom-server.domain.com
      service: https://prometheus-server.monitoring:80

But these settings aren't applied automatically, which means I still have to add the public hostnames in the Tunnel configuration manually. Are there any additional settings required, or is this functionality not working yet?

[Screenshot 2023-07-05 at 18:16:40]

matt-j-so commented 1 year ago

I'm also facing the same issue.

verenion commented 1 year ago

I've also hit this issue. There are now two helm charts here, and no documentation regarding what the actual difference between them is.

dsalaza4 commented 1 year ago

Are there any updates regarding this?

aug70 commented 1 year ago

I've also hit this issue. There are now two helm-charts here, and no documentation regarding what the actual difference between them is

@verenion The cloudflare-tunnel-remote chart assumes that you created your own config (public hostnames) on the Cloudflare dashboard and reads your config from there. The other chart, cloudflare-tunnel, attempts to create this config for you via the ingress values you provide. I say attempts because this chart doesn't work. See issue #59

tomasodehnal commented 10 months ago

I was trying to achieve the same thing: deploy cloudflared on Kubernetes and configure the host-to-service mapping on the K8s side. Unsurprisingly, I ran into the same issue.

To rule out any K8s specifics, I experimented with cloudflared on Linux directly. I found that setting the mapping in the config file is not enough. One first needs to manually create the public hostname under the tunnel, as described here: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/#5-start-routing-traffic. After creating the public record, cloudflared successfully routes traffic for the matching hostname in the config file. This applies to the K8s deployment as well.
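For reference, the host-to-service mapping in cloudflared's config file looks roughly like this (tunnel ID, hostname, and paths are placeholders); on its own this mapping does nothing until the public hostname exists under the tunnel:

```yaml
# ~/.cloudflared/config.yml (hypothetical example)
tunnel: <tunnel-id>
credentials-file: /etc/cloudflared/creds/credentials.json
ingress:
  - hostname: prom-server.domain.com
    service: https://prometheus-server.monitoring:80
  # cloudflared requires the last rule to be a catch-all.
  - service: http_status:404
```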

To sum up:

It still makes sense to use the 'locally-managed tunnel' approach, as everything (except the login in step 2) can be handled on the command line:

  1. Download cloudflared.
  2. Log in to your account. This step also creates a certificate that is needed for tunnel AND record management, so without it the K8s deployment couldn't manage the hostnames anyway.
  3. Create the tunnel. This creates the credentials JSON, which can be used directly to create the secret.
  4. Create the hostname under the tunnel.
  5. Now you can continue with the K8s side and use the JSON file from step 3 to create the secret.
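Steps 2-5 above can be sketched as CLI commands (tunnel name, hostname, and paths are placeholders; the commands are echoed as a dry run so nothing is executed against your account):

```shell
# Placeholders for your own tunnel name and public hostname.
TUNNEL="example-tunnel"
FQDN="prom-server.domain.com"

# Step 2: log in; this also writes the cert.pem used for record management.
echo cloudflared tunnel login
# Step 3: create the tunnel; this writes the credentials JSON
# (~/.cloudflared/<tunnel-id>.json).
echo cloudflared tunnel create "$TUNNEL"
# Step 4: create the public hostname (DNS record) under the tunnel.
echo cloudflared tunnel route dns "$TUNNEL" "$FQDN"
# Step 5: hand the credentials JSON to Kubernetes as a secret.
echo kubectl create secret generic tunnel-credentials \
  --from-file=credentials.json="$HOME/.cloudflared/<tunnel-id>.json"
```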

One final note: when the tunnel is created as locally-managed, it has a note in the web dashboard. It looks like you need to pick one of the approaches and stick to it (which makes sense).

Hope this helps someone.

Syntax3rror404 commented 6 months ago

@tomasodehnal You can also do it via the CLI:

cloudflared tunnel route dns TUNNELNAME FQDN

and you're done. You can also iterate over the ingress rules in an init container to automate this.
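A minimal sketch of that automation (tunnel name and hostname list are placeholders; the command is echoed as a dry run, while a real init container would run it directly):

```shell
TUNNEL="example-tunnel"
# In a real chart this list would come from the ingress values.
FQDNS="prom-server.domain.com grafana.domain.com"

for fqdn in $FQDNS; do
  # Drop the echo to actually create each DNS route.
  echo cloudflared tunnel route dns "$TUNNEL" "$fqdn"
done
```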