Open bhvk0 opened 1 year ago
I'm also facing the same issue.
I've also hit this issue. There are now two helm-charts here, and no documentation regarding what the actual difference between them is.
Are there any updates regarding this?
@verenion The cloudflare-tunnel-remote chart assumes that you created your own config (public hostnames) on the Cloudflare site and reads your config from there. The other chart, cloudflare-tunnel, attempts to create this config for you via the ingress values you provide. I'm saying "attempts" because this chart doesn't work. See issue #59.
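For context, the two charts correspond to cloudflared's two operating modes. A minimal sketch of each invocation (the token and tunnel name are placeholders):

```sh
# Remotely-managed tunnel (cloudflare-tunnel-remote chart): the public
# hostnames are configured in the Cloudflare dashboard, and the connector
# only needs the tunnel token.
cloudflared tunnel run --token <TUNNEL_TOKEN>

# Locally-managed tunnel (cloudflare-tunnel chart): the hostname-to-service
# mapping comes from a local config file rendered by the chart.
cloudflared tunnel --config /etc/cloudflared/config.yml run <TUNNEL_NAME>
```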
I was trying to achieve the same: deploy cloudflared on Kubernetes and configure the host-to-service mapping on the K8s side. And obviously, I ran into the same issue.
To rule out any K8s specifics, I was experimenting with cloudflared on Linux directly. I found out that setting the mapping in the config file is not enough. One first needs to manually create the public hostname under the tunnel, as described here: https://developers.cloudflare.com/cloudflare-one/connections/connect-networks/get-started/create-local-tunnel/#5-start-routing-traffic. After creating the public record, cloudflared successfully routes traffic for the matching hostnames in the config file. This applies to the K8s deployment as well.
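To illustrate, a minimal locally-managed config file looks like this (tunnel ID, hostname, and service are placeholders); on its own it routes traffic once the tunnel is running, but it does not create the public hostname:

```yaml
# /etc/cloudflared/config.yml
tunnel: <TUNNEL_ID>
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  - service: http_status:404   # catch-all rule, required by cloudflared
```

The public hostname still has to be created separately, either in the dashboard or on the command line as summarized below.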
To sum up:

- cloudflared uses the config file to route traffic to the specified services. It DOES NOT handle the hostname creation under the tunnel based on the config file.
- The public hostname can be created on the command line with cloudflared tunnel route dns <tunnel name/id> <hostname>.
- It still makes sense to use the 'locally-managed tunnel' approach, as every step (except the login in step 2) can be handled on the command line with cloudflared; see the sketch after this list.
- One final note: when the tunnel is created as locally-managed, it has a note saying so in the web dashboard. Looks like you need to pick one of the approaches and stick to it (makes sense).
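A minimal end-to-end flow for a locally-managed tunnel (tunnel name and hostname are placeholders):

```sh
cloudflared tunnel login                                # step 2: browser-based Cloudflare login
cloudflared tunnel create my-tunnel                     # creates the tunnel and its credentials file
cloudflared tunnel route dns my-tunnel app.example.com  # creates the public hostname (the missing step)
cloudflared tunnel --config ~/.cloudflared/config.yml run my-tunnel
```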
Hope this helps someone.
@tomasodehnal you can also do it via the CLI:
cloudflared tunnel route dns TUNNELNAME FQDN
and you're done. You can also iterate over the ingress rules via an init container to automate this; a sketch follows.
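A hypothetical sketch of such an init container, assuming the cloudflared config and the origin certificate (cert.pem) are mounted into the pod and the image provides a shell; the tunnel name and paths are placeholders:

```yaml
initContainers:
  - name: create-dns-routes
    image: cloudflare/cloudflared:latest  # swap in a helper image if this one lacks a shell
    command:
      - sh
      - -c
      - |
        # Pull every "hostname:" value out of the ingress rules and
        # register it under the tunnel before the connector starts.
        for host in $(awk '/hostname:/ {print $NF}' /etc/cloudflared/config.yml); do
          cloudflared tunnel --origincert /etc/cloudflared/cert.pem route dns my-tunnel "$host"
        done
    volumeMounts:
      - name: cloudflared-config
        mountPath: /etc/cloudflared
        readOnly: true
```

Note that cloudflared may refuse to touch a DNS record that already exists, so reruns can log errors for hostnames that are already routed; check the command's output on repeated deployments.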
Public hostnames aren't propagated automatically. In the example below you can see our configuration:
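(A representative ingress block for the cloudflare-tunnel chart's values; the hostnames and services below are placeholders, and the exact key layout may differ between chart versions:)

```yaml
cloudflare:
  tunnelName: my-tunnel
  ingress:
    - hostname: app.example.com
      service: http://app-service:80
    - hostname: api.example.com
      service: http://api-service:8080
```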
But these settings aren't applied automatically, meaning I still have to add the public hostnames in the Tunnel configuration manually. Are there any additional settings required? Or is this functionality not working yet?