Open Oloremo opened 3 years ago
> The `sidecar_task` configuration lets you specify the Envoy task in detail, including its configuration file. Check out the default Envoy configuration docs as a starting point.
I tried to find any examples without success. From my understanding I have to use an "escape hatch" to achieve this, so something like this:
```hcl
service {
  name = "lb-client"
  port = "main"

  connect {
    sidecar_service {
      proxy {
        config {
          envoy_local_cluster_json = <<EOF
{
  "lb_policy": "RANDOM"
}
EOF
        }

        upstreams {
          destination_name = "lb-server"
          local_bind_port  = 10000
        }
      }
    }
  }
}
```
It doesn't work for me yet; it seems like I have to define a full Envoy cluster configuration here, but all I want to change is the `lb_policy`.
Could you give an example or a hint here? Am I at least on the right path?
Ah, sorry, you're right. I'd forgotten we have that `proxy.config` option. `lb_policy` isn't one of the configuration values that Consul exposes directly, so you need to use that "escape hatch" of `envoy_local_cluster_json`:
> Specifies a complete Envoy cluster to be delivered in place of the local application cluster. This allows customization of timeouts, rate limits, load balancing strategy, etc.
So unfortunately it looks like you need to pass it the whole darn JSON config as described in the Envoy docs. You might be able to extract the full config from a running Envoy sidecar (via `nomad alloc exec` or `docker exec` into the sidecar task) and then modify it.
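For reference, a complete local cluster passed via `envoy_local_cluster_json` would look roughly like the sketch below. This is only an illustration of the Envoy cluster shape, not a drop-in config: the cluster name, address, and port are placeholders that would need to match what Consul generates for your actual local service.

```json
{
  "name": "local_app",
  "type": "STATIC",
  "connect_timeout": "5s",
  "lb_policy": "RANDOM",
  "load_assignment": {
    "cluster_name": "local_app",
    "endpoints": [
      {
        "lb_endpoints": [
          {
            "endpoint": {
              "address": {
                "socket_address": {
                  "address": "127.0.0.1",
                  "port_value": 8080
                }
              }
            }
          }
        ]
      }
    ]
  }
}
```

Since this replaces the generated cluster wholesale, dumping the config from a running sidecar first (as suggested above) and editing only `lb_policy` is the safer route.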
This seems like the sort of thing that lots of folks will want to configure. It might be a good idea to open an issue in Consul to see if this policy could be added to the proxy config, and then you could just pass it in the Nomad `proxy.config` setting (Nomad just passes that as an opaque blob to Consul, so there'd be nothing that Nomad would need to implement).
Created a Consul issue, I hope I made sense in it.
Hi @Oloremo, Consul 1.9 added support for configuring Envoy's `lb_policy` using the `LoadBalancer` parameter on a service resolver.
You just need to create a service-resolver configuration entry for your service in Consul in order for it to be used when deploying your application in Nomad.
```hcl
# backend-service-resolver.hcl
Kind = "service-resolver"
Name = "backend"

LoadBalancer = {
  Policy = "random"
}
```
Then write that configuration to Consul using `consul config write`:

```shell
consul config write backend-service-resolver.hcl
```
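Once written, the entry can be read back to confirm it was stored. This is a quick sanity check; the command below assumes the CLI can reach your Consul agent at its default address.

```shell
# Print the stored service-resolver entry for the "backend" service
consul config read -kind service-resolver -name backend
```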
The proxies within the service mesh will use the specified load balancer policy when connecting to the `backend` service.
A step-by-step tutorial for configuring various load balancer policies can be found at https://learn.hashicorp.com/tutorials/consul/load-balancing-envoy.
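Other policies are configured the same way on the resolver. As an illustration (the `ChoiceCount` value here is just an example, not a recommendation), a least-request resolver might look like:

```hcl
# backend-service-resolver.hcl
Kind = "service-resolver"
Name = "backend"

LoadBalancer = {
  Policy = "least_request"
  LeastRequestConfig = {
    ChoiceCount = 3
  }
}
```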
This kinda reminds me of https://github.com/hashicorp/nomad/issues/8647#issuecomment-691279667 -- I think it would be great to set those things as part of a Nomad job and let Nomad figure out the rest instead of having to manually create those entries in Consul. FWIW this might be less of an issue if the `Name` for Consul config entries supported wildcards, so you could pre-create those in a more dynamic way.
Ok, looks like we have a workaround for this, but also it would be nice to have a more "Nomad native" configuration setup. Going to mark this as a feature request and move it to the roadmap for further discussion.
Seems like it's possible in Consul Connect via `service-resolver`.
How can I configure it for Nomad jobs with Consul Connect?
In our tests we receive very uneven connection results using the default round-robin policy.