Open mark64 opened 9 months ago
I think we'll need to try and reproduce to figure out exactly what the issue is.
When you change the URL to this:
spicedb.spicedb:50053
you are switching to the default gRPC resolver, which uses DNS to resolve the names, instead of kuberesolver, which queries the Endpoints in Kubernetes directly. This means you're likely to see dropped traffic during pod reschedules and cluster upgrades.
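To illustrate the difference: the two resolution paths correspond to different gRPC target schemes. The sketch below only builds the target strings (the `kubernetes:///` scheme is the one registered by the kuberesolver library; treat the exact scheme strings as an assumption about this deployment, not verified config):

```go
package main

import "fmt"

// buildTarget assembles a gRPC target string for a given resolver scheme.
// Illustrative helper, not part of spicedb or the operator.
func buildTarget(scheme, service, namespace string, port int) string {
	return fmt.Sprintf("%s:///%s.%s:%d", scheme, service, namespace, port)
}

func main() {
	// Default gRPC resolver: names are resolved through cluster DNS,
	// so endpoint changes are only seen after DNS re-resolution.
	fmt.Println(buildTarget("dns", "spicedb", "spicedb", 50053))

	// kuberesolver: watches the Endpoints object via the Kubernetes API,
	// so pod reschedules are picked up immediately.
	fmt.Println(buildTarget("kubernetes", "spicedb", "spicedb", 50053))
}
```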
If you still have the cluster handy in the old configuration, we can try enabling debug logs?
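For reference, debug logging can typically be raised through the operator's config block. A minimal sketch, assuming the `SpiceDBCluster` resource accepts a `logLevel` key (the equivalent server flag is `--log-level=debug`; verify the field name against your operator version):

```yaml
# Sketch: raising SpiceDB's log verbosity via the operator config.
# The logLevel key is an assumption about the SpiceDBCluster API.
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
spec:
  config:
    logLevel: debug
```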
@mark64 You can set sidecar.istio.io/inject: "false"
for the spicedb application.
For reference, you can see this example - link
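That annotation goes on the pod template metadata, not the Deployment itself. A minimal sketch, assuming you can patch the Deployment directly:

```yaml
# Illustrative: disabling Istio sidecar injection for the SpiceDB pods.
# Only the annotations block matters; the rest is the usual pod template path.
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"
```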
@batazor I appreciate the suggestion. However, in my case I do want Istio enabled. It not only provides mTLS to my workloads but also lets me implement authorization and access policies.
@ecordell thanks for the explanation. This isn't too high priority for me but I'll let you know when I get a chance to try again with debug logs.
I'm working on running spicedb inside an istio-enabled namespace with mTLS in STRICT mode.
I noticed that when I enabled istio, 1 of the 2 spicedb pods would start up correctly and reach the READY state, while the other pod would fail to connect to the dispatch service with a TRANSIENT_FAILURE health check code.
When I changed the URL for the dispatch server from: https://github.com/authzed/spicedb-operator/blob/80bfd882f2dd81f7e9e3d2dd4e36f7a28449f445/pkg/config/config.go#L453 to:
spicedb.spicedb:50053
(service name.namespace name:dispatch port) using a patches
override, both pods were able to start up successfully. I'm a little new to the Kubernetes and SpiceDB world, so I'm wondering:
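For anyone trying to reproduce this, the patches override described above could be sketched roughly as follows. This is a sketch under assumptions: the `spec.patches` shape and the `SPICEDB_DISPATCH_UPSTREAM_ADDR` environment variable name are not verified against the operator's API and may differ by version:

```yaml
# Sketch: overriding the dispatch upstream address via the operator's
# patches mechanism. Field names are assumptions; check your operator docs.
apiVersion: authzed.com/v1alpha1
kind: SpiceDBCluster
metadata:
  name: spicedb
  namespace: spicedb
spec:
  patches:
    - kind: Deployment
      patch:
        spec:
          template:
            spec:
              containers:
                - name: spicedb
                  env:
                    - name: SPICEDB_DISPATCH_UPSTREAM_ADDR
                      value: spicedb.spicedb:50053
```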