Manikkumar1988 opened 3 months ago
Hi there! The Varnish Helm Chart currently doesn't do anything special regarding networking and purging. In the Enterprise version, we have varnish-broadcaster and varnish-discovery for this purpose, but with the open-source version you have to handle this manually for every instance.
Right now, the only way to enumerate all running Varnish instances is to deploy the Varnish Helm Chart with a headless ClusterIP service, e.g.:
```yaml
server:
  service:
    type: ClusterIP
    clusterIP: "None"
```
This allows `service-name.namespace.svc.cluster.local` to resolve to an A record for each individual Varnish instance. Then, the component that sends purges can issue a PURGE to every A record returned by the cluster's DNS.
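The DNS fan-out described above can be sketched with the standard library alone. The service name, port, and Host header below are assumptions — match the port to `server.http.port` and the Host header to whatever your VCL's purge logic expects:

```python
import socket
import http.client

def resolve_pods(service):
    """Resolve a headless service name to one IP per Varnish pod.

    A headless ClusterIP service returns an A record per backing pod.
    """
    _, _, ips = socket.gethostbyname_ex(service)
    return ips

def purge_all(ips, path, port=6081, host="example.com"):
    """Send PURGE for `path` to every pod; return {ip: HTTP status}."""
    results = {}
    for ip in ips:
        conn = http.client.HTTPConnection(ip, port, timeout=5)
        # The Host header should match the vhost your VCL purges on.
        conn.request("PURGE", path, headers={"Host": host})
        results[ip] = conn.getresponse().status
        conn.close()
    return results

# Hypothetical usage (adjust service name to your release/namespace):
# statuses = purge_all(resolve_pods("varnish.default.svc.cluster.local"), "/page")
```

Since you collect a status per pod, this also gives you the per-replica acknowledgment: treat anything other than a 200 as a failed invalidation and retry.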
Alternatively, there's something like varnish-towncrier that might work (although I haven't tried it myself). It could be set up through `server.extraContainers`. Since towncrier doesn't rely on Varnish VSM, it should be pretty straightforward to set up:
```yaml
server:
  extraContainers:
    - name: towncrier
      image: ghcr.io/emgag/varnish-towncrier:latest
      args:
        - listen
      env:
        - name: VT_REDIS_URI
          value: redis://redis-service
        # Match this with `server.http.port`
        - name: VT_ENDPOINT_URI
          value: http://127.0.0.1:6081/
        # the rest of the configuration
```
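On the publishing side, towncrier subscribes to a Redis pub/sub channel and replays the events it receives against the local Varnish endpoint. I don't have towncrier's exact channel name and event schema at hand, so the channel and payload below are placeholders — check towncrier's README for the real format. Only the Redis PUBLISH mechanics are shown, as a minimal stdlib-only sketch:

```python
import json
import socket

def resp_encode(*parts):
    """Encode a Redis command as a RESP array of bulk strings."""
    out = [f"*{len(parts)}\r\n".encode()]
    for p in parts:
        b = p if isinstance(p, bytes) else str(p).encode()
        out.append(b"$%d\r\n%s\r\n" % (len(b), b))
    return b"".join(out)

def publish_purge(redis_host, channel, payload, redis_port=6379):
    """PUBLISH a JSON event to the channel towncrier listens on.

    Returns Redis's raw reply (":N\r\n", N = number of subscribers),
    so you can verify at least one towncrier instance received it.
    """
    with socket.create_connection((redis_host, redis_port), timeout=5) as s:
        s.sendall(resp_encode("PUBLISH", channel, json.dumps(payload)))
        return s.recv(64)

# Hypothetical channel name and payload schema:
# publish_purge("redis-service", "varnish.purge",
#               {"command": "purge", "path": "/page"})
```

In practice you'd use a proper Redis client, but the subscriber count in the reply is a useful sanity check that your towncrier sidecars are actually connected.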
I'm going to ping @gquintard and @ThijsFeryn in case they have a better idea :-)
I wanted to use the official Varnish Helm chart to deploy a Varnish cluster with HPA enabled. Can you explain how purging works in this setup and recommend how to ensure that a PURGE request invalidates the cache consistently across all replicas? I'd also like to get an acknowledgment from each replica that its cache was invalidated.