Closed: jefflill closed this issue 5 years ago
I've done some experimentation with the libvmod-dynamic VMOD and I believe I can get it to work with one caveat: neon-proxy-cache instances will only be able to support a single backend identified by hostname. We will not support multiple backends with hostnames, and we also won't support multiple backends that mix IP addresses and hostnames.
The fundamental problem here is that libvmod-dynamic requires that the backend for a specific origin hostname be selected in the vcl_recv
subroutine. It is not possible to fetch the backends and add them to a director during initialization. Varnish Plus goto has an additional method that returns a director for an origin server (using its hostname), which could then be added to a round-robin or other director, but the free VMOD can't do that. I looked into some workarounds, but VCL is not expressive enough and I don't want to include custom C code.
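As a rough sketch of what the free VMOD does support, selecting the backend per request in vcl_recv looks something like this (the hostname, port, and TTL below are placeholders, not our actual configuration):

```vcl
vcl 4.0;

import dynamic;

# No static backend; everything is resolved dynamically.
backend default none;

sub vcl_init {
    # The director re-resolves hostnames periodically (per the TTL)
    # instead of only once at VCL compile time.
    new origin = dynamic.director(port = "80", ttl = 60s);
}

sub vcl_recv {
    # The backend must be selected here, per request, from the origin
    # hostname. There is no way to enumerate the resolved backends and
    # add them to a round-robin director during initialization.
    set req.backend_hint = origin.backend("origin.example.com");
}
```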
The primary neonHIVE scenario is to provide caching for a single service via its name anyway, so these restrictions are pretty reasonable for now.
Here's what I need to do: select the backend for the request's origin hostname in the vcl_recv
subroutine. This all works now. I'm going to leave this issue open though and put it on the backlog.
It would be really nice to relax the single-backend-with-hostname restriction in the future. This will probably require adding a .director(hostname)
method to the libvmod-dynamic VMOD. That will take a much better understanding of how VMODs work, but I can probably lift some code from other VMODs.
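If such a method existed, the VCL could compose dynamically resolved backends into a standard director, roughly like this (purely hypothetical; .director() is not part of libvmod-dynamic today, and the hostnames are placeholders):

```vcl
vcl 4.0;

import dynamic;
import directors;

backend default none;

sub vcl_init {
    new dns = dynamic.director(port = "80", ttl = 60s);
    new rr  = directors.round_robin();

    # HYPOTHETICAL: dns.director() does not exist in the free VMOD.
    # It would return a dynamically refreshed backend for each hostname,
    # which could then be added to the round-robin director here,
    # removing the single-backend restriction.
    rr.add_backend(dns.director("origin1.example.com"));
    rr.add_backend(dns.director("origin2.example.com"));
}

sub vcl_recv {
    set req.backend_hint = rr.backend();
}
```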
I ran into this while testing Varnish against a test vegomatic service.
The basic problem here is that the default Varnish directors will resolve backend host names just once when the VCL is compiled and use the resolved IP address until the VCL is reloaded. This causes at least two problems for this scenario:
If you set a load balancer rule with caching enabled before the target Docker service is started, then the service name resolution will fail resulting in a VCL compilation failure. This is really bad because the VCL compilation will continue failing until the service is started and its hostname can be resolved and the neon-proxy-cache instance receives a fail-safe broadcast message (which may be 60+ seconds later).
If you have a service running and being cached correctly and you stop the service and restart it with the same name, Docker will assign a new virtual IP address to the service but Varnish will continue using the old address until the VCL is reloaded. The service will essentially be down during this time.
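Both problems stem from the standard static backend declaration, which looks roughly like this (the service name is taken from the vegomatic test above; the port is an assumption):

```vcl
vcl 4.0;

# With a plain backend declaration, "vegomatic" is resolved exactly once,
# when this VCL is compiled:
#   - if the name does not resolve yet, compilation fails outright;
#   - if the service is later restarted with a new virtual IP, Varnish
#     keeps using the stale address until the VCL is reloaded.
backend vegomatic {
    .host = "vegomatic";
    .port = "80";
}

sub vcl_recv {
    set req.backend_hint = vegomatic;
}
```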
I did some research and of course Varnish Plus has a solution: goto. We don't want to go there and require our users to pay $12K for Varnish Plus.
So I found a reasonable looking alternative: libvmod-dynamic. The idea is to include this in our custom Varnish build and have neon-proxy-manager use this instead of the standard round-robin director.
Here's a general description of this problem:
https://info.varnish-software.com/blog/varnish-backends-in-the-cloud