caddyserver / caddy

Fast and extensible multi-platform HTTP/1-2-3 web server with automatic HTTPS
https://caddyserver.com
Apache License 2.0

feature request: random number placeholder #6019

Closed · elee1766 closed this 9 months ago

elee1766 commented 9 months ago

Currently, in order to load balance canary deploys, I do something like this:

:3838 {
    @app_canary {
        path /app*
        expression "(int({time.now.unix_ms}) % 10) >= 8"
    }

    route /* {
        route @app_canary {
                respond "canary"
        }
        route /app* {
            respond "normal"
        }
    }
}

this would send ~20% of requests (those whose millisecond timestamp ends in 8 or 9) to the canary deployment

It would be nice if there were a {math.rand} placeholder; that would feel a little better. Or maybe an {env.RANDOM}-style value, like the $RANDOM some shells export.
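For illustration, the matcher could then drop the time trick entirely. This is purely hypothetical - {math.rand} doesn't exist today, and I'm assuming it would expand to a random integer:

@app_canary {
    path /app*
    # hypothetical {math.rand} placeholder, assumed to expand to a random integer
    expression "(int({math.rand}) % 10) >= 8"
}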

mohammed90 commented 9 months ago

I think you're having an XY problem, so your solution isn't optimal. You don't need a random number generator in the replacer. You need an upstream selection policy that respects proportioned selection.
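For reference, reverse_proxy already supports proportioned selection. A minimal sketch (assuming a recent Caddy with the weighted_round_robin policy; app:8080 and app-canary:8080 are placeholder upstreams) that would send roughly 20% of /app traffic to the canary:

:3838 {
    reverse_proxy /app* app:8080 app-canary:8080 {
        # weights correspond to the upstreams in order: ~80% / ~20%
        lb_policy weighted_round_robin 8 2
    }
}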

elee1766 commented 9 months ago

> I think you're having an XY problem, so your solution isn't optimal. You don't need a random number generator in the replacer. You need an upstream selection policy that respects proportioned selection.

Upstream selection policy is for reverse_proxy, though.

I'm not reverse proxying; I'm going between different routes - see my example.

The goal is to stop reverse proxying with one webserver per SPA, and to host all the different versions and SPAs from one app.

elee1766 commented 9 months ago

Here's an example of the config in context.

I couldn't find a way to apply an upstream selection policy to different routes, so this expression hack was the least intrusive workaround I could think of.

Adding upstream selection policies to route matching seemed like a pretty heavy lift; this felt like a "good enough" solution. Combined with the fact that date/time placeholders already exist, we can write pre-scheduled staged deploys through Caddy expressions alone - for instance, activating a deploy to 100% once the clock passes a preconfigured time (sketched after the config below).

{
    debug
    admin off

    filesystem analytics vfs "s3://nyc3.digitaloceanspaces.com/oku-artifacts/tags/info/v0.2.5.tar.gz" {
        header AWS_BUCKET_NAME "oku-artifacts"
        header AWS_ACCESS_KEY_ID {env.AWS_ACCESS_KEY_ID}
        header AWS_SECRET_ACCESS_KEY {env.AWS_SECRET_ACCESS_KEY}
    }
    filesystem trade vfs "s3://nyc3.digitaloceanspaces.com/oku-artifacts/tags/app/v0.2.0.tar.gz" {
        header AWS_BUCKET_NAME "oku-artifacts"
        header AWS_ACCESS_KEY_ID {env.AWS_ACCESS_KEY_ID}
        header AWS_SECRET_ACCESS_KEY {env.AWS_SECRET_ACCESS_KEY}
    }
    filesystem trade-canary vfs "s3://nyc3.digitaloceanspaces.com/oku-artifacts/tags/app/v0.2.2.tar.gz" {
        header AWS_BUCKET_NAME "oku-artifacts"
        header AWS_ACCESS_KEY_ID {env.AWS_ACCESS_KEY_ID}
        header AWS_SECRET_ACCESS_KEY {env.AWS_SECRET_ACCESS_KEY}
    }
    filesystem landing vfs "s3://nyc3.digitaloceanspaces.com/oku-artifacts/tags/landing/v0.0.63.zip" {
        header AWS_BUCKET_NAME "oku-artifacts"
        header AWS_ACCESS_KEY_ID {env.AWS_ACCESS_KEY_ID}
        header AWS_SECRET_ACCESS_KEY {env.AWS_SECRET_ACCESS_KEY}
    }

}

(spa) {
    fs {args[0]}
    prerender_io {$PRERENDER_TOKEN} {args[1]}
    redir {args[1]} {args[1]}/
    uri strip_prefix {args[1]}/
    try_files {path} /
    file_server
}

:3838 {
    @app_canary {
        path /*
        expression "(int({time.now.unix_ms}) % 10) >= 8"
    }
    route /* {
        import spa landing /
    }
    route /app* {
        route @app_canary {
            import spa trade-canary /app
        }
        route /* {
            import spa trade /app
        }
    }
    route /info* {
        import spa analytics /info
    }
}
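
As an aside on the staged-deploy idea above, a gate like this would flip /app to 100% canary once the clock passes a cutover time. This is just a sketch; 1735689600 is an example cutover (2025-01-01 00:00:00 UTC):

@canary_live {
    path /app*
    # example cutover: 1735689600 = 2025-01-01 00:00:00 UTC
    expression "int({time.now.unix}) >= 1735689600"
}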

mholt commented 9 months ago

I think the actual problem/request is:

> send ~20% of requests to the canary deployment

or, since this sounds like proxying specifically, rephrased to be more accurate and generalized:

> spread traffic across HTTP routes

As there are more ways to spread traffic than just randomizing, a random number placeholder as the proposed solution is not optimal.

I think the best solution would be a custom HTTP handler plugin that effectively applies the proxy's selection policies to configured HTTP routes instead. Or something like that; the logic from those modules would likely have to be factored out into new modules. Who knows, maybe they generalize!
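To make the idea concrete, the Caddyfile for such a plugin might look something like this. The split_traffic directive is entirely hypothetical - no such module exists yet - and is only meant to show the shape of the config:

route /app* {
    # hypothetical directive from a not-yet-written plugin;
    # weights pick one of the nested routes per request (~80% / ~20%)
    split_traffic 8 2 {
        route {
            respond "normal"
        }
        route {
            respond "canary"
        }
    }
}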

We don't lightly add new modules to the core distribution anymore unless they're generally useful and well-vetted, or sponsored with a good use case, but anyone is welcome to work on this feature as a Caddy plugin. :smiley: If it is generally useful then, we can always bring it in for everyone to use by default. It's just hard to remove things that get added and don't end up being commonly used.

I'd be happy to build something like this with a Business sponsorship or higher tier.

I'll close this, but feel free to continue discussion. :+1:

elee1766 commented 9 months ago

> I think the actual problem/request is:
>
> > send ~20% of requests to the canary deployment
>
> or, since this sounds like proxying specifically, rephrased to be more accurate and generalized:
>
> > spread traffic across HTTP routes
>
> As there are more ways to spread traffic than just randomizing, a random number placeholder as the proposed solution is not optimal.
>
> I think the best solution would be a custom HTTP handler plugin that effectively applies the proxy's selection policies to configured HTTP routes instead. Or something like that; the logic from those modules would likely have to be factored out into new modules. Who knows, maybe they generalize!
>
> We don't lightly add new modules to the core distribution anymore unless they're generally useful and well-vetted, or sponsored with a good use case, but anyone is welcome to work on this feature as a Caddy plugin. :smiley: If it is generally useful then, we can always bring it in for everyone to use by default. It's just hard to remove things that get added and don't end up being commonly used.
>
> I'd be happy to build something like this with a Business sponsorship or higher tier.
>
> I'll close this, but feel free to continue discussion. :+1:

Yeah... I wanted to avoid needing to develop and maintain a new module. To me, that is much worse than a suboptimal solution.

We'll probably just continue using the modulo-of-time trick, then - it's good enough.

mholt commented 9 months ago

> Yeah... I wanted to avoid needing to develop and maintain a new module.

Well, if someone else doesn't build it then we'll have to, but we have too much on our plate right now to take that on for free. We'd happily work on it with a sponsorship, though.

> To me, that is much worse than a suboptimal solution.

I actually think a module to spread traffic across routes would be the proper solution to this, but your current solution works just fine for what you need to do. :+1:

Anyway, I really do think it's a good idea; we're just time-constrained at the moment. Thanks for requesting it!