Closed by mac-chaffee 2 weeks ago
I wish there was more control over the API endpoint. I also wish it was exposed on the admin interface (default port 2019), which offers a remote administration feature with mTLS authentication and ACLs (I discuss this capability here).
In its current form, I'd block access to the Souin API with a matcher on the main site, and use the cache handler on another site with mTLS enabled. Putting this together yields this Caddyfile:
{
	cache {
		ttl 120s
		otter
		api {
			souin
		}
	}
}
example.com {
	@souinApi path /souin-api*
	handle @souinApi {
		respond 405
	}
	handle {
		cache
		respond "Hello World!"
	}
}
souin.example.com {
	tls {
		client_auth {
			mode require_and_verify
			# pick one from here: https://caddyserver.com/docs/caddyfile/directives/tls#trust-pool-providers
			trust_pool <pool config>
		}
	}
	cache
}
This config protects the Souin API with mTLS.
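For local testing of the mTLS setup, a throwaway CA and client certificate can be generated with openssl (the file names and subjects here are arbitrary; the resulting ca.crt is what the trust_pool would be configured to trust):

```shell
# Hypothetical throwaway CA for testing (ca.crt backs the trust_pool)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=test-ca"

# Client key + CSR, then sign the CSR with the test CA
openssl req -newkey rsa:2048 -nodes \
  -keyout client.key -out client.csr -subj "/CN=test-client"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out client.crt -days 1
```

A purge through the protected site would then look something like `curl -X PURGE --cert client.crt --key client.key https://souin.example.com/souin-api/souin/flush` (hostname taken from the example above).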
P.S.: Excuse the formatting. I'm typing on the phone.
Thanks for the example! With some modifications (I needed the order cache after handle directive), I was able to block access to the API. But if you expose the API in a different site block, those two APIs aren't actually sharing the same cache:
{
	order cache after handle
	cache {
		ttl 120s
		otter
		api {
			souin
		}
	}
}
localhost {
	handle /souin-api/* {
		respond 405
	}
	cache
	respond 200
}
# Expose it on another port that could be firewalled off, BUT this is actually a different API
localhost:3728 {
	cache
	respond 200
}
This shows that purging the cache doesn't actually work for the other site block:
$ curl -ik https://localhost/
...
cache-status: Souin; fwd=uri-miss; stored; key=GET-https-localhost-/
...
$ curl -ik https://localhost/
...
cache-status: Souin; hit; ttl=118; key=GET-https-localhost-/; detail=OTTER
...
$ curl -iX PURGE http://localhost:3728/souin-api/souin/flush
HTTP/1.1 204 No Content
...
# Observe the route is still cached:
$ curl https://localhost/
cache-status: Souin; hit; ttl=106; key=GET-https-localhost-/; detail=OTTER
Then I thought maybe we could do something like this so they share the same site block:
...
localhost {
	handle /souin-api/* {
		basic_auth {
			testuser $2a$14$i1G0lil5qti7qahb4.Kte.wP/3O8uaStduzhBBtuDUZhMJeSjxbqm
		}
		cache
		respond 200
	}
	cache
	respond 200
}
But that has the same problem: it creates two different caches.
Maybe some Caddy experts can chime in on how to solve this directive ordering puzzle.
Ahh, this seems to work! The directive ordering puzzle can be solved by using route:
{
	cache {
		ttl 120s
		otter
		api {
			souin
		}
	}
}
localhost {
	route {
		@souinApi path /souin-api/*
		basic_auth @souinApi {
			testuser $2a$14$i1G0lil5qti7qahb4.Kte.wP/3O8uaStduzhBBtuDUZhMJeSjxbqm
		}
		cache
		respond "Hello World!"
	}
}
Then the API can be accessed like this:
curl -ik -X PURGE -u testuser:password https://localhost/souin-api/souin/flush
Looks like it's currently impossible to move the API routes to a different port because the API handler and the cache handler are part of the same Go function. But we can use things like basic_auth, remote_ip, forward_auth, etc.
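As a sketch of the remote_ip variant (my assumption of how it could look, untested): restricting the API path to loopback clients within the same site block keeps a single shared cache while still refusing outside purge requests:

```caddyfile
localhost {
	route {
		# deny /souin-api/* unless the client connects from loopback
		@apiNotLocal {
			path /souin-api/*
			not remote_ip 127.0.0.1 ::1
		}
		respond @apiNotLocal 403
		cache
		respond "Hello World!"
	}
}
```

The named matcher combines the path matcher with a negated remote_ip matcher, so only requests that are both on the API path and from a non-loopback address get the 403.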
Hello! I was excited to learn this repo exists since it makes caching very simple!
I'm looking for ways to avoid exposing the Souin API to the internet, so that not just anyone can purge the cache and possibly overload my website. By default, when you enable the API, it is exposed as an additional route alongside the rest of the application, which I want to avoid.
After the deprecation of Souin's security endpoint, which used JWTs, it sounds like we're supposed to protect the API using Caddy itself, but I'm not sure how to do that.
Ideally I'd like the API to be reachable only on localhost, or maybe on a different port number. I see that Caddy supports matchers that could be used to protect normal routes, but since the cache.api config has to be global, I'm not sure how to expose it in a dedicated site block. Does anyone from the community have an example of how to do this? Thanks in advance!