Closed: danielloader closed this issue 1 month ago.
Hello @danielloader, do you have some debug logs? FYI, the value you're seeing is protobuf-encoded, which is why it looks ugly. Did you try to reset the storage?
Hello, I did enable debug log level on the cache middleware, but as far as I can tell I don't get any additional related log lines. I'll re-enable it later to be sure.
I've tried deleting individual keys that seem to be poisoning the cache, setting a maximum cacheable body size (2 MB), and doing a full delete of the Redis pod and PV to get a fresh cache persistence layer.
Anything else I could try?
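For reference, this is roughly what I toggled on the cache handler while chasing the missing log lines (a minimal sketch, not my full config, which I can share if useful):

```
{
    cache {
        # Bump the cache middleware's own logger while investigating;
        # it normally runs at info in my setup.
        log_level debug
    }
}
```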
@danielloader what's your config?
```
{
    auto_https off
    order cache before rewrite
    cache {
        allowed_http_verbs GET HEAD
        log_level info
        max_cacheable_body_bytes 2097152
        api {
            prometheus
            basepath /_admin
        }
        mode bypass_request
        ttl 86400s
        redis {
            configuration {
                Addrs {$REDIS_HOST}:{$REDIS_PORT}
                Password {$REDIS_PASSWORD}
            }
        }
    }
    log {
        output stdout
        format json
        level {$LOG_LEVEL}
    }
}

# Health checks
:80 {
    respond /healthz 200
}

# Prometheus
:9090 {
    metrics /metrics
}

# Common options for vendors
(common) {
    log
}

:80 {
    import common
    cache {
        cache_name cache
        ttl 31536000s # 1 year redis side
        default_cache_control "max-age=86400" # 1 day browser side
    }
    reverse_proxy {
        to https://example.com:443
    }
}
```
Hoping the slightly simplified, redacted config is useful; the actual reverse proxy has auth headers injected at this point and points to a secured service.
It's just interesting that the Redis cache insertion claims to have failed, yet the key exists with the correct TTL. Though I have no way to judge the validity of the value, and I guess the `proto: cannot parse invalid wire-format data` response throws some doubt on that.
I wonder if that could be related to Cloudflare. Can you try requesting your server directly, without going through Cloudflare?
The `cannot parse invalid wire-format data` error is thrown because the writer doesn't write as many bytes as the reader tries to read. I assume you used the `github.com/darkweak/storages/redis/caddy` module, right?
I've tried both the go-redis and redis modules just to rule out any client implementation issues. I'll try it directly as you suggest, by port-forwarding to the Caddy instance and hitting it with curl. That said, the upstream I am reverse proxying is itself hosted behind Cloudflare.
```
FROM caddy:2-builder AS builder
RUN xcaddy build \
    --with github.com/caddyserver/cache-handler@v0.14.0 \
    # --with github.com/darkweak/storages/redis/caddy@v0.0.8 \
    --with github.com/darkweak/storages/go-redis/caddy@v0.0.8

FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY ./Caddyfile /etc/caddy/Caddyfile
```
Edit: I've done the testing with port forwarding and spoofing the Host header to get requests proxied and cached, and I see the same behaviour as when going through our own Cloudflare proxy route. As mentioned, the upstream service I am trying to cache uses Cloudflare on their edge, but the vast majority of keys do cache; just some don't, and I can't see anything in the responses that would mark them as different. We're talking JSON with a few dozen kB of variation between each call.
@danielloader did you try with the latest release (`github.com/darkweak/souin/plugins/caddy@v1.7.2`)? First without Redis, then adding Redis back if that works?
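For that first test, something like this should do: a trimmed-down version of your global cache block with the redis storage block dropped, so the Redis layer is out of the picture entirely (a rough sketch based on the config you posted above):

```
{
    order cache before rewrite
    cache {
        allowed_http_verbs GET HEAD
        log_level debug
        mode bypass_request
        ttl 86400s
        # No redis block: Redis is taken out of the equation for this test.
    }
}
```

If the problematic responses cache correctly like this, add the redis block back and retest.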
I'll give it a shot tomorrow and get back to you.
@darkweak Given the Dockerfile:
```
FROM caddy:2-builder AS builder
RUN xcaddy build \
    --with github.com/caddyserver/cache-handler@v0.14.0 \
    --with github.com/darkweak/storages/go-redis/caddy@v0.0.8

FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY ./Caddyfile /etc/caddy/Caddyfile
```
What would I be changing here? I'm not sure whether I need to drop the cache-handler plugin to add the one you suggested, or something else.
Edit: It seems to have built, but as far as I can see there is no change in outcome: the keys that refused to cache (or to be read back afterwards) behave the same as before.
```
FROM caddy:2-builder AS builder
ARG TARGETOS TARGETARCH
RUN --mount=type=cache,sharing=private,id=${TARGETARCH}/go/pkg,target=/go/pkg \
    xcaddy build \
    --with github.com/darkweak/souin/plugins/caddy@v1.7.2 \
    --with github.com/darkweak/storages/go-redis/caddy@v0.0.10

FROM caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy
COPY ./Caddyfile /etc/caddy/Caddyfile
```
Okay, good news: after some testing it seems to behave correctly now.
The only issue I'm seeing now is that Vary (for Accept-Encoding) is not being included in the cache keys. It's a separate issue for sure, but I don't know how to make sure the Accept-Encoding variation is covered in the cache so that gzip responses aren't served to clients sending `Accept-Encoding: br` requests. It's a little more complex when you're caching in a reverse_proxy context than when serving up your own responses.
Edit: added `header_down Vary Accept-Encoding` to the reverse_proxy directive and it's all happy now.
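For anyone landing here later, the site block ended up looking roughly like this (a sketch based on the redacted config above; the real proxy also injects auth headers):

```
:80 {
    import common
    cache {
        cache_name cache
        ttl 31536000s                         # 1 year redis side
        default_cache_control "max-age=86400" # 1 day browser side
    }
    reverse_proxy {
        to https://example.com:443
        # Make sure responses carry Vary: Accept-Encoding so cached
        # gzip/br variants aren't served interchangeably.
        header_down Vary Accept-Encoding
    }
}
```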
Facing an interesting issue where most of the requests I make are caching as expected, but one isn't, and I can't deduce why.
Which yields the expected header in the response of:
But where do I go about debugging this further? I am using the caddy middleware approach here.
Interestingly, the key is visible in Redis with what looks to be the correct payload. It's hard to tell with all the non-printable characters in the value field, but it looks similar to the JSON I'd expect to be stored under that key.
Noticed that keys that work look like this:
Whereas this misbehaving key has stored the header in the key as follows:
I've tried various Accept-Encoding headers, wondering if it's due to compression, but ultimately I can't work it out. Any tips on where to go next would be great!