rythm-of-the-red-man opened 2 weeks ago
Well, the answer came faster than I thought. This issue was helpful. It turns out Azure Cache for Redis is not a liberal one: it times out idle connections, and the health check is off by default (and, I have to admit, poorly described in the docs). If you don't mind, I'll open a PR with amendments to the docs that better communicate that you can actually pass additional kwargs to the redis-py client. In my opinion this might be helpful for users with managed Redis instances.

Example of a valid config:
```python
CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "address": CHANNELS_REDIS_URL,
                    "retry_on_timeout": True,
                    "health_check_interval": 1,
                    "socket_keepalive": True,
                }
            ],
            "capacity": 1500,
            "expiry": 5,
        },
    },
}
```
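For context on why the extra keys work: as far as I can tell, everything in a host dict besides the address is forwarded to the redis-py client, so the config above should be roughly equivalent to building the client by hand like this (a sketch only; the URL is a placeholder, not our real one):

```python
# Rough redis-py equivalent of the host dict above (a sketch, assuming the
# extra host kwargs are forwarded to the client). The URL is a placeholder
# for a managed instance such as Azure Cache for Redis.
import redis.asyncio as aioredis

client = aioredis.Redis.from_url(
    "rediss://:<access-key>@<cache-name>.redis.cache.windows.net:6380/0",
    retry_on_timeout=True,     # retry a command that hits a socket timeout
    health_check_interval=1,   # PING before reusing a connection idle > 1 s
    socket_keepalive=True,     # TCP keepalive so idle sockets aren't dropped silently
)
```

`health_check_interval` is the key one here: redis-py issues a `PING` on any connection that has been idle longer than the interval before running the real command, which is exactly what catches connections a managed instance has quietly killed.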
### Stack
All services are hosted on Azure Kubernetes Service behind ingress-nginx and a load balancer.
### Traceback
### Description
This issue keeps happening on prod and on the test env (more or less a clone of prod) when we couple `channels` with `redis`. I suspect that the managed Redis instance times out idle connections and `channels_redis` does not attempt to reconnect (the idea might be dumb though; if so, I'm sorry, I don't know much about the internals of `channels_redis` and `redis` in general). I think this might be the case because the pattern looks roughly as follows: the issue occurs when I start the client app, wait ~10 minutes, and then try any action related to channels, like re-establishing the websocket connection by refreshing the page. I assumed it might be a `channels_redis` bug, which is why I wrote about it here. I'd love any feedback, thanks in advance.
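To make the suspicion above concrete, here is a minimal reproduction sketch of what I think happens (assuming a reachable instance in a `REDIS_URL` env var; the ~10 minute idle timeout is my guess at the managed instance's behaviour):

```python
# Hypothetical reproduction sketch, not taken verbatim from our app:
# open a connection, idle past the presumed server-side timeout, reuse it.
import asyncio
import os

import redis.asyncio as aioredis


async def main():
    client = aioredis.Redis.from_url(os.environ["REDIS_URL"])
    await client.ping()            # works right after connecting
    await asyncio.sleep(11 * 60)   # idle past the ~10 minute server-side timeout
    # Without health checks / keepalive the server has silently dropped the
    # socket by now, so this call can raise ConnectionError instead of
    # transparently reconnecting.
    await client.ping()


asyncio.run(main())
```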
### Strange part

Well, that's kinda odd, but since it happened I decided to include it here. When I run the daphne instance in the Dockerfile like this:
the issue appears, but if I run another server after connecting to the working pod, like
and I connect to the second one, the issue doesn't seem to appear (or I didn't manage to catch it).
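One more hedged idea for anyone hitting the same reconnect suspicion: redis-py also accepts an explicit retry policy, and assuming the host-dict kwargs are forwarded the same way as above, it should be possible to pass one through the channel layer config. A sketch I haven't verified against Azure:

```python
# Sketch: an explicit redis-py retry policy for dropped connections.
# Assumes channels_redis forwards these kwargs to the client, as with the
# health-check settings above.
from redis.asyncio.retry import Retry
from redis.backoff import ExponentialBackoff
from redis.exceptions import ConnectionError, TimeoutError

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {
            "hosts": [
                {
                    "address": CHANNELS_REDIS_URL,  # same setting as in the config above
                    "retry": Retry(ExponentialBackoff(), retries=3),
                    "retry_on_error": [ConnectionError, TimeoutError],
                }
            ],
        },
    },
}
```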