Open sylvain-de-fuster opened 1 year ago
@sylvain-de-fuster | Checked redis configuration (~20 clients connected for 10K limit).
How are you checking that exactly, i.e. was this a redis-cli command? Also, we have Redis containers in both the web and task pods; which one did you check?
Hello @fosterseth ,
I wasn't clear about my checks.
I do indeed use this command: redis-cli -s /var/run/redis/redis.sock. It turns out I checked the wrong Redis container: I only checked the web one.
The metrics for the two containers are completely different:
root@awx-fake-name-web-7b745964dc-p7czc:/data# redis-cli -s /var/run/redis/redis.sock
redis /var/run/redis/redis.sock> info clients
# Clients
connected_clients:30
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20559
client_recent_max_output_buffer:20504
blocked_clients:1
tracking_clients:0
clients_in_timeout_table:1
redis /var/run/redis/redis.sock>
root@awx-fake-name-task-c7dcdcbcf-f54ls:/data# redis-cli -s /var/run/redis/redis.sock
redis /var/run/redis/redis.sock> info clients
# Clients
connected_clients:1576
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20559
client_recent_max_output_buffer:0
blocked_clients:5
tracking_clients:0
clients_in_timeout_table:5
redis /var/run/redis/redis.sock>
These metrics and all the logs given before are from our dev environment (both the dev and prod environments are impacted by this type of issue).
Our production has ~3900 clients connected to the task's Redis (same default 10K maxclients limit).
What is the purpose of each Redis instance? With no jobs running, are these connection counts a normal situation?
Thanks for your help
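As an aside, `INFO clients` output like the dumps above is plain `key:value` text, so it is easy to collect programmatically for monitoring. A hypothetical parsing helper (not part of AWX; the function name is invented for illustration):

```python
def parse_info_clients(raw: str) -> dict:
    """Turn the body of `redis-cli info clients` into a dict,
    converting numeric values to int."""
    fields = {}
    for line in raw.splitlines():
        line = line.strip()
        # Skip blank lines and section headers like "# Clients"
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        fields[key] = int(value) if value.lstrip("-").isdigit() else value
    return fields

# Sample taken from the task-pod dump above
sample = """# Clients
connected_clients:1576
maxclients:10000
blocked_clients:5
"""
print(parse_info_clients(sample)["connected_clients"])  # 1576
```

Polling this periodically would make the accumulation visible as a time series rather than a point-in-time snapshot.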
Hello,
Just checked our current metrics on redis and there is clearly an accumulation behavior for the task's redis client connections.
• Evolution since last week (Jul 20):
AWX dev env: 1576 → 2172
AWX prod env: 3900 → 7617
Any idea what could cause this? (And hopefully, of course, a workaround/solution.)
Thanks
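Those growth numbers imply a fairly steady leak. A quick back-of-the-envelope calculation (assuming roughly a 7-day interval between the two readings, which is an assumption, not stated exactly above):

```python
def leak_rate_per_day(start: int, end: int, days: float) -> float:
    """Average growth in connected_clients per day between two readings."""
    return (end - start) / days

def days_until_limit(current: int, rate: float, maxclients: int = 10000) -> float:
    """Days until connected_clients would hit the redis maxclients limit."""
    return (maxclients - current) / rate

# Prod figures reported above: 3900 -> 7617 over an assumed 7 days
prod_rate = leak_rate_per_day(3900, 7617, 7)
print(round(prod_rate))                             # 531 clients/day
print(round(days_until_limit(7617, prod_rate), 1))  # 4.5 days to the 10K limit
```

At that rate the prod instance would exhaust the default maxclients within the week, which matches the "seems random" outages described later in the thread.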
@shanemcd @relrod seems we may be leaking open redis clients?
@sylvain-de-fuster thanks for this info. Do you have a lot of users using the UI daily? AWX uses Django Channels with a Redis backend to handle UI websocket connections. Wondering if there are unclosed websocket connections across browsers connected to the AWX server.
@fosterseth ,
Below are some details on our usage:
• AWX prod: ~580 users indexed in the AWX interface
• AWX dev: ~150 users indexed in the AWX interface
AWX is mainly accessed "manually" by users, but we also have some calls from Jenkins/GitHub/curl. I couldn't say how many users use it daily (unless there are metrics I could check?); I think it is safe to cut the total indexed user count at least in half.
We are currently in a vacation period, which strongly reduces UI access (nevertheless, the number of clients in Redis increases pretty quickly).
Don't hesitate to ask if there is any information you need or tests you'd like us to run.
I want to investigate the subsystem_metrics broadcasting as well
@sylvain-de-fuster what does your AWX scaling look like, i.e. the web and task replica counts?
@fosterseth
Same configuration on both the dev and production platforms: the replicas field is set to 1, so the task/web deployments are each scaled to 1.
I ran into this issue today. It was primarily affecting project syncs to GitHub, but we were also unable to run management jobs while we were impacted. Some of the error messages we were seeing are below (quoted for searchability):
redis.exceptions.ConnectionError: max number of clients reached
redis.exceptions.ConnectionError: Connection not ready
For those out there looking for a quick workaround:

1. (Optional) Prune the system:
minikube ssh -- docker system prune
2. List your pods and take note of the awx-task pod name:
kubectl get pods
3. Start a bash shell in the awx-task pod (this should default to the redis container):
kubectl exec --stdin --tty <awx-task-pod-name> -- /bin/bash
4. Connect to redis-server via the Unix socket from within the redis container inside the awx-task pod:
redis-cli -s /var/run/redis/redis.sock
5. Set the client timeout to 300 seconds:
redis /var/run/redis/redis.sock> config set timeout 300
OK
redis /var/run/redis/redis.sock> config get timeout
1) "timeout"
2) "300"
6. (Optional) Run the Cleanup Expired Sessions job within AWX.

Now attempt to sync a project, or run any kind of job. Your AWX should now be functioning again.
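The interactive steps can also be collapsed into a single non-interactive command. A sketch that just builds the command line (the pod name is environment-specific, `-c redis` assumes the container is named `redis`, and `workaround_cmd` is a hypothetical helper, not anything shipped with AWX):

```python
def workaround_cmd(task_pod: str, timeout_s: int = 300) -> list:
    """Build the one-shot kubectl command that applies the redis
    idle-timeout workaround without an interactive shell."""
    return [
        "kubectl", "exec", task_pod, "-c", "redis", "--",
        "redis-cli", "-s", "/var/run/redis/redis.sock",
        "config", "set", "timeout", str(timeout_s),
    ]

print(" ".join(workaround_cmd("<awx-task-pod-name>")))
# To actually apply it: subprocess.run(workaround_cmd(pod), check=True)
```

Note that `CONFIG SET` only changes the running server; as others report below, the setting is lost when the deployment restarts.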
Hello,
Thanks a lot @jgrantls, your information is very helpful! Below are my latest tests with this in mind:
• Workaround
root@awx-fake-task-5fb8abc4c5-5f4pk:/data# redis-cli -s /var/run/redis/redis.sock
redis /var/run/redis/redis.sock> info clients
# Clients
connected_clients:2227
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:40960
client_recent_max_output_buffer:0
blocked_clients:5
tracking_clients:0
clients_in_timeout_table:5
redis /var/run/redis/redis.sock> config get timeout
1) "timeout"
2) "0"
redis /var/run/redis/redis.sock> config set timeout 300
OK
redis /var/run/redis/redis.sock> config get timeout
1) "timeout"
2) "300"
redis /var/run/redis/redis.sock> info clients
# Clients
connected_clients:36
cluster_connections:0
maxclients:10000
client_recent_max_input_buffer:20559
client_recent_max_output_buffer:0
blocked_clients:5
tracking_clients:0
clients_in_timeout_table:5
redis /var/run/redis/redis.sock> config set timeout 0
OK
redis /var/run/redis/redis.sock> config get timeout
1) "timeout"
2) "0"
redis /var/run/redis/redis.sock>
Note: the purge happens immediately after the timeout is set to 300.
• Check management jobs: if I run the jobs manually they work, but they don't seem to affect the "connected_clients" value reported by Redis (or the purge doesn't work properly), because the value stays unchanged.
For the record, the schedules are broken for Cleanup Expired Sessions and Cleanup Expired OAuth 2 Tokens.
Error when I open the schedule page of either job:
TypeError: Cannot read properties of undefined (reading 'sort')
at oo (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:850657)
at Z9 (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:2495944)
at Ks (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:902957)
at Al (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:890198)
at Tl (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:890126)
at El (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:889989)
at yl (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:886955)
at https://awx-fake.local/static/js/main.5a8f0ae4.js:2:836449
at t.unstable_runWithPriority (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:932711)
at Vi (https://awx-fake.local/static/js/main.5a8f0ae4.js:2:836226)
If I edit the schedules, I can see that they are missing some parameters. The schedules don't work.
I tried a fresh install and noticed that the two broken schedules are present there too (so it is not related to the update/migration process).
Any updates on this issue? We are also facing a similar issue in our environments, with the same Redis error message.
Same issue. Fixed with config set timeout 300 and then running Cleanup Expired Sessions.
@danielabelski I believe the config set timeout 300 setting gets reset when we restart the deployment. Just curious, that was my thinking.
Same issue; updating the timeout also fixed it. This configuration needs to be persisted somehow.
Any ideas on how to get this configuration to persist? We have had to perform rollout restarts of the deployments to resolve the issue (which, from what I can see using the above info clients command, resets connected_clients once it has reached 10,000).
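One way the setting could survive restarts is for the Redis container to load a redis.conf that sets the idle timeout at startup; the `timeout` directive itself is standard Redis configuration. A minimal fragment (whether and how the awx-operator lets you mount such a file into the redis container is exactly the open question in this thread):

```conf
# redis.conf fragment: close connections idle for more than 300 seconds
# (the default of 0 means "never close idle connections")
timeout 300
```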
Same issue occurred on our platform.
awx-operator version: 2.5.1
Here is what I got when trying to get client info from Redis on one of our awx-task pods:
root@awx-task-574c888989-4lxg2:/data# redis-cli -s /var/run/redis/redis.sock
redis /var/run/redis/redis.sock> info clients
ERR max number of clients reached
redis /var/run/redis/redis.sock> info clients
Error: Broken pipe
We have 3 task pods. That may explain why some jobs were still running as intended, but the issue prevented project updates as well as workflow/job executions.
+1 same issue occurred
Maybe some redis.conf settings should be configurable in the AWX Operator (e.g. maxclients).
This issue occurred again on AWX 22.7.0
😞
The same fix from August is still working. The default timeout of 0 needs to be changed.
From the redis docs:
By default recent versions of Redis don't close the connection with the client if the client is idle for many seconds: the connection will remain open forever.
Leaving 10k clients connected forever seems to be unintended behavior as far as AWX is concerned, as it breaks the entire application.
https://github.com/ansible/awx/pull/15398 I'm adding a bit more debug information to hopefully track down how we are leaking the Redis connections...
Current theory is that it is related to:
2024-07-11T13:32:51.432519957Z Task was destroyed but it is pending!
2024-07-11T13:32:51.432519957Z task: <Task pending name='Task-247757' coro=<RedisConnection._read_data() done, defined at /var/lib/awx/venv/awx/lib64/python3.9/site-packages/aioredis/connection.py:180> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7fba77bf1700>()]> cb=[RedisConnection.__init__.<locals>.<lambda>() at /var/lib/awx/venv/awx/lib64/python3.9/site-packages/aioredis/connection.py:168]>
(seems to be a correlation between that message and the increase of redis connection but its inconclusive)
The PR names all of the asyncio tasks so we can track down the offending tasks more easily...
If anyone is willing to apply this patch to their system, I'm happy to produce images.
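For context, naming asyncio tasks is a one-argument change. A minimal sketch of the mechanism the PR relies on (Python 3.8+ `asyncio.create_task(name=...)`), not AWX's actual code:

```python
import asyncio

async def _read_data():
    # Stand-in for a long-lived reader loop like aioredis's
    # RedisConnection._read_data in the traceback above.
    await asyncio.sleep(0)

async def main():
    # A named task shows up with this name in "Task was destroyed but it
    # is pending!" warnings, instead of an anonymous "Task-247757".
    task = asyncio.create_task(_read_data(), name="redis-read-data")
    await task
    return task.get_name()

print(asyncio.run(main()))  # redis-read-data
```

With names in place, the destroyed-but-pending warnings identify which coroutine is being leaked rather than printing an opaque task number.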
Scanning through our code, the only place that still uses aioredis is the old channels_redis version we pin to; newer versions of channels_redis stopped using aioredis (it's a defunct project).
Upgrading channels_redis might help here: https://github.com/ansible/awx/pull/15329
Please confirm the following
(Issues about security should be reported to security@ansible.com instead.)
Bug Summary
Our environment:
We have recently been having issues with AWX. It seems random (maybe related to some sort of accumulation, or a burst, or something else). The web interface isn't working correctly (slow, some errors) and every project sync fails. Without the root cause, the only thing we can do is scale down the web/task deployments so the operator restarts them. Everything is fine after that.
We checked the servers' system metrics (CPU/mem/disk/net/etc.) and logs without any luck. No specific activity on the platform, and no high number of jobs running.
AWX version
22.1.0
Select the relevant components
Installation method
kubernetes
Modifications
no
Ansible version
2.14
Operating system
RHEL 8.6
Web browser
No response
Steps to reproduce
Didn't find a way to reproduce at will.
Expected results
The interface is smooth. Project sync works.
Actual results
The interface is slow to load and all project syncs fail (so no job with an outdated project cache can be executed).
Additional information
• Example of project sync error
Checked the Redis configuration (~20 clients connected against the 10K limit). Looking at the AWX metrics, I found some differences but I can't properly interpret them (I guess some kind of burst, but I don't really know).
• Metrics near platform issues
• Metrics with healthy platform