## what

Upgrade async deps.

Fixes https://issues.redhat.com/browse/THREESCALE-7864
Fixes https://github.com/3scale/apisonator/issues/308
## Performed tests

### How to check the memory leak

### Memory leak test results
As can be seen, the 2.12 image shows memory usage increasing over time, while the other images show stable memory usage.
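The "stable vs. increasing" judgment above can be made mechanical. Below is a hypothetical helper (not part of this PR) that reads a worker's resident set size with `ps` and flags growth when the late-run average RSS exceeds the early-run average by a chosen factor; the 20% threshold is an arbitrary assumption:

```python
import subprocess

def worker_rss_kib(pid: int) -> int:
    """Resident set size of a process in KiB, read via `ps` (Linux/macOS)."""
    out = subprocess.check_output(["ps", "-o", "rss=", "-p", str(pid)])
    return int(out.strip())

def looks_leaky(rss_samples, growth_factor=1.2):
    """Compare average RSS of the first and second halves of the run.

    Stable usage keeps the late/early ratio near 1; a leak drifts the
    second half well above the first. `growth_factor` sets the tolerance.
    """
    half = len(rss_samples) // 2
    early = sum(rss_samples[:half]) / half
    late = sum(rss_samples[half:]) / (len(rss_samples) - half)
    return late > early * growth_factor
```

In practice one would call `worker_rss_kib` on the worker's PID at a fixed interval (e.g. once a minute) while it processes jobs, then feed the collected samples to `looks_leaky`.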
### How to check the connection management issue
To verify the connection management issue, explained here and here, a socat relay server was used as a proxy to monitor how connections are managed.
1. Deploy 3scale as explained above with async mode enabled in backend.
2. Downscale the worker deployed in the cluster to 0 replicas using the APIManager CR.
3. Port-forward locally to the backend Redis service.
4. Run socat monitoring connections (`-d -d`) and proxying on port 7379.
5. Run the backend worker locally.
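socat is the tool actually used in step 4. Purely as an illustration of what such a relay observes, here is a toy single-connection relay in Python (hypothetical, not part of the test setup): it sits between a client and a backend, forwards one request and one reply, and records lifecycle events analogous to socat's `-d -d` notices.

```python
import socket
import threading

def echo_server(sock):
    """Stand-in for the backend: accept one client, echo one message."""
    conn, _ = sock.accept()
    conn.sendall(conn.recv(4096))
    conn.close()

def relay_once(listen_sock, target_addr, events):
    """Accept one client, forward one request/reply pair to the target,
    recording open/close events the way socat's -d -d notices do."""
    client, addr = listen_sock.accept()
    events.append(f"accepting connection from {addr[0]}")
    upstream = socket.create_connection(target_addr)
    upstream.sendall(client.recv(4096))
    client.sendall(upstream.recv(4096))
    upstream.close()
    client.close()
    events.append("socket is at EOF")

def demo():
    # Backend and relay both listen on ephemeral loopback ports.
    backend = socket.socket()
    backend.bind(("127.0.0.1", 0))
    backend.listen()
    relay_sock = socket.socket()
    relay_sock.bind(("127.0.0.1", 0))
    relay_sock.listen()

    events = []
    threading.Thread(target=echo_server, args=(backend,), daemon=True).start()
    t = threading.Thread(target=relay_once,
                         args=(relay_sock, backend.getsockname(), events),
                         daemon=True)
    t.start()

    client = socket.create_connection(relay_sock.getsockname())
    client.sendall(b"PING")
    reply = client.recv(4096)
    client.close()
    t.join(timeout=5)
    return reply, events
```

A client that reuses its connection would produce one "accepting connection" event for the whole run; the buggy behavior produces one per request.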
```
dir: '/var/run/3scale'
dir: '/home/eguzki/tmp/3scale-backend-worker' }
```
```shell
$ cd apisonator
$ CONFIG_FILE=./openshift/3scale_backend.conf \
  CONFIG_REDIS_ASYNC=1 \
  CONFIG_WORKERS_LOG_FILE=/dev/stdout \
  CONFIG_REDIS_PROXY="redis://127.0.0.1:7379" \
  CONFIG_QUEUES_MASTER_NAME="redis://127.0.0.1:7379" \
  RACK_ENV=production \
  bundle exec 3scale_backend_worker run
```
The number of connections should be stable: there should not be a continuous stream of logs about connections being created and dropped. When the dropping-connection issue occurs, socat shows clients dropping connections continuously, meaning connections are not being reused. Example of a socat log line showing a dropped connection:

```
2021/12/02 12:54:34 socat[861] N socket 1 (fd 6) is at EOF
```
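To quantify churn from a captured socat log, the lifecycle notices can simply be counted. The sketch below (not part of the PR) matches the `is at EOF` notice shown above; the `accepting connection` pattern is socat's usual notice for new clients and should be treated as an assumption. Many accept/EOF pairs during a short run means connections are being dropped rather than reused.

```python
import re

# socat -d -d writes notices like:
#   2021/12/02 12:54:34 socat[861] N socket 1 (fd 6) is at EOF
OPENED = re.compile(r"\bN accepting connection\b")
CLOSED = re.compile(r"\bis at EOF\b")

def connection_churn(log_lines):
    """Count connection-opened and connection-closed notices in a socat log."""
    opened = sum(1 for line in log_lines if OPENED.search(line))
    closed = sum(1 for line in log_lines if CLOSED.search(line))
    return opened, closed
```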