@tamalsaha Here's the test procedure:
Current Behavior
(1) Start up a deployment with at least 2 pods that can send back the host name (pod name) in the response body or a header. Add a preStop hook to the pods that waits 30 seconds before terminating, and set the termination grace period (terminationGracePeriodSeconds) to 30 seconds as well. (A minimal manifest sketch follows these steps.)
(2) Create an ingress serving the deployment, using sticky sessions (we use cookie mode).
(3) Verify that HAProxy shows both backends active, and that clients that send the cookie are stuck to one of the backends.
(4) Terminate one of the pods. Observe that it takes 30 seconds to terminate (while Kubernetes runs the preStop hook). However, note that the backend is removed from HAProxy as soon as the pod begins terminating; at that point, clients lose their sessions and are sent to the other pod.
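For reference, a minimal sketch of the kind of Deployment used in step (1); the names, image, and port are placeholders rather than our actual manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sticky-app            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sticky-app
  template:
    metadata:
      labels:
        app: sticky-app
    spec:
      terminationGracePeriodSeconds: 30      # matches the preStop wait
      containers:
      - name: web
        image: example/echo-hostname:latest  # placeholder image that echoes the pod name
        ports:
        - containerPort: 8080
        lifecycle:
          preStop:
            exec:
              # simplest form of the hook: just wait out the grace period
              command: ["/bin/sh", "-c", "sleep 30"]
```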
Correct Behavior
(1) Add the requested functionality to the ingress controller: watch terminating pods and put them into drain mode while they are terminating, rather than removing them.
(2) Terminate one of the pods. Observe that it still takes 30 seconds to terminate. However, note that during this time the pod is shown as 'draining' in the HAProxy status. Observe that no new requests are sent to the terminating pod, but clients already bound to it still go there while it is running.
Other Notes
In our use case, we use HAProxy in cookie session mode with the 'rewrite' option. Our application clears the cookie when a user logs out. When a pod starts terminating, it waits for user sessions to end. If a user logs out, we clear the cookie and they are subsequently sent to a newer pod. The old pod stays in the terminating state (and thus keeps serving bound clients) until it finishes terminating. Our preStop hook watches the number of active sessions in Tomcat and exits when that count reaches zero, or when a timeout occurs (about 12 hours). A shell sketch of that hook follows.
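Conceptually the hook looks something like this; the manager credentials, URL, and poll interval are placeholders for illustration, not our production script:

```sh
#!/bin/sh
# preStop hook sketch: block until active Tomcat sessions reach zero,
# or until a hard timeout (~12 h) expires. Endpoint and credentials
# below are placeholders.
TIMEOUT=$((12 * 60 * 60))
ELAPSED=0
while [ "$ELAPSED" -lt "$TIMEOUT" ]; do
  # Tomcat manager "list" output lines look like: /context:status:activeSessions:docBase
  SESSIONS=$(curl -s -u admin:secret http://localhost:8080/manager/text/list \
    | awk -F: '/^\// { total += $3 } END { print total + 0 }')
  [ "$SESSIONS" -eq 0 ] && exit 0
  sleep 30
  ELAPSED=$((ELAPSED + 30))
done
exit 0
```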
@tamalsaha are there any changes since last year that would enable drain mode?
I'm not seeing anything that has been added. If there is no direct support, would it be possible to accomplish this flow via http://cbonte.github.io/haproxy-dconv/1.8/configuration.html#5.2-agent-check? Roughly something like the sketch below.
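The backend name, addresses, and agent port here are made up, and I haven't checked whether voyager exposes these keywords; it's just to show the idea:

```
backend bk-sticky-app
    cookie SERVERID insert indirect nocache
    # agent-check polls a small TCP agent on each pod; if the agent replies
    # "drain\n", HAProxy stops sending *new* sessions to that server while
    # existing sticky sessions keep going to it.
    server pod-1 10.244.1.10:8080 cookie pod-1 check agent-check agent-port 9999 agent-inter 5s
    server pod-2 10.244.2.11:8080 cookie pod-2 check agent-check agent-port 9999 agent-inter 5s
```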
Hi @kfoozminus & @tamalsaha - we have the same requirement as @dcowden to support draining of terminating pods (java/tomcat - hurray!). I see this issue was added to the v10.1.0 milestone last week - is it still planned for the next release? If so, that would be 💯!
We have a legacy application (tomcat/java) which needs sticky sessions. When we deploy new versions of our application, we need to stop sending new connections to a server while continuing to send bound sessions to the old one. Please note: this is not about in-flight requests; we need the active Tomcat sessions to expire, which normally takes a few hours.
This is possible using the HAProxy drain command, which keeps sending bound connections to the old server but sends new ones elsewhere, for example:
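The backend/server names and socket path here are just examples (it assumes a stats socket configured with admin level):

```
# Put one server into drain: existing sticky sessions keep hitting it,
# new sessions go to the remaining servers.
echo "set server bk-sticky-app/pod-1 state drain" | socat stdio /var/run/haproxy.sock

# Later, when the pod is gone or a new one replaces it, bring it back.
echo "set server bk-sticky-app/pod-1 state ready" | socat stdio /var/run/haproxy.sock
```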
When Kubernetes terminates a pod, it enters the Terminating state. When sticky sessions are enabled, the desired behavior is usually to put the associated backend server into drain mode. This is nice because it lets the application's preStop hook decide when it is ready to stop; when the preStop hook finishes, the pod dies and the bound sessions end.
How could we accomplish this flow using Voyager? We are currently using jcmoraisjr/haproxy-ingress because it meets this requirement. (In fact, the developers there were kind enough to add this feature. I'm sure their implementation could be ported, presuming no licensing issues exist.)