Open rtyley opened 1 month ago
Thanks for raising this issue.
> In this case, had `ResumeAlarmNotifications` been enabled immediately before `WaitForStabilization` …
I think trying to run `ResumeAlarmNotifications` earlier in the process[^1] is a good idea. I agree that it would've helped to mitigate the impact of this particular incident. It would also shorten the window where apps cannot scale during successful deployments[^2], which might help with performance more generally.
In order for this to work I think it would be desirable to replace the final `WaitForStabilization` task with a new task (`WaitForOldInstancesToTerminate`, or similar).
The current task checks for an expected number of instances. This is an acceptable check at the moment because Riff-Raff is essentially holding a lock on the desired capacity setting by blocking scaling operations (i.e. we know that the number won't change).
Once we re-enable scaling, the desired number of instances becomes a moving target, so I think it would be better for Riff-Raff to check that there are 0 instances with the `Magenta=Terminate` tag still running in the ASG. This would allow us to be sure that all instances are now running the build currently being deployed[^3], which means it is safe/correct to mark the deployment as successful.
[^1]: I think we could probably re-enable scaling after `CullInstancesWithTerminationTag` if we implemented some of the other changes described in this comment.
[^2]: It's a shame that we can't deploy and scale up at the same time, but I think making that possible requires a major architecture change in Riff-Raff or in the way that we set up EC2 apps (e.g. we could do this if all apps had 2 separate ASGs and used a blue/green deployment strategy).
[^3]: Some of these might have been launched by an ASG scaling action, rather than Riff-Raff's actions, but it doesn't really matter at this point. We want all requests to go to instances running the new build, as we are already confident that the new build passes the healthcheck before we start terminating the old instances.
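The proposed `WaitForOldInstancesToTerminate` condition could look something like the sketch below. Riff-Raff itself is Scala; this Python model, with its hypothetical data shape and function names, is just an illustration of the "zero instances still tagged for termination" check rather than the real Riff-Raff or AWS API.

```python
# Sketch of the proposed WaitForOldInstancesToTerminate check.
# The instance data shape here is an assumption for illustration.

def old_instances_remaining(instances):
    """Count non-terminated instances still carrying the Magenta=Terminate tag."""
    return sum(
        1
        for i in instances
        if i["state"] != "terminated" and i["tags"].get("Magenta") == "Terminate"
    )

def deploy_is_stable(instances):
    # Instead of comparing against a desired-capacity number (a moving
    # target once scaling is re-enabled), we only require that every
    # instance tagged for termination is gone.
    return old_instances_remaining(instances) == 0
```

In real code the deploy step would poll this condition against `DescribeAutoScalingGroups` output until it holds, or until a timeout.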
> I think trying to run `ResumeAlarmNotifications` earlier in the process is a good idea.
Thanks! I've just opened draft PR https://github.com/guardian/riff-raff/pull/1345 to do this - your excellent feedback has given me some more things to think about!
I think a lot of our manual orchestration of ASGs during a deployment can be swapped for instance refresh today.
Instance refresh is different from our current process:
- We'd likely want to keep our current pre-flight check requiring the number of healthy instances in the load balancer to match the ASG.
I think blue/green deployment, as @jacobwinch describes, is the end-goal to seek. However, I wonder whether adopting instance refresh solves the issues witnessed while requiring fewer changes than those needed to support (and migrate to) multiple ASGs/LBs and DNS swapping during a deployment.
Lastly, instance refresh is an AWS native capability, meaning there's less code for us to maintain.
[^1]: I'd be curious to see how checkpointing impacts deployment times.
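To make the suggestion concrete, the sketch below builds the request for the AWS `StartInstanceRefresh` API as a plain dict; in real code it would be passed to `boto3.client("autoscaling").start_instance_refresh(**params)`. The ASG name and tuning values are illustrative assumptions, not recommendations.

```python
# Sketch: replacing manual ASG orchestration with an AWS-native
# rolling instance refresh. Values here are illustrative only.

def instance_refresh_params(asg_name, min_healthy_pct=90, warmup_seconds=120):
    return {
        "AutoScalingGroupName": asg_name,
        "Strategy": "Rolling",
        "Preferences": {
            # Keep most of the fleet serving traffic while old instances
            # are replaced, rather than culling them all at once.
            "MinHealthyPercentage": min_healthy_pct,
            "InstanceWarmup": warmup_seconds,
            # Auto rollback is only meaningful if the launch template
            # fully dictates what software runs on the instance.
            "AutoRollback": True,
        },
    }
```

Checkpointing (the `CheckpointPercentages`/`CheckpointDelay` preferences) could be layered on top, which is where the deployment-time question in the footnote comes in.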
> our manual orchestration of ASGs during a deployment can be swapped for instance refresh today.
I'd not heard of Instance Refresh (tho' apparently it was introduced in 2020!), but having read about it, it does sound good!
It looks like Instance Refresh is a way of rolling out launch template configuration updates, which means it can roll out new EC2 instances using whatever new AMI ID and User Data are in the new launch template. This suggests we might want to move away from the current deployment model of just downloading whatever artifact is on S3 at a particular path (e.g. `s3://ophan-dist/ophan/PROD/tracker/tracker_0.1.0-SNAPSHOT_all.deb`, where that version number `0.1.0` never changes), and instead make the User Data specific to downloading a particular version of the artifact from S3 (perhaps directly from the 'riffraff-artifact' bucket?). This would make the rollback feature of Instance Refresh effective, as there's no way that Instance Refresh rollback can work unless the launch template fully dictates what software runs on the instance.
> I think blue/green deployment, as @jacobwinch describes, is the end-goal to seek.
Fair enough - so it sounds like we're talking about something like this?
**autoscaling deploys better**
Since https://github.com/guardian/riff-raff/pull/83 back in April 2013, Riff Raff autoscaling deploys have always disabled ASG scaling alarms at the start of a deploy (`SuspendAlarmNotifications`), and only re-enabled them at the end of the deploy, once deployment has successfully completed: https://github.com/guardian/riff-raff/blob/60eb09f08db8806a42e1df2e2d666fc1004a513d/magenta-lib/src/main/scala/magenta/deployment_type/AutoScaling.scala#L170-L205
There are good reasons for this, but it leads to two problems:
For apps that see sudden, unpredictable bursts of traffic and are deployed many times a day, this adds up to significant windows of time in which scaling is disabled; eventually a deploy will coincide with a traffic spike that the app is unable to respond to.
Ophan Tracker outage - 22nd May 2024
full incident summary
16:04 - Ophan PR #6109, a minor change to the Ophan Dashboard, is merged. This will trigger a deploy of all Ophan apps, including the Ophan Tracker.
16:11 - App Notification for major news story *Rishi Sunak will call general election for July this afternoon in surprise move, senior sources tell the Guardian* is sent out: ![image](https://github.com/guardian/riff-raff/assets/52038/cf5a4c08-ed5e-493a-ab6b-4210b1a547bf)
16:12:02 - Riff Raff deploy disables auto-scaling alarms, with the size of the ASG set to 3 instances
16:13:32 - Ophan Tracker's scale-up alarm enters ALARM status. The Tracker ASG would normally scale up after 2 consecutive ALARM states 1 minute apart, but ASG scale-up has been disabled by the deploy. ![image](https://github.com/guardian/riff-raff/assets/52038/bb2b114a-c2ba-4df3-a036-249ef54e6bf3)
16:14:26 - Riff Raff deploy culls the 3 old instances, taking the ASG size back to 3 instances - the cluster is now very under-scaled for the spike in traffic
16:14:37 - Riff Raff deploy starts the final `WaitForStabilization`, which is the last step before re-enabling alarms. Due to the servers being so overloaded, they never stabilise. The step has a 15 minute timeout.
16:29:42 - The deploy finally fails as `WaitForStabilization` times out, and the alarms are left disabled.
17:19:30 – Tracker ASG is manually scaled up to 6 instances by the Ophan team
17:23:12 – Tracker ASG stops terminating unhealthy instances - the outage has lasted just over 1 hour
17:30:41 - Alarms are finally re-enabled by the Ophan team performing a new deploy
In this case, had `ResumeAlarmNotifications` been enabled immediately before `WaitForStabilization`, the deploy would still have failed, but the outage would probably have ended within a minute or two of 16:14, giving a 2-minute outage rather than a 1-hour outage.
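The reordering being proposed can be sketched as two task sequences. The task names follow those used in this thread, but the exact step lists are an approximation of Riff-Raff's real task graph, not its actual code.

```python
# Current ordering: alarms stay suspended until the very last step
# succeeds, so a failed final WaitForStabilization leaves scaling disabled.
CURRENT_ORDER = [
    "SuspendAlarmNotifications",
    "DoubleSize",
    "WaitForStabilization",
    "CullInstancesWithTerminationTag",
    "WaitForStabilization",
    "ResumeAlarmNotifications",
]

# Proposed ordering: re-enable scaling before the final wait, and replace
# the instance-count check with a check that all tagged instances are gone
# (the final task name here is the hypothetical one suggested above).
PROPOSED_ORDER = [
    "SuspendAlarmNotifications",
    "DoubleSize",
    "WaitForStabilization",
    "CullInstancesWithTerminationTag",
    "ResumeAlarmNotifications",
    "WaitForOldInstancesToTerminate",
]
```

With the proposed ordering, a deploy that fails its final wait still leaves the ASG free to scale, bounding the outage window described in the timeline.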