guardian / riff-raff

The Guardian's deployment platform

Apps cannot auto-scale until an `autoscaling` deploy has successfully completed #1342

rtyley commented 1 month ago

Since https://github.com/guardian/riff-raff/pull/83 back in April 2013, Riff-Raff autoscaling deploys have always disabled ASG scaling alarms at the start of a deploy (SuspendAlarmNotifications), and only re-enabled them at the end of the deploy, once it has completed successfully:

https://github.com/guardian/riff-raff/blob/60eb09f08db8806a42e1df2e2d666fc1004a513d/magenta-lib/src/main/scala/magenta/deployment_type/AutoScaling.scala#L170-L205
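For context, here is a simplified, illustrative sketch of that ordering. The task names appear in this thread and in the linked file, but the `Task` trait is a stand-in, and the real list contains more steps than shown here:

```scala
// Illustrative sketch only: a simplified view of riff-raff's autoscaling
// task ordering, as discussed in this issue. The real list lives in
// AutoScaling.scala (linked above) and contains more steps than shown here.
sealed trait Task
case object SuspendAlarmNotifications extends Task        // scaling disabled from here on...
case object DoubleSize extends Task                       // bring up new instances alongside the old
case object WaitForStabilization extends Task             // wait for the ASG to settle
case object CullInstancesWithTerminationTag extends Task  // terminate the old instances
case object ResumeAlarmNotifications extends Task         // ...scaling re-enabled only here

val currentOrder: List[Task] = List(
  SuspendAlarmNotifications,
  DoubleSize,
  WaitForStabilization,
  CullInstancesWithTerminationTag,
  WaitForStabilization,
  ResumeAlarmNotifications
)
```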

There are good reasons for this, but it leads to two problems:

- For apps that see sudden, unpredictable bursts of traffic and that are deployed many times a day, these windows add up: eventually a deploy will coincide with a traffic spike that the app is unable to scale to meet.
- If a deploy stalls or fails before completing, scaling stays suspended well beyond the normal deploy window, which can turn a brief incident into a prolonged outage, as in the example below.

Ophan Tracker outage, 22nd May 2024 (full incident summary)

In this case, had ResumeAlarmNotifications been run immediately before WaitForStabilization, the deploy would have failed, but the outage would probably have ended within a minute or two of 16:14: a roughly 2-minute outage rather than a 1-hour one.
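Reusing the stand-in `Task` definitions from the sketch above, the reordering being suggested would look roughly like this (hypothetical, not what riff-raff does today):

```scala
val proposedOrder: List[Task] = List(
  SuspendAlarmNotifications,
  DoubleSize,
  WaitForStabilization,
  CullInstancesWithTerminationTag,
  ResumeAlarmNotifications, // moved earlier: scaling is re-enabled before the final wait
  WaitForStabilization      // a stuck final stabilization no longer blocks scaling
)
```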

jacobwinch commented 1 month ago

Thanks for raising this issue.

> In this case, had ResumeAlarmNotifications been run immediately before WaitForStabilization

I think trying to run ResumeAlarmNotifications earlier in the process[^1] is a good idea. I agree that it would've helped to mitigate the impact of this particular incident. It would also shorten the window where apps cannot scale during successful deployments[^2], which might help with performance more generally.

In order for this to work I think it would be desirable to replace the final WaitForStabilization task with a new task (WaitForOldInstancesToTerminate, or similar).

The current task checks for an expected number of instances. That check is acceptable today because Riff-Raff is essentially holding a lock on the desired capacity setting by blocking scaling operations (i.e. we know that the number won't change).

Once we re-enable scaling, the desired number of instances becomes a moving target, so I think it would be better for Riff-Raff to check that there are 0 instances with the Magenta=Terminate tag still running in the ASG. That would allow us to be sure that all instances are now running the build currently being deployed[^3], which means it is safe and correct to mark the deployment as successful.
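A rough sketch of what the core of that check might look like, using the AWS SDK v2 clients; the function name and shape are assumptions for illustration, not riff-raff's actual code (the real task would also need to poll and time out):

```scala
import scala.jdk.CollectionConverters._
import software.amazon.awssdk.services.autoscaling.AutoScalingClient
import software.amazon.awssdk.services.autoscaling.model.DescribeAutoScalingGroupsRequest
import software.amazon.awssdk.services.ec2.Ec2Client
import software.amazon.awssdk.services.ec2.model.{DescribeInstancesRequest, Filter}

// Hypothetical core of a WaitForOldInstancesToTerminate task: count instances
// still in the ASG that carry the Magenta=Terminate tag. The task would poll
// until this returns 0 (or a timeout is reached).
def oldInstancesRemaining(asg: AutoScalingClient, ec2: Ec2Client, asgName: String): Int = {
  val group = asg.describeAutoScalingGroups(
    DescribeAutoScalingGroupsRequest.builder().autoScalingGroupNames(asgName).build()
  ).autoScalingGroups().asScala.head

  val instanceIds = group.instances().asScala.map(_.instanceId())
  if (instanceIds.isEmpty) 0
  else ec2.describeInstances(
    DescribeInstancesRequest.builder()
      .instanceIds(instanceIds.asJava)
      .filters(Filter.builder().name("tag:Magenta").values("Terminate").build())
      .build()
  ).reservations().asScala.flatMap(_.instances().asScala).size
}
```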

[^1]: I think we could probably re-enable scaling after CullInstancesWithTerminationTag if we implemented some of the other changes described in this comment.

[^2]: It's a shame that we can't deploy and scale up at the same time, but I think making that possible requires a major architecture change in Riff-Raff or in the way that we set up EC2 apps (e.g. we could do this if all apps had 2 separate ASGs and used a blue/green deployment strategy).

[^3]: Some of these might have been launched by an ASG scaling action, rather than Riff-Raff's actions, but it doesn't really matter at this point. We want all requests to go to instances running the new build, as we are already confident that the new build passes the healthcheck before we start terminating the old instances.

rtyley commented 1 month ago

> I think trying to run ResumeAlarmNotifications earlier in the process is a good idea.

Thanks! I've just opened draft PR https://github.com/guardian/riff-raff/pull/1345 to do this. The excellent feedback in your comment has given me some more things to think about!

akash1810 commented 1 month ago

I think a lot of our manual orchestration of ASGs during a deployment can be swapped for instance refresh today.

Instance refresh differs from our current process in a few ways.

We'd likely want to keep our current pre-flight check requiring the number of healthy instances in the load balancer to match the ASG.

I think blue/green deployment, as @jacobwinch describes, is the end-goal to aim for. However, I wonder whether adopting instance refresh solves the issues we've seen, while requiring far fewer changes than those needed to support (and migrate to) multiple ASGs/load balancers and DNS swapping during a deployment.

Lastly, instance refresh is an AWS-native capability, meaning there's less code for us to maintain.
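As a hedged sketch of what that could look like from riff-raff's side (every preference value below is an illustrative assumption, not a tested configuration):

```scala
import software.amazon.awssdk.services.autoscaling.AutoScalingClient
import software.amazon.awssdk.services.autoscaling.model.{RefreshPreferences, StartInstanceRefreshRequest}

// Sketch: ask AWS to orchestrate the rollout, rather than riff-raff driving
// it task by task. Preference values here are placeholders for discussion.
def startRefresh(asg: AutoScalingClient, asgName: String): String =
  asg.startInstanceRefresh(
    StartInstanceRefreshRequest.builder()
      .autoScalingGroupName(asgName)
      .preferences(RefreshPreferences.builder()
        .minHealthyPercentage(100) // never drop below the current desired capacity
        .maxHealthyPercentage(200) // allow temporary doubling, like today's scale-out step
        .instanceWarmup(300)       // seconds before a new instance counts as ready
        .autoRollback(true)        // revert to the previous template version on failure
        .build())
      .build()
  ).instanceRefreshId() // poll DescribeInstanceRefreshes with this id to track progress
```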

[^1]: I'd be curious to see how checkpointing impacts deployment times.

rtyley commented 1 month ago

> our manual orchestration of ASGs during a deployment can be swapped for instance refresh today.

I'd not heard of Instance Refresh (though apparently it was introduced in 2020!), but having read about it, it does sound good!

It looks like Instance Refresh is a way of rolling out launch template configuration updates, which means it can roll out new EC2 instances using whatever new AMI id and User Data are in the new launch template. This suggests we'd want to move away from the current deployment model of simply downloading whatever artifact is on S3 at a fixed path (e.g. s3://ophan-dist/ophan/PROD/tracker/tracker_0.1.0-SNAPSHOT_all.deb, where the version number 0.1.0 never changes), and instead make the User Data download a specific version of the artifact from S3 (perhaps directly from the 'riffraff-artifact' bucket?). That would make Instance Refresh's rollback feature effective: rollback can't work unless the launch template fully dictates what software runs on the instance.
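To make that concrete, here's a hypothetical sketch of cutting a new launch template version whose User Data pins an exact build. The bucket layout, script, and helper name are invented for illustration; only the 'riffraff-artifact' bucket name comes from the comment above:

```scala
import java.nio.charset.StandardCharsets.UTF_8
import java.util.Base64
import software.amazon.awssdk.services.ec2.Ec2Client
import software.amazon.awssdk.services.ec2.model.{CreateLaunchTemplateVersionRequest, RequestLaunchTemplateData}

// Hypothetical: bake the exact build into the launch template, so that an
// Instance Refresh rollback genuinely restores the previous software version.
def pinBuildInLaunchTemplate(ec2: Ec2Client, templateId: String, amiId: String, buildId: String): Unit = {
  val userData = // illustrative script and S3 layout, not a real convention
    s"""#!/bin/bash -e
       |aws s3 cp s3://riffraff-artifact/ophan/tracker/$buildId/tracker.deb /tmp/tracker.deb
       |dpkg -i /tmp/tracker.deb
       |""".stripMargin

  ec2.createLaunchTemplateVersion(
    CreateLaunchTemplateVersionRequest.builder()
      .launchTemplateId(templateId)
      .sourceVersion("$Latest") // base the new version on the latest one
      .launchTemplateData(RequestLaunchTemplateData.builder()
        .imageId(amiId)
        .userData(Base64.getEncoder.encodeToString(userData.getBytes(UTF_8)))
        .build())
      .build())
}
```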

> I think blue/green deployment, as @jacobwinch describes, is the end-goal to aim for.

Fair enough. So it sounds like we're talking about something like this?

  1. Short-term: do something like https://github.com/guardian/riff-raff/pull/1345 to make current autoscaling deploys better.
  2. Mid-term: adopt Instance Refresh, with launch templates that fully dictate what software runs on each instance.
  3. Long-term: blue/green deployment.