AWS Deployment Strategies: Blue/Green, Canary, and Rolling Releases
Configure deployment strategies (for example, blue/green, canary, rolling) for application releases
This guide covers the essential methods for deploying application updates on AWS, focusing on balancing availability, cost, and risk mitigation. These concepts are core to the AWS Certified Developer - Associate (DVA-C02) exam.
Learning Objectives
After studying this guide, you should be able to:
- Differentiate between Blue/Green, Canary, Rolling, and Immutable deployment strategies.
- Evaluate the trade-offs between deployment speed, infrastructure cost, and downtime.
- Configure traffic splitting for canary testing using AWS services like Elastic Beanstalk and AppConfig.
- Identify the correct rollback mechanism for each deployment type.
Key Terms & Glossary
- Blue/Green Deployment: A strategy that involves two identical environments. "Blue" is the current live version, and "Green" is the new version. Traffic is cut over once Green is verified.
- Canary Deployment: A technique where a small percentage of traffic is directed to the new version (the "canary") before rolling it out to the entire fleet.
- Rolling Update: An incremental deployment where instances are replaced or updated in small batches rather than all at once.
- Immutable Deployment: A strategy where new instances are launched in a fresh Auto Scaling Group rather than updating existing ones in place.
- Traffic Splitting: The process of dividing incoming requests between two or more versions of an application, often used for A/B testing or canary releases.
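The traffic-splitting idea can be sketched as a weighted router. This is a minimal illustration of the concept, not an AWS API; the version names and user IDs are hypothetical. Hashing the user ID (rather than picking randomly per request) keeps each user pinned to one version, which is how sticky traffic splitting generally behaves.

```python
import hashlib

def route(user_id: str, canary_percent: int) -> str:
    """Deterministically route a user to 'v2' (canary) or 'v1' (stable).

    Hashing the user ID keeps each user pinned to one version,
    avoiding flip-flopping between releases mid-session.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2" if bucket < canary_percent else "v1"

# With a 10% canary, roughly 1 in 10 users sees v2.
versions = [route(f"user-{i}", 10) for i in range(10_000)]
share_v2 = versions.count("v2") / len(versions)
```

Because the routing is deterministic, the same user always lands on the same version for a given split, which matters for A/B testing as well as canary releases.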
The "Big Idea"
In modern DevOps, the goal is to achieve Continuous Delivery without impacting the end-user experience. Deployment strategies are tools for managing the "blast radius" of a new release. By choosing the right strategy, developers can ensure that if a new version contains a bug, it affects as few users as possible and can be reverted almost instantly.
Formula / Concept Box
| Strategy | Downtime | Infrastructure Cost | Rollback Speed | Best For |
|---|---|---|---|---|
| All-at-Once | High | Low | Slow | Dev/Test environments |
| Rolling | Minimal | Low | Slow | Large fleets where cost is a concern |
| Rolling with Additional Batch | Zero | Medium | Medium | Production apps with high availability needs |
| Blue/Green | Zero | High (Temporary) | Instant | Critical production systems |
| Immutable | Zero | High (Temporary) | Fast | State-sensitive applications |
Hierarchical Outline
- I. In-Place Deployment Strategies
- All-at-Once: Replaces code on all instances simultaneously; requires downtime.
- Rolling: Updates instances in batches; capacity is reduced during the process.
- Rolling with Additional Batch: Adds new instances first to maintain 100% capacity during the update.
- II. Side-by-Side Deployment Strategies
- Blue/Green: Full environment duplication; switch via DNS or Load Balancer.
- Immutable: New Auto Scaling Group created; traffic shifts once health checks pass.
- III. Traffic Shifting & Monitoring
- Canary/Traffic Splitting: Incremental shifting (e.g., 10% -> 20% -> 100%).
- Health Checks: Mandatory automated tests that determine if a deployment should continue or roll back.
Visual Anchors
Blue/Green Transition Flow
Canary Traffic Distribution
\begin{tikzpicture}
  % Load Balancer
  \draw[thick] (0,2) rectangle (2,3) node[midway] {ALB / Router};
  % Traffic flows
  \draw[->, thick] (1,2) -- (1,0) node[below] {90\% Traffic (Stable v1)};
  \draw[->, thick, dashed] (2,2.5) -- (4,1) node[right] {10\% Traffic (Canary v2)};
  % Environment boxes
  \draw (0,-1) rectangle (2,0);
  \draw (3.5,0) rectangle (5.5,1);
  % Labels
  \node at (1,-0.5) {Production Fleet};
  \node at (4.5,0.5) {Canary Instance};
\end{tikzpicture}
Definition-Example Pairs
- Strategy: Rolling with Additional Batch
- Definition: A deployment where AWS adds a temporary set of instances to handle traffic while the existing instances are upgraded in batches. This ensures no loss of capacity.
- Example: An e-commerce site running on 10 EC2 instances needs an update. AWS launches 2 new instances with v2 code, then updates 2 old instances, keeping a minimum of 10 active at all times.
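The capacity guarantee in this example can be checked with a small simulation. This is an illustrative sketch of the batching arithmetic, not Elastic Beanstalk's actual scheduler; it assumes the batch size evenly divides the fleet size.

```python
def rolling_with_batch(fleet_size: int, batch: int):
    """Simulate Rolling with Additional Batch, returning the minimum
    healthy capacity observed and the final fleet size.

    Assumes batch evenly divides fleet_size.
    """
    old = fleet_size          # v1 instances still serving
    new = 0                   # v2 instances serving
    min_capacity = old + new

    # Step 1: launch a temporary extra batch of v2 instances first.
    new += batch
    min_capacity = min(min_capacity, old + new)

    # Step 2: replace the old fleet batch by batch.
    while old > 0:
        old -= batch          # take a v1 batch out of service
        min_capacity = min(min_capacity, old + new)
        new += batch          # its v2 replacement comes online

    # Step 3: terminate the temporary batch, back to the target size.
    new -= batch
    return min_capacity, new
```

For the 10-instance example with batches of 2, capacity never dips below 10: the temporary batch absorbs the loss of each v1 batch being replaced.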
- Strategy: Immutable Deployment
- Definition: Instead of modifying existing instances, a completely new set of instances (a new Auto Scaling Group) is launched. Once healthy, traffic is moved and the old group is deleted.
- Example: A financial application where server state must be clean. A new ASG is spun up with the latest AMI; if any test fails, the old ASG remains untouched and the new one is simply deleted.
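The immutable flow above can be sketched as a swap between two Auto Scaling Groups. The `healthy` callback here is a hypothetical stand-in for real ELB/ASG health checks, and the instance naming is purely illustrative.

```python
def immutable_deploy(current_asg: list, new_ami: str, healthy) -> list:
    """Launch a fresh ASG from new_ami; swap only if every new
    instance passes health checks. On failure, the new ASG is
    discarded and the current one is left untouched."""
    new_asg = [f"{new_ami}-instance-{i}" for i in range(len(current_asg))]
    if all(healthy(inst) for inst in new_asg):
        # Traffic shifts to the new group; the old group is terminated.
        return new_asg
    # Rollback: simply discard the new group.
    return current_asg

prod = ["ami-v1-instance-0", "ami-v1-instance-1"]
result = immutable_deploy(prod, "ami-v2", healthy=lambda inst: True)
failed = immutable_deploy(prod, "ami-v2", healthy=lambda inst: False)
```

The key property is that rollback is just deletion of the new group; the serving fleet is never mutated, which is why this strategy suits state-sensitive applications.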
Worked Examples
Scenario 1: Cost-Optimized Update
Problem: A startup needs to update their internal reporting tool. They want zero downtime but cannot afford to double their EC2 costs, even for an hour.
Solution: Use a Rolling Update. While this temporarily reduces capacity during the update (some instances go offline to be patched), it avoids the cost of provisioning an entire second environment, as Blue/Green would require.
Scenario 2: High-Stakes Feature Launch
Problem: A social media app is launching a new algorithm. They are worried about how it will perform under real load and want to test it on a subset of users first.
Solution: Use Traffic Splitting (Canary). Configure AWS Elastic Beanstalk or AppConfig to send 5% of traffic to the new version. Monitor CloudWatch logs for errors. If the error rate stays low for 30 minutes, increase traffic to 25%, then 50%, and finally 100%.
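The staged ramp in Scenario 2 can be sketched as a promotion loop that only advances when the observed error rate stays under a threshold. The stages, threshold, and `error_rate` callback are illustrative stand-ins (in practice a CloudWatch alarm would provide this signal), not an AppConfig or Beanstalk API.

```python
def canary_ramp(stages, error_rate, threshold=0.01):
    """Walk through canary traffic stages (e.g. 5 -> 25 -> 50 -> 100),
    rolling back to 0% if any stage breaches the error threshold.
    Returns the final traffic percentage sent to the new version."""
    current = 0
    for pct in stages:
        current = pct
        if error_rate(pct) > threshold:   # e.g. a CloudWatch alarm fires
            return 0                      # shift all traffic back to v1
    return current

ok = canary_ramp([5, 25, 50, 100], error_rate=lambda pct: 0.001)
bad = canary_ramp([5, 25, 50, 100], error_rate=lambda pct: 0.05)
```

A healthy release walks all the way to 100%, while any stage that trips the threshold sends all traffic back to the stable version, which is the "instant rollback" property that makes canaries attractive for high-stakes launches.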
Checkpoint Questions
- Which deployment strategy involves the highest temporary infrastructure cost?
- In a Rolling deployment, what happens if the first batch of instances fails its health check?
- How does "Rolling with Additional Batch" differ from a standard "Rolling" update in terms of capacity?
- What AWS service feature is typically used to perform a Blue/Green deployment at the DNS level?
- True or False: An "All-at-Once" deployment is recommended for production environments requiring high availability.
[!TIP] For the DVA-C02 exam, remember that Lambda Aliases are the primary way to implement Canary deployments for serverless functions, while Elastic Beanstalk provides built-in settings for all the strategies mentioned above.