Optimizing Global Architectures with AWS Edge Services
Design a solution that incorporates edge network services to optimize user performance and traffic management for global architectures
This study guide focuses on designing high-performance, global architectures using AWS edge services, primarily Amazon CloudFront and AWS Global Accelerator, as required for the AWS Certified Advanced Networking Specialty (ANS-C01).
Learning Objectives
After studying this guide, you should be able to:
- Evaluate global traffic requirements to select between content distribution (CloudFront) and network path optimization (Global Accelerator).
- Design architectures that integrate edge services with Elastic Load Balancing (ELB) and Amazon API Gateway.
- Implement caching strategies to reduce origin load and minimize latency.
- Analyze the impact of Anycast IP addressing on global traffic management.
Key Terms & Glossary
- Edge Location: A site where CloudFront caches copies of your content closer to users, reducing latency.
- Anycast IP: A network addressing and routing methodology in which a single IP address is shared by multiple devices in different locations; used by Global Accelerator to route users to the nearest entry point.
- Origin: The source of truth for your content (e.g., S3 bucket, EC2 instance, or ELB) that CloudFront pulls from when a cache miss occurs.
- TTL (Time to Live): The duration for which a record or cached object remains valid before it must be refreshed from the origin.
- Regional Edge Cache: Larger caches located between CloudFront edge locations and your origin to further reduce origin load.
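The TTL term above can be sketched as a minimal in-memory cache. This is an illustration of the expiry mechanic only, not how CloudFront is implemented; `TTLCache` and `origin` are made-up names for the sketch:

```python
import time

class TTLCache:
    """Minimal TTL cache: entries expire after ttl_seconds, forcing an origin fetch."""
    def __init__(self, ttl_seconds, fetch_from_origin):
        self.ttl = ttl_seconds
        self.fetch = fetch_from_origin   # called on every cache miss
        self.store = {}                  # key -> (value, expiry_timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry and time.monotonic() < entry[1]:
            return entry[0], "hit"       # still within TTL: serve from cache
        value = self.fetch(key)          # expired or absent: go back to the origin
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value, "miss"

origin_requests = []
def origin(key):
    origin_requests.append(key)          # track how often the origin is hit
    return f"content-for-{key}"

cache = TTLCache(ttl_seconds=60, fetch_from_origin=origin)
print(cache.get("/logo.png"))   # first request: miss, origin is contacted
print(cache.get("/logo.png"))   # within TTL: hit, origin is NOT contacted again
print(len(origin_requests))     # 1
```

A longer TTL raises the hit ratio and cuts origin load, at the cost of serving staler content.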
The "Big Idea"
[!IMPORTANT] The core objective of edge networking is to bypass as much of the public internet as possible. By using AWS's private global fiber network, you reduce "jitter," packet loss, and latency, providing a consistent experience for users regardless of their geographic distance from your primary data center.
Formula / Concept Box
| Concept | Layer | Primary Mechanism | Best For... |
|---|---|---|---|
| Amazon CloudFront | Layer 7 | Caching (HTTP/HTTPS) | Static/Dynamic web content, video streaming |
| AWS Global Accelerator | Layer 4 | Anycast / Path Optimization | TCP/UDP, gaming, VoIP, non-HTTP traffic |
| Route 53 (Latency) | DNS | DNS Resolution | Directing users to specific regional endpoints via DNS |
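The table above reduces to a simple decision rule. A sketch of that rule for study purposes; `pick_edge_service` is an illustrative helper, not an AWS API:

```python
def pick_edge_service(protocol: str) -> str:
    """Illustrative decision rule distilled from the comparison table."""
    if protocol.upper() in ("HTTP", "HTTPS"):
        # Layer 7 traffic: CloudFront adds caching, WAF integration, and edge logic
        return "Amazon CloudFront"
    # TCP/UDP custom protocols, gaming, VoIP: Layer 4 path optimization
    return "AWS Global Accelerator"

print(pick_edge_service("HTTPS"))  # Amazon CloudFront
print(pick_edge_service("UDP"))    # AWS Global Accelerator
```

Real designs also weigh caching needs, static IP requirements, and WAF integration, but protocol is the first gate.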
Hierarchical Outline
- Content Delivery Networks (Amazon CloudFront)
- Caching Strategy: Using TTLs and Cache Keys to optimize hit ratios.
- Security at the Edge: Integration with AWS WAF and Shield.
- Dynamic Content: Using Lambda@Edge or CloudFront Functions for request/response manipulation.
- Global Traffic Management (AWS Global Accelerator)
- Static Anycast IPs: Providing two static IPs that never change, simplifying firewall allow-lists.
- Traffic Dial: Controlling the percentage of traffic sent to specific AWS regions.
- Health Checking: Automatic failover between regions, typically within tens of seconds of an endpoint failing health checks.
- Integration Patterns
- Edge + ELB: Protecting internal ALBs by restricting them (e.g., via the CloudFront managed prefix list or a secret custom origin header) so they accept traffic only from CloudFront.
- Multi-Region Failover: Combining Route 53 health checks with Global Accelerator for high availability.
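The request/response manipulation bullet above can be sketched in Python, which Lambda@Edge supports (CloudFront Functions, by contrast, are JavaScript only). This is a minimal viewer-request handler; the event shape `Records[0].cf.request` follows CloudFront's documented event format, while the URI normalization itself is just an illustrative use case:

```python
def handler(event, context):
    """Minimal Lambda@Edge viewer-request sketch: normalize the cache key."""
    request = event["Records"][0]["cf"]["request"]
    # Lower-case the URI so /Logo.PNG and /logo.png share one cache entry
    request["uri"] = request["uri"].lower()
    # Returning the (possibly modified) request forwards it toward the cache/origin
    return request

# Local smoke test with a hand-built event
event = {"Records": [{"cf": {"request": {"uri": "/Assets/Logo.PNG", "method": "GET"}}}]}
print(handler(event, None)["uri"])  # /assets/logo.png
```

Normalizing the cache key this way avoids storing duplicate objects for URIs that differ only in case, improving the hit ratio.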
Visual Anchors
CloudFront Caching Flow
Global Accelerator Anycast Routing
\begin{tikzpicture}[node distance=2cm, every node/.style={draw, thick, align=center}]
  \node (User) [circle, fill=blue!10] {Global\\User};
  \node (Edge) [right of=User, xshift=2cm, rounded corners, fill=green!10] {AWS Edge\\(Anycast IP)};
  \node (Backbone) [right of=Edge, xshift=2cm, fill=orange!10] {AWS Global\\Network};
  \node (Region1) [above right of=Backbone, xshift=2cm] {Region A\\(Primary)};
  \node (Region2) [below right of=Backbone, xshift=2cm] {Region B\\(Failover)};
  \draw[->, >=stealth] (User) -- (Edge) node[midway, above] {Public Internet};
  \draw[->, >=stealth] (Edge) -- (Backbone) node[midway, above] {Entry};
  \draw[->, >=stealth] (Backbone) -- (Region1);
  \draw[->, >=stealth] (Backbone) -- (Region2);
  \node[draw=none, below of=Backbone, yshift=1cm] {\textit{Optimized Path}};
\end{tikzpicture}
Definition-Example Pairs
- Static Content Caching: Storing unchanging files like images or CSS at the edge.
- Example: A global news site caches its logo at 400+ edge locations so a user in Tokyo downloads it from a nearby edge location, not from the origin in New York.
- Dynamic Acceleration: Speeding up non-cacheable content by keeping connections open to the origin.
- Example: A shopping cart API uses CloudFront; while the JSON response isn't cached, the SSL handshake happens at the edge, reducing the time to establish a secure connection.
- IP Whitelisting Simplification: Using fixed entry points for global apps.
- Example: A corporate office with strict firewall rules uses Global Accelerator so they only have to whitelist two specific IPs for their global ERP system.
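The dynamic-acceleration benefit above can be quantified with a back-of-the-envelope model. The 3-round-trip figure assumes TCP setup (1 RTT) plus a full TLS 1.2 handshake (~2 RTTs); real numbers vary with TLS version and session reuse, and the RTT values below are assumptions for illustration:

```python
# Connection setup cost ~= handshake round trips x round-trip time (RTT)
HANDSHAKE_RTTS = 3          # 1 RTT for TCP + ~2 RTTs for a full TLS 1.2 handshake

rtt_to_origin_ms = 200      # assumed user -> distant origin RTT
rtt_to_edge_ms = 20         # assumed user -> nearby edge location RTT

setup_via_origin = HANDSHAKE_RTTS * rtt_to_origin_ms   # 600 ms
setup_via_edge = HANDSHAKE_RTTS * rtt_to_edge_ms       # 60 ms

print(f"Saved per new connection: {setup_via_origin - setup_via_edge} ms")  # 540 ms
```

Terminating TLS at the edge pays off even for fully dynamic responses, because the expensive handshake happens over the short user-to-edge hop while the edge keeps warm connections open to the origin.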
Worked Examples
Problem: Global Real-Time Gaming
Scenario: A developer is launching a UDP-based multiplayer game. Players are located in Europe and North America. The game servers are in us-east-1 and eu-central-1.
Solution Strategy:
- Select Service: Choose AWS Global Accelerator. CloudFront is ruled out because it handles only HTTP/HTTPS (Layer 7) traffic and cannot accelerate UDP.
- Configuration: Deploy the game servers behind Network Load Balancers (NLB) in both regions.
- Global Accelerator Setup: Create an accelerator and add both NLBs as endpoints.
- Traffic Management: Set the Traffic Dial to 100% for both endpoint groups. Global Accelerator automatically routes each player to the closest healthy NLB via its static Anycast IPs.
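The routing behavior described above can be sketched as a toy model: prefer the lowest-latency healthy region, honoring its traffic dial. This is a simulation for intuition only, not Global Accelerator's actual algorithm:

```python
import random

def route_request(user_latency_ms, endpoints):
    """Toy model of Anycast routing with health checks and a traffic dial.
    endpoints: {region: {"healthy": bool, "dial": 0-100}}
    user_latency_ms: {region: measured latency from this user}
    """
    candidates = sorted(
        (r for r, e in endpoints.items() if e["healthy"]),
        key=lambda r: user_latency_ms[r],
    )
    for region in candidates:
        # The dial admits only that percentage of traffic; the rest spills over
        if random.uniform(0, 100) < endpoints[region]["dial"]:
            return region
    return candidates[-1] if candidates else None  # last-resort fallback

endpoints = {
    "us-east-1":    {"healthy": True,  "dial": 100},
    "eu-central-1": {"healthy": False, "dial": 100},  # simulated regional outage
}
latency = {"us-east-1": 90, "eu-central-1": 15}
# Even though eu-central-1 is closer, the outage forces failover to us-east-1
print(route_request(latency, endpoints))  # us-east-1
```

Dialing a region down to, say, 10% is how you run a canary in one region while the spillover flows to the next-closest healthy endpoint group.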
Checkpoint Questions
- What are the two primary benefits of using CloudFront for static assets? (Answer: Reduced latency and reduced origin bandwidth costs).
- True or False: Global Accelerator provides caching for HTTP responses. (Answer: False; that is a CloudFront function).
- How does Global Accelerator handle a regional outage? (Answer: It detects the failure via health checks and re-routes Anycast traffic to the next closest healthy region).
- Which service would you use to accelerate an application that uses a custom protocol on TCP port 8080? (Answer: AWS Global Accelerator).
Muddy Points & Cross-Refs
- CloudFront vs. Accelerator for APIs: If your API is purely RESTful (HTTPS), CloudFront is often preferred for its caching and WAF integration. If your API is extremely latency-sensitive and non-HTTP, use Global Accelerator.
- TTL vs. Invalidation: Invalidation forces an object out of the cache but incurs charges after the first 1,000 invalidation paths per month. It is usually better to use versioned filenames than frequent invalidations.
- Cross-Ref: See Chapter 2 (Route 53) for how to use Latency-Based Routing as an alternative to Global Accelerator.
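The versioned-filenames point above can be sketched as a cache-key builder. The naming scheme (`.v1`, `.v2`) is an illustrative convention, not an AWS feature; any scheme that changes the URI works:

```python
def versioned_uri(uri, version):
    """Embed a version in the filename so each release yields a new cache key.
    A new key is simply a cache miss, so the updated asset is fetched fresh
    from the origin with no paid invalidation required."""
    stem, dot, ext = uri.rpartition(".")
    return f"{stem}.v{version}{dot}{ext}" if dot else f"{uri}.v{version}"

print(versioned_uri("/css/site.css", 1))  # /css/site.v1.css
print(versioned_uri("/css/site.css", 2))  # /css/site.v2.css (new key -> cache miss)
```

Old versions simply age out of the cache when their TTL expires, while the TTL on versioned assets can be set very long since their content never changes.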
Comparison Tables
Comparison: CloudFront vs. Global Accelerator
| Feature | Amazon CloudFront | AWS Global Accelerator |
|---|---|---|
| Traffic Type | HTTP / HTTPS (Layer 7) | TCP / UDP (Layer 4) |
| Caching | Yes (Edge & Regional) | No |
| IP Addresses | Dynamic (DNS-based) | Static (2 Anycast IPs) |
| Main Goal | Content Delivery & Caching | Network Path Optimization |
| DDoS Protection | AWS Shield Standard/Advanced | AWS Shield Standard/Advanced |
| Custom Logic | Lambda@Edge / Functions | None (Network routing only) |