AWS Global Services Study Guide: CloudFront, Global Accelerator, & Edge Computing
Global service offerings (for example, AWS Global Accelerator, Amazon CloudFront, edge computing services)
This guide explores the mechanisms AWS provides to deliver content and applications to a global audience with minimum latency and maximum reliability. It focuses on the strategic use of the AWS Global Network to optimize user experience regardless of geographic location.
Learning Objectives
After studying this guide, you should be able to:
- Differentiate between Amazon CloudFront and AWS Global Accelerator based on OSI layers and use cases.
- Identify how edge computing services like Lambda@Edge enhance content delivery.
- Explain the performance benefits of entering the AWS Global Backbone as early as possible.
- Select the appropriate global service for specific application types (e.g., static web vs. real-time gaming).
Key Terms & Glossary
- Edge Location: A site that CloudFront uses to cache copies of your content for faster delivery to users at any location.
- Origin: The source of truth for your content (e.g., an S3 bucket or an EC2 instance) from which CloudFront gets its files.
- Anycast IP: A network addressing and routing method in which the same IP address is announced from many locations, so each incoming request is delivered to the nearest node advertising that address.
- Network Hop: Each step or intermediate device (like a router) through which data passes between source and destination.
- AWS Backbone: The private, high-speed fiber network managed by AWS that connects AWS Regions and Edge Locations.
The "Big Idea"
The core philosophy of AWS global services is Latency Reduction through Proximity. By moving either the content (CloudFront) or the entry point (Global Accelerator) closer to the user, AWS bypasses the congested and unpredictable public internet. This ensures that data travels the shortest possible distance over the unmanaged internet before hitting the optimized, private AWS network.
Formula / Concept Box
| Feature | Amazon CloudFront | AWS Global Accelerator |
|---|---|---|
| Primary Goal | Content Delivery & Caching | Network Path Optimization |
| OSI Layer | Layer 7 (HTTP/HTTPS) | Layer 3/4 (IP/TCP/UDP) |
| IP Address | Dynamic (DNS-based) | Static Anycast IPs |
| Main Benefit | Caching static/dynamic content | Reducing network hops/latency |
| Protocol Support | HTTP, HTTPS, WebSockets | TCP, UDP |
Hierarchical Outline
- I. Amazon CloudFront (Content Delivery Network)
- Caching Mechanism: Reduces load on Origins by storing content at Edge Locations.
- Regional Edge Caches: Sit between Edge Locations and Origins to further reduce origin fetch requests.
- Security: Integrates with AWS WAF and AWS Shield for edge-level protection.
- II. AWS Global Accelerator
- Static Anycast IPs: Provides two fixed IP addresses that do not change, simplifying DNS and firewall management.
- Network Entry: Traffic enters the AWS network at the Edge Location closest to the user.
- Traffic Dial: Allows weight-based routing to different regions for testing or migration.
- III. Edge Computing (Lambda@Edge)
- Event-Driven: Executes code in response to CloudFront events (Viewer Request, Origin Request, etc.).
- Use Cases: A/B testing, header manipulation, and personalized content generation at the edge.
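The Lambda@Edge use case from section III can be sketched as a viewer-request handler. The event shape below follows CloudFront's Lambda@Edge event structure (request under event['Records'][0]['cf']['request'], lowercased header names, each header a list of key/value dicts); the cookie name lang and the language paths are assumptions for illustration.

```python
def handler(event, context):
    """Viewer-request sketch: redirect based on a 'lang' cookie at the edge."""
    request = event["Records"][0]["cf"]["request"]
    headers = request.get("headers", {})

    # Cookies arrive as raw "name=value; name=value" header strings.
    lang = None
    for cookie_header in headers.get("cookie", []):
        for part in cookie_header["value"].split(";"):
            name, _, value = part.strip().partition("=")
            if name == "lang":
                lang = value

    if lang in ("en", "fr") and not request["uri"].startswith(f"/{lang}/"):
        # Return a 302 directly from the edge, without hitting the origin.
        return {
            "status": "302",
            "statusDescription": "Found",
            "headers": {
                "location": [{"key": "Location", "value": f"/{lang}{request['uri']}"}]
            },
        }
    return request  # pass the request through unchanged

# Demo invocation with a synthetic CloudFront event:
demo_event = {"Records": [{"cf": {"request": {
    "uri": "/index.html",
    "headers": {"cookie": [{"key": "Cookie", "value": "session=abc; lang=fr"}]},
}}}]}
print(handler(demo_event, None)["headers"]["location"][0]["value"])  # → /fr/index.html
```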
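The Traffic Dial from section II can also be modeled in a few lines. This is a deliberately simplified proportional model (the real dial caps traffic per endpoint group and spills the remainder to other groups); the function name and region keys are illustrative only.

```python
def traffic_split(dials):
    """Fraction of traffic each region receives, given per-region dials (0-100).

    Simplified proportional model of Global Accelerator's traffic dial:
    a dial of 0 takes a region out of rotation, and remaining traffic is
    shared in proportion to the dials. Not the AWS API.
    """
    total = sum(dials.values())
    if total == 0:
        return {region: 0.0 for region in dials}
    return {region: dial / total for region, dial in dials.items()}

# Canary a migration: dial the new region up to 10 while the old stays at 100.
split = traffic_split({"us-east-1": 100, "eu-west-1": 10})
print(split)  # eu-west-1 receives roughly 9% of traffic (10/110)
```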
Visual Anchors
CloudFront Traffic Flow
Network Path Optimization (TikZ)
\begin{tikzpicture}[node distance=2cm, font=\small]
  \draw[thick, gray, dashed] (0,0) -- (10,0) node[anchor=north west] {Public Internet};
  \draw[thick, orange] (2,2) -- (10,2) node[anchor=south west] {AWS Private Backbone};
  % Labels
  \node (User) at (0,1) [draw, circle] {User};
  \node (Edge) at (2,1) [draw, rectangle, fill=blue!10] {Edge Location};
  \node (Region) at (9,1) [draw, rectangle, fill=orange!20] {AWS Region};
  % Standard Route
  \draw[->, red, thick] (User) .. controls (4, -1) and (7, -1) .. (Region);
  \node[red] at (5.5, -0.8) {High Latency (Many Hops)};
  % Optimized Route
  \draw[->, green!60!black, thick] (User) -- (Edge);
  \draw[->, green!60!black, thick] (Edge) -- (2,2) -- (9,2) -- (Region);
  \node[green!60!black] at (5.5, 2.3) {Global Accelerator Path (Low Latency)};
\end{tikzpicture}
Definition-Example Pairs
- Caching: Storing a copy of a file in a temporary storage location.
- Example: An e-commerce site stores its logo.png at a Tokyo edge location so a user in Japan doesn't have to fetch it from the origin in Virginia.
- Edge Logic: Running code at the edge of the network rather than the central server.
- Example: Using Lambda@Edge to inspect a user's cookie and redirect them to a language-specific version of a site (e.g., /en/ or /fr/) without hitting the origin server.
- Static Anycast IP: A fixed IP address that routes traffic to the nearest healthy endpoint.
- Example: A global gaming company gives players two static IPs for their client software; the players connect to the nearest AWS data center automatically.
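The anycast example above can be sketched as a nearest-healthy-endpoint selection. This is a simplified model (real anycast routing is decided by BGP, with Global Accelerator layering health checks on top); the endpoint names and latency figures are made up for illustration.

```python
def route_anycast(client_latency_ms, health):
    """Pick the nearest *healthy* endpoint, as anycast plus health checks do.

    client_latency_ms maps endpoint -> measured latency from the client;
    health maps endpoint -> bool. Unhealthy endpoints are skipped, so the
    client fails over to the next-nearest site automatically.
    """
    healthy = [e for e in client_latency_ms if health.get(e)]
    if not healthy:
        raise RuntimeError("no healthy endpoints")
    return min(healthy, key=lambda e: client_latency_ms[e])

latencies = {"tokyo": 12, "sydney": 95, "virginia": 160}
print(route_anycast(latencies, {"tokyo": True, "sydney": True, "virginia": True}))
print(route_anycast(latencies, {"tokyo": False, "sydney": True, "virginia": True}))
```

The same two static IPs front every site; only which site answers changes, which is why client configuration and firewall rules never need updating.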
Worked Examples
Scenario 1: High-Performance Media Streaming
Problem: A video-on-demand service experiences high buffering for users in Australia while the origin is in us-east-1.
Solution:
- Deploy Amazon CloudFront.
- Configure the S3 bucket as the origin.
- The first Australian user triggers an Origin Request; the video is cached in Sydney.
- Subsequent users in Australia experience near-instant playback from the Edge Location cache.
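The origin-miss/edge-hit behavior in the steps above can be sketched as a toy model. The EdgeCache class, its TTL, and the paths are illustrative assumptions, not a CloudFront API.

```python
import time

class EdgeCache:
    """Toy model of a CloudFront edge location cache (illustrative only)."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        self.store = {}          # path -> (content, cached_at)
        self.origin_fetches = 0  # counts round trips back to the origin

    def fetch_from_origin(self, path):
        # Stand-in for an HTTP request to the origin (e.g., an S3 bucket).
        self.origin_fetches += 1
        return f"content-of-{path}"

    def get(self, path, now=None):
        now = now if now is not None else time.time()
        entry = self.store.get(path)
        if entry and now - entry[1] < self.ttl:
            return entry[0], "HIT"        # served from the edge cache
        content = self.fetch_from_origin(path)
        self.store[path] = (content, now)
        return content, "MISS"            # first request, or TTL expired

cache = EdgeCache(ttl_seconds=3600)
print(cache.get("/episode1.mp4", now=0)[1])     # MISS: fetched from origin
print(cache.get("/episode1.mp4", now=10)[1])    # HIT: served from the edge
print(cache.get("/episode1.mp4", now=4000)[1])  # MISS: TTL expired, re-fetch
```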
Scenario 2: Global Financial API (UDP Protocol)
Problem: A banking application uses a custom UDP-based protocol for high-speed stock data. They need a single entry point for firewall whitelisting but want minimal latency globally.
Solution:
- Deploy AWS Global Accelerator.
- Assign the provided Static Anycast IPs to the API endpoints.
- Traffic enters the AWS backbone at the local edge, bypassing internet congestion.
- Since Global Accelerator supports non-HTTP protocols (UDP), it functions where CloudFront cannot.
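Why the backbone path wins even with no caching can be shown with a toy hop-by-hop sum. The hop names and per-hop latencies below are invented for illustration; the point is only that the accelerator path replaces several congested transit hops with one short hop to the nearest edge plus the private backbone.

```python
def path_latency_ms(hops):
    """Total one-way latency over a sequence of (hop_name, latency_ms) pairs."""
    return sum(ms for _, ms in hops)

# Illustrative numbers only: a congested public-internet path vs. entering
# the AWS backbone at the nearest edge (the Global Accelerator path).
internet_path = [("isp", 10), ("ix-1", 25), ("transit-1", 40),
                 ("transit-2", 45), ("ix-2", 30), ("region", 20)]
accelerator_path = [("isp", 10), ("nearest-edge", 5), ("aws-backbone", 60)]

print(path_latency_ms(internet_path))     # 170
print(path_latency_ms(accelerator_path))  # 75
```

Fewer hops also means fewer points of congestion and packet loss, which matters even more for UDP, where lost packets are not retransmitted by the transport layer.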
Checkpoint Questions
- Which service should you choose if you need to provide two static IP addresses for your application's global entry point?
- True or False: Amazon CloudFront can cache dynamic content as well as static content.
- At which point in the request lifecycle can a Lambda@Edge function be triggered?
- Why does Global Accelerator improve performance even if there is no caching involved?
Muddy Points & Cross-Refs
- CloudFront vs. Accelerator: This is the most common confusion. Remember: CloudFront = Caching (Layer 7); Global Accelerator = Routing (Layers 3/4). If you need content stored locally, use CloudFront. If you need a faster, more reliable path for raw traffic, use Accelerator.
- Lambda@Edge vs. CloudFront Functions: Lambda@Edge is for heavy processing (e.g., external API calls); CloudFront Functions are for ultra-lightweight, high-scale tasks (e.g., URL rewrites).
- Cross-Refs: See Route 53 Geoproximity Routing for DNS-based global traffic management.
Comparison Tables
When to use which?
| Requirement | Use CloudFront | Use Global Accelerator |
|---|---|---|
| HTTP/S Traffic | ✅ | ✅ |
| TCP/UDP Traffic | ❌ | ✅ |
| Caching Files | ✅ | ❌ |
| Static IP Requirements | ❌ | ✅ |
| Edge Compute Support | ✅ | ❌ |
| Origin is outside AWS | ✅ | ❌ (endpoints must be AWS resources: ALB, NLB, EC2, or Elastic IP) |