Purpose-Built AWS Services for Diverse Workloads
This study guide focuses on the strategic selection of AWS services based on specific workload requirements. Moving beyond a "one-size-fits-all" approach, AWS offers specialized tools for compute, storage, and databases to optimize for performance, cost, and availability.
Learning Objectives
By the end of this guide, you should be able to:
- Distinguish between Infrastructure as a Service (IaaS) and Managed Services.
- Categorize compute options (EC2, Lambda, Containers) based on workload density and elasticity.
- Select appropriate storage types (Object, Block, File) for specific data access patterns.
- Match database engines (Relational vs. NoSQL) to application requirements.
- Identify hybrid and edge computing solutions for low-latency or on-premises requirements.
Key Terms & Glossary
- Managed Service: A service where AWS handles the underlying infrastructure, patching, and backups (e.g., RDS, DynamoDB).
- IaaS (Infrastructure as a Service): Services providing fundamental compute, networking, and storage where the user has maximum control (e.g., EC2).
- Serverless: A cloud-native development model that allows you to build and run applications without managing servers (e.g., AWS Lambda).
- RTO (Recovery Time Objective): The maximum acceptable delay between the interruption of service and restoration.
- RPO (Recovery Point Objective): The maximum acceptable amount of data loss measured in time.
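The RTO/RPO definitions above become concrete with a small calculation. This is an illustrative sketch, not an AWS API: the backup timestamps and the one-hour RPO are assumed values.

```python
from datetime import datetime, timedelta

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """Worst-case data loss if a failure occurs at `failure`:
    everything written since the last completed backup is lost."""
    return failure - last_backup

def meets_rpo(last_backup: datetime, failure: datetime, rpo: timedelta) -> bool:
    """True if the actual loss window stays within the stated RPO."""
    return data_loss_window(last_backup, failure) <= rpo

# Hourly backups against a 1-hour RPO: a failure 45 minutes after the
# last backup loses at most 45 minutes of data, which meets the RPO.
last = datetime(2024, 1, 1, 12, 0)
fail = datetime(2024, 1, 1, 12, 45)
print(meets_rpo(last, fail, timedelta(hours=1)))  # True
```

The same check with a failure 1 hour 45 minutes after the last backup would return False, signaling that the backup schedule must tighten to honor the RPO.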
The "Big Idea"
The core philosophy of modern AWS architecture is specialization. Instead of forcing every workload into a standard virtual machine, architects choose "purpose-built" services. For example, a high-frequency trading app might use EC2 with Cluster Placement Groups, while a simple image-processing task uses Lambda. This specialization ensures you only pay for the exact performance characteristics your application needs.
Formula / Concept Box
| Service Type | Control Level | Scaling Model | Best Use Case |
|---|---|---|---|
| EC2 (IaaS) | Highest | Manual/Auto Scaling | Legacy apps, custom OS needs |
| ECS/EKS (Containers) | High | Automated Orchestration | Microservices, porting Docker apps |
| Lambda (Serverless) | Low | Instant/Native | Event-driven, short-running tasks |
| Fargate | Medium | Managed Containerization | Running containers without managing EC2 |
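The Lambda row in the table can be illustrated with a minimal handler. This is a sketch of the standard Lambda handler shape reacting to an S3 `ObjectCreated` notification; the bucket and key values are assumed, and no AWS call is made here.

```python
# Minimal AWS Lambda handler shape: Lambda invokes handler(event, context)
# once per event. Here we parse an S3 ObjectCreated notification and
# report the object that would be processed (e.g. an image to resize).
def handler(event, context=None):
    record = event["Records"][0]              # S3 sends one record per notification
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {"status": "queued", "object": f"s3://{bucket}/{key}"}

# Local invocation with a trimmed-down sample event (assumed contents):
sample_event = {
    "Records": [{"s3": {"bucket": {"name": "photos"},
                        "object": {"key": "uploads/cat.jpg"}}}]
}
print(handler(sample_event))
```

Note there is no server, port, or main loop in the code: the event source and invocation are entirely managed by the platform, which is what the "Low control, instant scaling" trade-off in the table means in practice.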
Hierarchical Outline
- Compute Strategy
- Elastic Compute Cloud (EC2): Virtual servers (IaaS). Use for total control.
- AWS Lambda: Event-driven serverless compute. No server management.
- Containers (ECS/EKS): High server density; ideal for microservices.
- Storage Fundamentals
- Amazon S3 (Object): Scalable, durable storage for any data type.
- Amazon EBS (Block): High-performance volumes for EC2 instances.
- Amazon EFS/FSx (File): Shared file systems for Linux (EFS) or Windows (FSx).
- Database Solutions
- Amazon RDS: Managed relational databases (SQL).
- Amazon Aurora: High-performance, cloud-native relational DB.
- Amazon DynamoDB: Key-value NoSQL database for single-digit millisecond latency.
- Hybrid & Edge
- AWS Outposts: AWS infrastructure deployed on-premises.
- AWS Wavelength: 5G edge computing for ultra-low latency mobile apps.
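The DynamoDB entry in the outline describes a key-value model: every item is addressed by its primary key. The sketch below builds the low-level request parameters that a client such as boto3's `put_item`/`get_item` would send; no AWS call is made, and the table name, key schema, and attributes are assumed for illustration.

```python
# DynamoDB low-level items use typed attribute values ({"S": ...} for
# strings). These helpers assemble request parameter dicts only; a real
# client (e.g. boto3) would transmit them to the service.
def put_item_params(table, pk, attrs):
    item = {"pk": {"S": pk}}
    item.update({k: {"S": str(v)} for k, v in attrs.items()})
    return {"TableName": table, "Item": item}

def get_item_params(table, pk):
    return {"TableName": table, "Key": {"pk": {"S": pk}}}

params = put_item_params("Orders", "order#1001", {"status": "shipped"})
print(params["Item"]["pk"])  # {'S': 'order#1001'}
```

Because every read and write targets one key, the service can partition data horizontally, which is what sustains single-digit millisecond latency at any scale.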
Visual Anchors
Database Selection Flowchart
Global Infrastructure Model
\begin{tikzpicture}[
    region/.style={draw, thick, rectangle, minimum width=4cm, minimum height=3cm, fill=blue!10},
    az/.style={draw, dashed, rectangle, minimum width=1.2cm, minimum height=1.5cm, fill=white}]
  \node[region] (R1) at (0,0) {\textbf{AWS Region}};
  \node[az] (AZ1) at (-1.1, -0.3) {AZ 1};
  \node[az] (AZ2) at (1.1, -0.3) {AZ 2};
  \draw[<->, thick] (AZ1) -- (AZ2) node[midway, above] {\small Low Latency};
  \node[anchor=north] at (R1.south) {Physically Isolated Locations};
\end{tikzpicture}
Definition-Example Pairs
- Term: Spot Instances
- Definition: Spare EC2 capacity offered at up to a 90% discount over On-Demand pricing; AWS can reclaim it with a two-minute warning.
- Example: A research university running large-scale data simulations that can be interrupted and resumed later.
- Term: Application Load Balancer (ALB)
- Definition: A Layer 7 load balancer that routes traffic based on content (path, host).
- Example: Routing traffic for `api.example.com` to one target group and `example.com/images` to another.
- Term: AWS Outposts
- Definition: A fully managed service that offers the same AWS infrastructure to virtually any data center.
- Example: A hospital needing to process patient data locally for regulatory compliance while using the AWS API.
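The ALB pair above routes on host and path at Layer 7. A minimal sketch of that matching logic follows; the rule contents and target-group names are illustrative assumptions, and a real ALB evaluates listener rules in priority order with a default action as the fallback.

```python
# Mimics ALB listener-rule evaluation: rules are checked in priority
# order; the first host/path match wins, otherwise traffic goes to the
# default target group. Rules and target names are assumed examples.
RULES = [
    {"host": "api.example.com", "path": None, "target": "tg-api"},
    {"host": None, "path": "/images", "target": "tg-images"},
]
DEFAULT_TARGET = "tg-web"

def route(host, path):
    for rule in RULES:
        host_ok = rule["host"] is None or host == rule["host"]
        path_ok = rule["path"] is None or path.startswith(rule["path"])
        if host_ok and path_ok:
            return rule["target"]
    return DEFAULT_TARGET

print(route("api.example.com", "/v1/users"))     # tg-api
print(route("example.com", "/images/logo.png"))  # tg-images
print(route("example.com", "/"))                 # tg-web
```

A Layer 4 load balancer (NLB) cannot make these decisions, because host headers and URL paths only exist at Layer 7.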
Worked Examples
Example 1: The Ultra-Low-Latency AR App
Scenario: A company is building an Augmented Reality (AR) app for mobile users on 5G networks requiring <10 ms latency.
Solution: Use AWS Wavelength. By hosting the compute-intensive AR processing within carrier 5G networks, the data bypasses the public internet, meeting the extreme latency requirement.
Example 2: The Variable-Traffic Web Store
Scenario: A seasonal retailer experiences 100x traffic spikes during Black Friday but minimal traffic at night.
Solution: Implement an Auto Scaling group of EC2 instances behind an ALB. Alternatively, migrate to AWS Lambda and Amazon DynamoDB, which scale automatically with every individual request, keeping costs minimal during idle periods.
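The Auto Scaling approach in Example 2 can be sketched numerically. Target tracking roughly scales the fleet in proportion to the ratio of the observed metric to its target; the capacities and CPU figures below are assumed, and real Auto Scaling additionally applies cooldowns, instance warm-up, and configured min/max bounds.

```python
import math

def desired_capacity(current_capacity, metric, target, min_cap=1, max_cap=100):
    """Proportional target-tracking estimate: resize the fleet so the
    per-instance metric returns to `target` (e.g. 50% average CPU)."""
    desired = math.ceil(current_capacity * metric / target)
    return max(min_cap, min(max_cap, desired))

# Black Friday: 4 instances at 90% average CPU against a 50% target.
print(desired_capacity(4, 90, 50))  # 8
# Overnight lull: 8 instances at 5% CPU shrink back toward the floor.
print(desired_capacity(8, 5, 50))   # 1
```

The Lambda alternative removes this calculation entirely: concurrency scales per request, so there is no fleet size to manage.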
Checkpoint Questions
- What is the primary difference between Amazon EBS and Amazon EFS in terms of connectivity?
- Which database service is most appropriate for a NoSQL workload requiring single-digit millisecond response times?
- Why would an architect choose AWS Fargate over managing their own EC2-based ECS cluster?
- Which load balancer should be used for ultra-high performance, low-latency traffic at Layer 4 (TCP/UDP)?
> [!TIP]
> Remember: S3 is for Object storage (unlimited, web-accessible), while EBS is for Block storage (attached to a single EC2 instance, like a hard drive).

> [!WARNING]
> Don't use RDS if you need access to the underlying OS of the database server; use EC2 with a manual DB installation instead.