AWS Lab: Identifying and Implementing Cost Optimization Opportunities
Identify opportunities for cost optimizations
This lab focuses on the practical application of Content Domain 3.5 of the AWS Certified Solutions Architect - Professional (SAP-C02) exam. You will learn to identify underutilized resources and apply optimization strategies including rightsizing, storage tiering, and cost-allocation tagging.
> [!WARNING]
> This lab involves modifying AWS resources. Ensure you follow the Teardown section at the end to avoid unexpected charges.
Prerequisites
- An active AWS Account.
- AWS CLI installed and configured with `AdministratorAccess` or equivalent permissions.
- Basic familiarity with EC2 and S3 services.
- Region: we will use `us-east-1` (N. Virginia) for this lab.
Learning Objectives
- Implement Resource Tagging: Establish a cost-allocation strategy using tags.
- Identify & Rightsize: Simulate the identification of over-provisioned EC2 instances and modify them for cost efficiency.
- Automate Storage Tiering: Configure S3 Lifecycle policies to move data to lower-cost storage tiers.
- Analyze Trends: Use AWS Cost Explorer to visualize expenditure and usage patterns.
Architecture Overview
This lab demonstrates the flow from resource monitoring to active optimization.
Step-by-Step Instructions
Step 1: Establish Cost Allocation Tags
Tags are the foundation of cost visibility. We will tag an existing or new resource to track its cost center.
CLI Interface:
```shell
# Create a dummy EC2 instance for the lab if you don't have one.
# Replace <SUBNET_ID> with a valid subnet ID from your VPC.
aws ec2 run-instances \
  --image-id ami-0c101f26f147fa7fd \
  --count 1 \
  --instance-type t3.micro \
  --subnet-id <SUBNET_ID> \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=CostCenter,Value=Research},{Key=Project,Value=BrainyBee}]' \
  --region us-east-1
```

Console alternative:
- Open the EC2 Console.
- Select Instances > Launch Instances.
- Scroll to Advanced Details > Tags.
- Add Tag: Key=`CostCenter`, Value=`Research`.
- Launch the instance.
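After tagging, it helps to know the shape of the `describe-tags` response you will be checking later. The snippet below parses a hand-written sample of that JSON (the values are illustrative, not live output) with Python's stdlib to pull out the `CostCenter` key:

```shell
# Sample of the JSON shape returned by `aws ec2 describe-tags`
# (illustrative values; real output comes from the CLI call)
cat > sample-tags.json <<'EOF'
{
  "Tags": [
    {"Key": "CostCenter", "ResourceId": "i-0123456789abcdef0",
     "ResourceType": "instance", "Value": "Research"},
    {"Key": "Project", "ResourceId": "i-0123456789abcdef0",
     "ResourceType": "instance", "Value": "BrainyBee"}
  ]
}
EOF

# Confirm the CostCenter tag is present
python3 - <<'EOF'
import json
tags = {t["Key"]: t["Value"] for t in json.load(open("sample-tags.json"))["Tags"]}
print("CostCenter =", tags.get("CostCenter", "MISSING"))
EOF
```

The same dictionary-building one-liner works unchanged on real `describe-tags` output piped from the CLI.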
Step 2: Simulate Rightsizing an EC2 Instance
In a real-world scenario, you would use AWS Compute Optimizer. Here, we will manually "rightsize" an instance from t3.medium (simulated) down to t3.nano to reduce hourly costs.
CLI Interface:
```shell
# 1. Stop the instance (replace <INSTANCE_ID> with your actual ID)
aws ec2 stop-instances --instance-ids <INSTANCE_ID>

# Wait until the instance is fully stopped; the type cannot be
# changed while it is running or still stopping
aws ec2 wait instance-stopped --instance-ids <INSTANCE_ID>

# 2. Change the instance type to a smaller size
#    (the value is passed as a small JSON AttributeValue document)
aws ec2 modify-instance-attribute \
  --instance-id <INSTANCE_ID> \
  --instance-type "{\"Value\": \"t3.nano\"}"

# 3. Start the instance again
aws ec2 start-instances --instance-ids <INSTANCE_ID>
```

> [!TIP]
> Moving from x86 (`t3`) to ARM-based Graviton (`t4g`) can deliver up to 40% better price/performance, provided your AMI and workload support ARM.
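To quantify the impact of a rightsize, compare the on-demand rates before and after. The rates below are the published us-east-1 on-demand prices at the time of writing; verify them against the current AWS pricing page before relying on the result:

```shell
# Estimated monthly savings from rightsizing t3.medium -> t3.nano
# (us-east-1 on-demand rates; confirm against current AWS pricing)
OLD_RATE=0.0416   # t3.medium, $/hour
NEW_RATE=0.0052   # t3.nano, $/hour
HOURS=730         # average hours in a month
awk -v o="$OLD_RATE" -v n="$NEW_RATE" -v h="$HOURS" 'BEGIN {
  printf "Monthly savings: $%.2f (%.1f%% reduction)\n", (o - n) * h, (o - n) / o * 100
}'
# -> Monthly savings: $26.57 (87.5% reduction)
```

An ~87% hourly-rate reduction is why rightsizing is usually the first optimization the exam expects you to reach for.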
Step 3: Implement S3 Storage Tiering
To optimize storage costs for infrequently accessed data, we will apply a lifecycle policy.
CLI Interface:
Create a file named lifecycle.json:
```json
{
  "Rules": [
    {
      "ID": "MoveToIAAfter30Days",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "STANDARD_IA"
        }
      ]
    }
  ]
}
```

Apply the policy:
```shell
aws s3api put-bucket-lifecycle-configuration \
  --bucket <YOUR_UNIQUE_BUCKET_NAME> \
  --lifecycle-configuration file://lifecycle.json
```

Checkpoints
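S3 rejects malformed lifecycle documents with a terse error, so it is worth sanity-checking `lifecycle.json` locally. A self-contained sketch (it recreates the file from the step above, then validates it with Python's stdlib parser):

```shell
# Recreate lifecycle.json and sanity-check it before calling s3api
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "MoveToIAAfter30Days",
      "Status": "Enabled",
      "Filter": {"Prefix": "logs/"},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"}
      ]
    }
  ]
}
EOF

python3 - <<'EOF'
import json
rule = json.load(open("lifecycle.json"))["Rules"][0]
assert rule["Status"] == "Enabled"
# STANDARD_IA carries a 30-day minimum storage charge, so earlier
# transitions rarely make financial sense
assert rule["Transitions"][0]["Days"] >= 30
print("lifecycle.json OK:", rule["ID"])
EOF
```

Note that STANDARD_IA also has a 128 KB minimum billable object size, so tiny log objects may not get cheaper.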
- Tagging: Run `aws ec2 describe-tags --filters "Name=resource-id,Values=<INSTANCE_ID>"`. Verify the `CostCenter` tag is present.
- Rightsizing: Check the EC2 console; the instance type should now be `t3.nano`.
- S3 Policy: Run `aws s3api get-bucket-lifecycle-configuration --bucket <YOUR_UNIQUE_BUCKET_NAME>`. It should return your JSON configuration.
Analysis: Cost vs. Performance
Understanding the "Sweet Spot" in rightsizing is critical for the SAP-C02 exam. The following graph illustrates the goal of rightsizing: finding the minimum cost that meets the performance threshold.
```latex
\begin{tikzpicture}[scale=0.8]
  % Axes
  \draw[->] (0,0) -- (6,0) node[right] {Performance (CPU/RAM)};
  \draw[->] (0,0) -- (0,5) node[above] {\mbox{Cost (\$)}};
  % Cost curve
  \draw[thick, blue] (0.5,0.5) .. controls (2,1) and (4,2.5) .. (5.5,4.5);
  \node[blue] at (5.8,4) {\mbox{Cost}};
  % Minimum performance requirement
  \draw[dashed, red] (3,0) -- (3,5) node[above] {\mbox{Threshold}};
  % Optimal point
  \filldraw[black] (3,1.8) circle (2pt);
  \draw[<-, thick] (3.1,1.7) -- (4.5,1) node[right] {\mbox{Optimal Rightsize Point}};
\end{tikzpicture}
```
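The same logic can be expressed numerically: given a minimum requirement, pick the cheapest size that clears it. The sketch below uses t3-family memory sizes and us-east-1 on-demand rates (illustrative figures; confirm current pricing before acting on them):

```shell
# Pick the cheapest t3 instance meeting a minimum memory requirement.
# Columns: name, memory (GiB), on-demand $/hour (us-east-1, illustrative)
REQUIRED_MEM_GIB=1
awk -v need="$REQUIRED_MEM_GIB" '
  BEGIN { best = 1e9 }
  $2 >= need && $3 < best { best = $3; pick = $1 }
  END { printf "Optimal rightsize: %s at $%.4f/hr\n", pick, best }
' <<'EOF'
t3.nano   0.5 0.0052
t3.micro  1   0.0104
t3.small  2   0.0208
t3.medium 4   0.0416
EOF
# -> Optimal rightsize: t3.micro at $0.0104/hr
```

This is exactly the dot on the graph: the smallest instance at or above the threshold, not the largest one you can afford.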
Troubleshooting
| Problem | Potential Cause | Fix |
|---|---|---|
| `InsufficientInstanceCapacity` | The t3.nano is unavailable in the AZ. | Try a different AZ or use t3.micro. |
| `AccessDenied` | IAM user lacks `ce:*` or `ec2:ModifyInstanceAttribute` permissions. | Attach the `AdministratorAccess` policy for the lab. |
| S3 Bucket name error | Bucket names must be globally unique. | Add a random suffix like brainybee-lab-cost-12345. |
Stretch Challenge
Automated Decommissioning: Create an AWS Lambda function that identifies instances with the tag Status: ToBeDecommissioned and stops them automatically at 6 PM daily using Amazon EventBridge.
Cost Estimate
- EC2 (t3.nano): ~$0.0052/hour. If run for 30 mins: $0.0026.
- S3 Standard Storage: the first 50 TB is $0.023 per GB-month. For lab-sized data the cost rounds to $0.00.
- AWS Cost Explorer: Free for the basic interface.
- Total Estimated Spend: <$0.01 USD.
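The EC2 line item above can be double-checked with quick arithmetic:

```shell
# 30 minutes of t3.nano at the on-demand rate quoted above
awk 'BEGIN { printf "EC2 cost: $%.4f\n", 0.0052 * 0.5 }'
# -> EC2 cost: $0.0026
```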
Concept Review
| Strategy | Best For... | Key Constraint |
|---|---|---|
| Spot Instances | Fault-tolerant workloads (e.g., Batch processing) | Can be interrupted with 2-min notice. |
| Savings Plans | Consistent compute usage over 1-3 years | Commitment to a $/hour spend. |
| Rightsizing | Over-provisioned legacy migrations | Requires downtime to change instance types. |
| S3 Glacier | Long-term archival (Compliance logs) | Retrieval times range from minutes to hours. |
Clean-Up / Teardown
To avoid ongoing charges, run the following commands immediately:
```shell
# 1. Terminate the EC2 instance
aws ec2 terminate-instances --instance-ids <INSTANCE_ID>

# 2. Delete the S3 bucket (--force empties it before removal)
aws s3 rb s3://<YOUR_UNIQUE_BUCKET_NAME> --force

# 3. Delete any billing alarms created during exploration
aws cloudwatch delete-alarms --alarm-names "BudgetAlarm"
```