Hands-On Lab: Determining High-Performing and Scalable AWS Storage Solutions

Exam objective: Determine high-performing and/or scalable storage solutions.

Welcome to this guided lab. In the AWS Certified Solutions Architect - Associate (SAA-C03) exam, Domain 3 focuses on designing high-performing architectures. A key task is determining the right storage service (Object, Block, or File) and optimizing it for both scale and speed.

In this lab, you will configure two distinct AWS storage solutions: an Amazon S3 bucket (Object Storage) optimized for global data ingestion, and an Amazon EBS gp3 volume (Block Storage) independently tuned for high IOPS and throughput.


Prerequisites

Before starting this lab, ensure you have the following:

  • AWS Account: An active AWS account with administrative access.
  • CLI Tools: The AWS CLI (aws) installed and configured on your local machine.
  • IAM Permissions: Permissions to create and manage S3 buckets and EC2 volumes.
  • Knowledge: Basic understanding of Object Storage vs. Block Storage.

Learning Objectives

By completing this lab, you will be able to:

  1. Provision an Amazon S3 bucket and enable Transfer Acceleration for scalable, high-performing global uploads.
  2. Provision an Amazon EBS gp3 volume and independently scale its IOPS and throughput.
  3. Differentiate between the configuration mechanisms for object versus block storage performance.

Architecture Overview

The following diagram illustrates the two separate storage components you will be provisioning. S3 uses edge locations for rapid data ingestion, while EBS attaches to a specific Availability Zone for high-speed local compute access.

📊 Diagram: Storage Type Characteristics, comparing S3 object storage (global ingestion via edge locations) with EBS block storage (attachment within a single Availability Zone).

Step-by-Step Instructions

Step 1: Create a Scalable S3 Bucket

First, we will create an Amazon S3 bucket. S3 is a highly scalable object storage service designed for 99.999999999% (11 9s) of durability.

Run the following CLI command. Replace <YOUR_ACCOUNT_ID> with your actual AWS account ID or a random number to ensure the bucket name is globally unique.

```bash
aws s3api create-bucket \
  --bucket brainybee-storage-lab-<YOUR_ACCOUNT_ID> \
  --region us-east-1
```

[!TIP] S3 bucket names must be globally unique across all AWS accounts. If you receive a BucketAlreadyExists error, change the number suffix and try again.
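If you'd rather not type your account ID, one way to generate a likely-unique suffix is the current Unix timestamp. This is just a sketch following this lab's `brainybee-storage-lab` naming convention; any lowercase, DNS-compatible suffix works:

```bash
# Build a likely-unique bucket name from the current Unix timestamp.
# S3 bucket names must be 3-63 characters: lowercase letters, digits, hyphens.
SUFFIX=$(date +%s)
BUCKET="brainybee-storage-lab-${SUFFIX}"
echo "$BUCKET"
```

You can then pass `--bucket "$BUCKET"` to the `create-bucket` command above instead of editing the name by hand.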

Console alternative
  1. Navigate to the S3 Console.
  2. Click Create bucket.
  3. Enter brainybee-storage-lab-<YOUR_ACCOUNT_ID> as the Bucket name.
  4. Select US East (N. Virginia) us-east-1 as the AWS Region.
  5. Leave all other settings as default and click Create bucket.

📸 Screenshot: Creating an S3 bucket in the AWS Console.

Step 2: Enable S3 Transfer Acceleration

To meet high-performance requirements for users uploading data from around the world, we will enable S3 Transfer Acceleration. This feature routes traffic through Amazon CloudFront's globally distributed Edge Locations.

```bash
aws s3api put-bucket-accelerate-configuration \
  --bucket brainybee-storage-lab-<YOUR_ACCOUNT_ID> \
  --accelerate-configuration Status=Enabled
```

[!NOTE] S3 Transfer Acceleration incurs additional charges only if it successfully speeds up the transfer compared to standard internet routing.

Console alternative
  1. In the S3 Console, click on your newly created bucket.
  2. Navigate to the Properties tab.
  3. Scroll down to the Transfer acceleration section and click Edit.
  4. Select Enable and click Save changes.

📸 Screenshot: Enabling S3 Transfer Acceleration in bucket properties.
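Once acceleration is enabled, uploads can target the bucket's dedicated accelerate endpoint, which follows the pattern `<bucket>.s3-accelerate.amazonaws.com` instead of the regional `s3.<region>.amazonaws.com` hostname. A quick sketch of building that URL (the account ID below is a placeholder):

```bash
# The accelerate endpoint is a distinct hostname, not the regional one.
BUCKET="brainybee-storage-lab-123456789012"   # placeholder account ID
ACCEL_URL="https://${BUCKET}.s3-accelerate.amazonaws.com"
echo "$ACCEL_URL"
```

For CLI transfers, you can also opt in globally with `aws configure set default.s3.use_accelerate_endpoint true`, which routes `aws s3` commands through the accelerate endpoint automatically.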

Step 3: Provision a High-Performing EBS gp3 Volume

Next, we'll design a high-performing block storage solution. Legacy gp2 volumes tied performance directly to storage size. The newer gp3 volumes deliver a baseline of 3,000 IOPS and 125 MiB/s regardless of size, let us scale IOPS and throughput independently, and cost up to 20% less per GB than gp2.

We will create a small 50 GiB volume but explicitly provision it with 5,000 IOPS and 250 MiB/s throughput.

```bash
aws ec2 create-volume \
  --volume-type gp3 \
  --size 50 \
  --iops 5000 \
  --throughput 250 \
  --availability-zone us-east-1a
```

[!IMPORTANT] Take note of the VolumeId returned in the JSON output (e.g., vol-0123456789abcdef0), as you will need it for the checkpoint and teardown steps.

Console alternative
  1. Navigate to the EC2 Console.
  2. In the left navigation pane, under Elastic Block Store, select Volumes.
  3. Click Create volume.
  4. For Volume Type, select General Purpose SSD (gp3).
  5. Set Size to 50 GiB.
  6. Change IOPS from the baseline of 3,000 to 5,000.
  7. Change Throughput from the baseline of 125 to 250.
  8. Select us-east-1a for Availability Zone.
  9. Click Create volume.

📸 Screenshot: Provisioning a gp3 volume with custom IOPS and throughput.
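The console enforces gp3's limits for you; as a back-of-the-envelope check, the rules are up to 500 IOPS per provisioned GiB, hard-capped at 16,000 IOPS per volume. A quick shell sketch of that math for our 50 GiB volume:

```bash
SIZE_GIB=50
REQUESTED_IOPS=5000

# gp3 permits up to 500 IOPS per GiB, hard-capped at 16,000 per volume.
PER_GIB_LIMIT=$(( SIZE_GIB * 500 ))
MAX_IOPS=$(( PER_GIB_LIMIT < 16000 ? PER_GIB_LIMIT : 16000 ))

echo "Max IOPS for ${SIZE_GIB} GiB: ${MAX_IOPS}"
if [ "$REQUESTED_IOPS" -le "$MAX_IOPS" ]; then
  echo "Requested IOPS is within limits"
fi
```

So even though 50 GiB × 500 would allow 25,000 IOPS, the per-volume cap holds the ceiling at 16,000, and our requested 5,000 IOPS fits comfortably.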


Checkpoints

Verify that your resources have been provisioned with the correct performance configurations.

Checkpoint 1: Verify S3 Transfer Acceleration

Run the following command to ensure Transfer Acceleration is applied.

```bash
aws s3api get-bucket-accelerate-configuration \
  --bucket brainybee-storage-lab-<YOUR_ACCOUNT_ID>
```

Expected Output:

```json
{
    "Status": "Enabled"
}
```

Checkpoint 2: Verify EBS gp3 Performance Metrics

Run the following command to verify your block storage configuration. Replace <YOUR_VOLUME_ID> with the ID recorded in Step 3.

```bash
aws ec2 describe-volumes \
  --volume-ids <YOUR_VOLUME_ID> \
  --query "Volumes[*].{ID:VolumeId, Type:VolumeType, IOPS:Iops, Throughput:Throughput}" \
  --output table
```

Expected Output: You should see a table confirming Type gp3, IOPS 5000, and Throughput 250 (MiB/s).


Clean-Up / Teardown

[!WARNING] Remember to run the teardown commands to avoid ongoing charges. EBS volumes, even when unattached, accrue hourly charges based on provisioned storage, IOPS, and throughput.

1. Delete the EBS Volume

Ensure the volume is in the available state (not attached to an EC2 instance), then delete it:

```bash
aws ec2 delete-volume \
  --volume-id <YOUR_VOLUME_ID>
```

2. Delete the S3 Bucket

S3 buckets must be empty before they can be deleted. Since we did not upload any files, we can safely delete the bucket:

```bash
aws s3 rb s3://brainybee-storage-lab-<YOUR_ACCOUNT_ID> --force
```

Console alternative
  1. EBS: Go to EC2 -> Volumes, select your volume, click Actions -> Delete volume, and confirm.
  2. S3: Go to S3 -> Buckets, select your bucket, click Delete, type the bucket name to confirm, and click Delete bucket.

Troubleshooting

| Error / Issue | Cause | Solution |
| --- | --- | --- |
| `BucketAlreadyExists` | Another AWS customer has already taken this exact S3 bucket name. | Change the `<YOUR_ACCOUNT_ID>` suffix to a different random string and try again. |
| `InvalidParameterValue` (EBS) | You specified IOPS or throughput values outside the allowed limits for gp3. | gp3 allows up to 500 IOPS per GiB, hard-capped at 16,000 IOPS per volume, so a 50 GiB volume can be provisioned up to 16,000 IOPS. |
| `InvalidZone.NotFound` | The Availability Zone us-east-1a is not valid for your default region. | Ensure your AWS CLI is configured for us-east-1, or change the AZ in the command to match your current region (e.g., us-west-2a). |
| `MethodNotAllowed` (S3) | Attempting to use a Transfer Acceleration endpoint on a bucket with the feature disabled. | Verify Step 2 completed successfully using the Checkpoint 1 command. |
