
Hands-On Lab: Exploring the AWS Shared Responsibility Model


Estimated Time: 30 minutes | Difficulty: Guided | Cloud Provider: AWS

Welcome to this Hands-On Lab on the AWS Shared Responsibility Model. While security models can feel abstract, interacting with different AWS services helps clarify exactly what you (the customer) are responsible for configuring, versus what AWS manages behind the scenes.

In this lab, you will deploy an Amazon S3 bucket (a fully managed service) and an Amazon EC2 instance (Infrastructure as a Service). You will witness firsthand the difference between "Security OF the Cloud" (AWS responsibilities like physical facilities and hypervisors) and "Security IN the Cloud" (Customer responsibilities like guest operating systems and firewall configurations).


Prerequisites

Before starting this lab, ensure you have the following ready:

  • AWS Account: An active AWS account.
  • IAM Permissions: An IAM user or role with AdministratorAccess or specific permissions for EC2 and S3.
  • CLI Environment: The AWS CLI installed and configured (aws configure) with valid access keys.
  • Terminal/Command Prompt: Access to a bash or PowerShell terminal.
  • Prior Knowledge: Basic familiarity with cloud computing concepts.

Learning Objectives

By completing this lab, you will be able to:

  1. Differentiate between AWS responsibilities and customer responsibilities in a real-world deployment.
  2. Configure a Security Group to understand your role in managing network firewalls.
  3. Deploy managed vs. unmanaged services to see how the responsibility boundary shifts (S3 vs. EC2).

Architecture Overview

The following diagram illustrates the resources you will build and who is responsible for their security layers.

📸 Diagram: The lab resources (S3 bucket, Security Group, EC2 instance) and the security layers owned by AWS versus the customer.

The Shared Responsibility Stack

Here is a conceptual view of how the responsibilities stack up:

  • Customer — "Security IN the Cloud": customer data, IAM and access management, guest operating system patching, network and firewall configuration (Security Groups), client- and server-side encryption.
  • AWS — "Security OF the Cloud": the hypervisor and virtualization layer; software for compute, storage, database, and networking; hardware; and the physical facilities (Regions, Availability Zones, edge locations).

Step-by-Step Instructions

Step 1: Create a Managed Service Resource (Amazon S3)

First, we will create an Amazon S3 bucket. Because S3 is a managed service, AWS handles the operating system, storage drives, and patching. Your responsibility is configuring the data and who has access to it.

```bash
# Generate a unique bucket name using your account ID to avoid naming collisions
aws s3api create-bucket \
  --bucket brainybee-lab-srm-<YOUR_ACCOUNT_ID> \
  --region us-east-1
```

[!TIP] Replace <YOUR_ACCOUNT_ID> with your actual 12-digit AWS Account ID, or any unique string like your name and today's date.

Console alternative
  1. Log in to the AWS Management Console.
  2. Search for and navigate to S3.
  3. Click Create bucket.
  4. Enter a globally unique bucket name (e.g., brainybee-lab-srm-123456).
  5. Leave all other settings as default and click Create bucket.

📸 Screenshot: The "Create bucket" form in the S3 console.
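To see your side of the S3 boundary right away, you can inspect the bucket's Block Public Access settings from the CLI. This is a quick check, assuming the bucket name you chose in Step 1:

```bash
# Substitute the suffix you used when creating the bucket in Step 1
BUCKET="brainybee-lab-srm-<YOUR_ACCOUNT_ID>"

# AWS runs the storage fleet; deciding who can reach your data is on you.
# Newly created buckets have all four Block Public Access settings enabled by default.
aws s3api get-public-access-block --bucket "$BUCKET"
```

The JSON response shows four boolean flags (`BlockPublicAcls`, `IgnorePublicAcls`, `BlockPublicPolicy`, `RestrictPublicBuckets`); all `true` means the bucket rejects public access, which is the safe default for this lab.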

Step 2: Configure a Network Firewall (Security Group)

For services like Amazon EC2, AWS secures the physical network, but you are responsible for the virtual firewall configurations that allow traffic in and out of your specific instance.

```bash
# Create the Security Group
aws ec2 create-security-group \
  --group-name lab-srm-sg \
  --description "Security Group demonstrating customer firewall responsibility"

# Open Port 22 (SSH) to the internet
aws ec2 authorize-security-group-ingress \
  --group-name lab-srm-sg \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0
```

[!WARNING] In a production environment, opening Port 22 to 0.0.0.0/0 (the entire internet) is a severe security risk. This violates the customer's responsibility to properly restrict access. We are doing it here temporarily for demonstration.

Console alternative
  1. Navigate to EC2 in the AWS Console.
  2. In the left sidebar, under Network & Security, click Security Groups.
  3. Click Create security group.
  4. Name it lab-srm-sg and add an Inbound Rule for SSH (Port 22) with the source set to Anywhere-IPv4 (0.0.0.0/0).
  5. Click Create security group.

📸 Screenshot: Adding an inbound SSH rule in the Security Groups dashboard.
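You can confirm the rule landed as intended. This sketch assumes the `lab-srm-sg` group name from the commands above and uses a JMESPath query to show only the inbound rules:

```bash
# List the inbound rules of the group you just created --
# the firewall layer is entirely the customer's to configure.
aws ec2 describe-security-groups \
  --group-names lab-srm-sg \
  --query "SecurityGroups[0].IpPermissions" \
  --output json
```

You should see a single entry for TCP port 22 with `0.0.0.0/0` in its `IpRanges`.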

Step 3: Launch an Unmanaged Compute Resource (Amazon EC2)

Next, you will deploy an EC2 instance. This represents Infrastructure as a Service (IaaS). Once AWS provides the virtual machine, you are 100% responsible for patching the guest operating system (Amazon Linux, Windows, etc.) and updating your installed applications.

```bash
# Launch an Amazon Linux 2 t2.micro instance using the SG you created
aws ec2 run-instances \
  --image-id resolve:ssm:/aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2 \
  --instance-type t2.micro \
  --security-groups lab-srm-sg \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Lab-SRM-Instance}]'
```

Console alternative
  1. Navigate to EC2 and click Launch instance.
  2. Name the instance Lab-SRM-Instance.
  3. Under OS, select Amazon Linux 2023 (or Amazon Linux 2).
  4. Select t2.micro for Instance type.
  5. Under Network Settings, choose Select existing security group and pick lab-srm-sg.
  6. Click Launch instance.

📸 Screenshot: The EC2 Launch Instance summary panel.
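Rather than refreshing the console until the instance comes up, you can let the CLI block for you. This assumes the `Lab-SRM-Instance` name tag applied at launch:

```bash
# Poll describe-instances until the tagged instance reaches the 'running' state
aws ec2 wait instance-running \
  --filters "Name=tag:Name,Values=Lab-SRM-Instance"
```

The command returns silently once the instance is running, which usually takes under a minute for a t2.micro.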

Step 4: Analyze the Responsibilities

Now that your resources are running, review what you have deployed:

  1. The Physical Server: AWS is actively securing the data center in us-east-1 where your EC2 instance physically lives. You do not have access to this, nor do you need to hire physical security guards.
  2. The Hypervisor: AWS manages the hypervisor layer that isolates your VM from other customers.
  3. The OS (EC2): You are responsible for logging into the EC2 instance and running sudo yum update to apply security patches.
  4. The OS (S3): AWS patches the underlying storage servers powering your S3 bucket. You only manage the Bucket Policies.
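The list above is also actionable. As one example of exercising your firewall responsibility, you could replace the wide-open SSH rule from Step 2 with one scoped to your own address. This is a sketch: `checkip.amazonaws.com` is an AWS-operated IP echo service, and your public IP may change if you are on a dynamic connection.

```bash
# Discover your current public IPv4 address
MY_IP=$(curl -s https://checkip.amazonaws.com)

# Remove the 0.0.0.0/0 rule created in Step 2...
aws ec2 revoke-security-group-ingress \
  --group-name lab-srm-sg \
  --protocol tcp --port 22 --cidr 0.0.0.0/0

# ...and allow SSH only from your address
aws ec2 authorize-security-group-ingress \
  --group-name lab-srm-sg \
  --protocol tcp --port 22 --cidr "${MY_IP}/32"
```

A `/32` CIDR matches exactly one address, which is the narrowest rule you can write and the recommended pattern for ad-hoc SSH access.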

Checkpoints

Verify that your resources have been provisioned successfully.

Checkpoint 1: Verify the S3 Bucket

```bash
aws s3 ls | grep brainybee-lab-srm
```

Expected Result: You should see your bucket listed along with its creation timestamp.

Checkpoint 2: Verify the EC2 Instance State

```bash
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Lab-SRM-Instance" \
  --query "Reservations[*].Instances[*].{ID:InstanceId,State:State.Name}" \
  --output table
```

Expected Result: A table displaying your Instance ID and a state of running.


Clean-Up / Teardown

[!WARNING] Remember to run the teardown commands to avoid ongoing charges. If you leave an EC2 instance running outside the Free Tier, it will incur hourly costs.

Follow these exact steps to destroy all resources provisioned in this lab.

1. Terminate the EC2 Instance

First, grab the Instance ID from your checkpoint step, then terminate it:

```bash
# Replace <INSTANCE_ID> with your actual ID (e.g., i-0abcd1234efgh5678)
aws ec2 terminate-instances --instance-ids <INSTANCE_ID>
```

(Wait about 2-3 minutes for the instance to fully terminate before proceeding to delete the Security Group).

2. Delete the Security Group

```bash
aws ec2 delete-security-group --group-name lab-srm-sg
```

3. Delete the S3 Bucket

```bash
aws s3 rb s3://brainybee-lab-srm-<YOUR_ACCOUNT_ID> --force
```

Troubleshooting

| Common Error | Likely Cause | Solution |
| --- | --- | --- |
| `BucketAlreadyExists` | Bucket names must be globally unique across all AWS accounts. | Add more random characters to the end of your bucket name in Step 1. |
| `DependencyViolation` when deleting the SG | You tried to delete the Security Group while the EC2 instance was still running or shutting down. | Wait for the EC2 instance state to become `terminated`, then run the SG deletion command again. |
| `UnauthorizedOperation` | Your IAM user lacks permissions to run EC2 or S3 commands. | Contact your AWS administrator to grant `AmazonEC2FullAccess` and `AmazonS3FullAccess`. |

Cost Estimate

  • Amazon S3: Empty buckets incur $0.00 in storage fees.
  • Amazon EC2: A t2.micro instance is eligible for the AWS Free Tier (up to 750 hours/month). If you are outside the Free Tier, running it for this 30-minute lab will cost less than $0.01.
  • Total Estimated Spend: ~$0.00

Concept Review

As seen in this lab, the AWS Shared Responsibility Model fundamentally dictates who does what. The line between your responsibility and AWS's shifts depending on the service model:

| Service Type | Example | AWS Responsibility | Customer Responsibility |
| --- | --- | --- | --- |
| Infrastructure as a Service (IaaS) | Amazon EC2 | Physical hardware, networking infrastructure, virtualization hypervisor. | Guest OS (updates/patches), application software, firewall config (Security Groups), data encryption. |
| Platform as a Service (PaaS) | Amazon RDS | Hardware, hypervisor, OS installation, automated database backups, database engine patching. | Database settings, logical access controls, application optimization. |
| Software as a Service (SaaS) | Amazon S3 | Everything from hardware up to the application logic and availability. | IAM policies, bucket policies, enabling encryption, object versioning. |

By successfully launching an EC2 instance and an S3 bucket, and then managing the firewall via a Security Group, you have successfully performed tasks on the Customer side ("Security IN the cloud") while relying on AWS for the physical provisioning ("Security OF the cloud").
