# Lab: Modernizing Monolithic Workloads using Serverless Decoupling
Determine opportunities for modernization and enhancements
> [!WARNING]
> Remember to run the teardown commands at the end of this lab to avoid ongoing charges to your AWS account. This lab is designed to fit within the AWS Free Tier where possible.
This lab guides you through the process of identifying a modernization opportunity within a legacy application and refactoring a specific component—file metadata processing—into a decoupled, serverless architecture.
## Prerequisites
- An active AWS Account.
- AWS CLI installed and configured with Administrator privileges.
- Local terminal or shell (bash, zsh, or PowerShell).
- Basic familiarity with Python and JSON.
- IAM permissions to create S3 buckets, Lambda functions, and DynamoDB tables.
## Learning Objectives
- Identify modernization opportunities for legacy monolithic code.
- Implement a decoupled architecture using Amazon S3 event notifications.
- Configure a serverless compute function (AWS Lambda) to process data.
- Utilize a purpose-built NoSQL database (Amazon DynamoDB) for metadata storage.
## Architecture Overview
In the legacy version, a single EC2 instance polled a folder, processed each file, and wrote to a local SQL database. In the modernized architecture, these responsibilities are decoupled:

- Amazon S3 receives uploaded files (storage).
- An S3 event notification invokes an AWS Lambda function (compute).
- The Lambda function writes the file metadata to Amazon DynamoDB (persistence).
## Step-by-Step Instructions
### Step 1: Create the Purpose-Built Database
We will replace the legacy relational table with a DynamoDB table designed for high-scale metadata storage.
```shell
aws dynamodb create-table \
  --table-name modernized-metadata-store \
  --attribute-definitions AttributeName=FileID,AttributeType=S \
  --key-schema AttributeName=FileID,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST
```

**Console alternative**

- Navigate to DynamoDB > Tables > Create table.
- Table name: `modernized-metadata-store`.
- Partition key: `FileID` (String).
- Table settings: Default (Pay-per-request).
- Click Create table.
### Step 2: Create the S3 Modernization Bucket
The bucket will serve as our decoupled entry point, replacing local disk storage.
```shell
# Replace <YOUR_UNIQUE_ID> with a random string
aws s3 mb s3://modernization-lab-source-<YOUR_UNIQUE_ID>
```

**Console alternative**

- Navigate to S3 > Create bucket.
- Bucket name: `modernization-lab-source-<unique-id>`.
- Keep defaults and click Create bucket.
### Step 3: Create the IAM Execution Role
Lambda needs permission to read from S3 and write to DynamoDB.
```shell
# 1. Create trust policy file
echo '{"Version": "2012-10-17","Statement": [{"Effect": "Allow","Principal": {"Service": "lambda.amazonaws.com"},"Action": "sts:AssumeRole"}]}' > trust-policy.json

# 2. Create the role
aws iam create-role --role-name LambdaModernizationRole --assume-role-policy-document file://trust-policy.json

# 3. Attach AWS Managed Policy for basic execution
aws iam attach-role-policy --role-name LambdaModernizationRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
```

> [!IMPORTANT]
> In a production environment, apply the principle of least privilege. For this lab, ensure you also grant `s3:GetObject` and `dynamodb:PutItem` permissions to this role.
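One way to scope those extra permissions is an inline policy restricted to the lab's bucket and table. The following is a sketch that builds such a policy document; the region, account, and bucket placeholders are assumptions you must substitute with your own values before attaching it (for example with `aws iam put-role-policy`):

```python
import json

# Placeholders: substitute your own bucket suffix, region, and account ID.
bucket = "modernization-lab-source-<YOUR_UNIQUE_ID>"
table_arn = "arn:aws:dynamodb:<REGION>:<ACCOUNT>:table/modernized-metadata-store"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Read uploaded objects from the lab bucket only
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
        {   # Write metadata items to the lab table only
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": table_arn,
        },
    ],
}
print(json.dumps(policy, indent=2))
```

If you do attach an inline policy, remember to remove it (`aws iam delete-role-policy`) before deleting the role during teardown.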
### Step 4: Deploy the Serverless Logic
Create a `lambda_function.py` that extracts file metadata from the S3 event and stores it in DynamoDB:

```python
import json
import urllib.parse

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('modernized-metadata-store')

def lambda_handler(event, context):
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name']
    # S3 URL-encodes object keys in event payloads (e.g. spaces arrive as '+')
    key = urllib.parse.unquote_plus(record['object']['key'])
    size = record['object']['size']
    table.put_item(Item={'FileID': key, 'Bucket': bucket, 'Size': size})
    return {'statusCode': 200, 'body': json.dumps('Metadata Recorded')}
```

Package the function code:

```shell
zip function.zip lambda_function.py
```
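Before deploying, you can sanity-check the metadata-extraction logic offline. The sample payload below is a hypothetical event trimmed to the fields the handler reads, mirroring the shape of an S3 notification record:

```python
import urllib.parse

# Hypothetical S3 event payload, trimmed to the fields the handler uses.
sample_event = {
    "Records": [{
        "s3": {
            "bucket": {"name": "modernization-lab-source-abc123"},
            "object": {"key": "reports/q1+summary.txt", "size": 2048},
        }
    }]
}

record = sample_event["Records"][0]["s3"]
bucket = record["bucket"]["name"]
# unquote_plus decodes the URL-encoded key; '+' becomes a space
key = urllib.parse.unquote_plus(record["object"]["key"])
size = record["object"]["size"]
print(bucket, key, size)
```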
```shell
aws lambda create-function --function-name FileProcessor \
  --zip-file fileb://function.zip --handler lambda_function.lambda_handler \
  --runtime python3.9 --role arn:aws:iam::<YOUR_ACCOUNT_ID>:role/LambdaModernizationRole
```

### Step 5: Configure the Event Trigger
Connect S3 to Lambda to enable event-driven decoupling.
```shell
aws lambda add-permission --function-name FileProcessor --statement-id s3-invoke \
  --action "lambda:InvokeFunction" --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::modernization-lab-source-<YOUR_UNIQUE_ID>

# Note: S3 notification configuration usually requires a JSON config file
aws s3api put-bucket-notification-configuration --bucket modernization-lab-source-<YOUR_UNIQUE_ID> \
  --notification-configuration '{"LambdaFunctionConfigurations": [{"LambdaFunctionArn": "arn:aws:lambda:<REGION>:<ACCOUNT>:function:FileProcessor", "Events": ["s3:ObjectCreated:*"]}]}'
```

## Checkpoints
- **Upload Test:** Run `aws s3 cp test.txt s3://modernization-lab-source-<YOUR_UNIQUE_ID>/`.
- **Execution Check:** Open the `/aws/lambda/FileProcessor` log group in CloudWatch Logs. You should see a successful invocation.
- **Data Verification:** Run `aws dynamodb scan --table-name modernized-metadata-store`. You should see an entry for `test.txt`.
## Teardown
```shell
# 1. Empty and delete the S3 bucket
aws s3 rm s3://modernization-lab-source-<YOUR_UNIQUE_ID> --recursive
aws s3 rb s3://modernization-lab-source-<YOUR_UNIQUE_ID>

# 2. Delete the Lambda function
aws lambda delete-function --function-name FileProcessor

# 3. Delete the DynamoDB table
aws dynamodb delete-table --table-name modernized-metadata-store

# 4. Delete the IAM role
aws iam detach-role-policy --role-name LambdaModernizationRole --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
aws iam delete-role --role-name LambdaModernizationRole
```

## Troubleshooting
| Problem | Potential Cause | Fix |
|---|---|---|
| Lambda not triggered | Missing invoke permission | Ensure the `lambda:InvokeFunction` permission is added to the Lambda function's resource-based policy for the S3 principal. |
| DynamoDB Access Denied | IAM Role missing permissions | Add a policy to LambdaModernizationRole allowing dynamodb:PutItem on the specific table ARN. |
| S3 Bucket Name Error | Global namespace conflict | Bucket names must be unique globally. Add more random characters to your bucket name. |
## Stretch Challenge
Modernize with EventBridge: Instead of a direct S3-to-Lambda trigger, enable Amazon EventBridge on the S3 bucket. Configure an EventBridge Rule to route the event to Lambda. This allows multiple consumers (e.g., a logging function AND the processing function) to receive the same event without modifying the S3 configuration.
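As a starting point, the rule's event pattern might look like the following sketch. It assumes EventBridge notifications have been enabled on the bucket, and the bucket name is a placeholder:

```python
import json

# Sketch of an EventBridge event pattern matching object-creation events
# from the lab bucket. The bucket name is a placeholder; substitute your own.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {"name": ["modernization-lab-source-<YOUR_UNIQUE_ID>"]}
    },
}
print(json.dumps(event_pattern, indent=2))
```

You would pass this JSON to `aws events put-rule --event-pattern '...'` and then register the Lambda function (and any additional consumers) as targets of the rule.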
## Cost Estimate
- Amazon S3: $0.00 (Free Tier includes 5GB and 20,000 GET/2,000 PUT requests).
- AWS Lambda: $0.00 (Free Tier includes 1M requests/month).
- Amazon DynamoDB: $0.00 (On-demand mode is minimal for this lab; Free Tier includes 25GB).
- Total: ~$0.00 for standard lab usage.
## Concept Review
### Modernization Strategies (The 7Rs)
Three of the Rs, ordered by increasing modernization depth:

| Strategy | Typical Landing Zone | Modernization Depth |
|---|---|---|
| Rehost | Lift and Shift (EC2) | Shallow |
| Replatform | Managed Services (RDS/Beanstalk) | Intermediate |
| Refactor | Serverless/Containers (Lambda/Fargate) | Deep |
| Feature | Legacy (Monolith) | Modernized (Serverless) |
|---|---|---|
| Scalability | Manual/Auto-scaling Groups | Inherently Scalable (Per-request) |
| Cost Model | Pay for Idle (Always-on instances) | Pay for Value (Execution time) |
| Maintenance | OS Patching & Middleware | No Infrastructure Management |
| Coupling | Tight (Internal method calls) | Loose (Event-driven asynchronous) |