Hands-On Lab: Building GenAI Applications with Amazon Bedrock
AWS infrastructure and technologies for building GenAI applications
Prerequisites
Before starting this lab, ensure you have the following ready:
- AWS Account: An active AWS account with billing enabled.
- IAM Permissions: An IAM user or role with the `AmazonBedrockFullAccess` and `AmazonS3FullAccess` policies attached.
- CLI Tools: The AWS CLI (`aws`) installed and configured via `aws configure`.
- Model Access: Amazon Bedrock requires explicit opt-in for Foundation Models.
- Navigate to AWS Management Console > Amazon Bedrock > Model access.
- Click Manage model access and request access to Amazon Titan Text Lite and Anthropic Claude 3 Haiku.
- Wait for the access status to change to Granted before proceeding.
Learning Objectives
By completing this 30-minute guided lab, you will be able to:
- Identify and list available Foundation Models (FMs) using the AWS CLI.
- Provision supporting AWS infrastructure (Amazon S3) to store GenAI operational logs.
- Invoke Bedrock FMs programmatically to generate text.
- Evaluate the cost and performance tradeoffs of different managed GenAI models.
Architecture Overview
This lab utilizes Amazon Bedrock's serverless architecture to abstract the underlying infrastructure required to run Foundation Models. You will interact with a single API to access multiple different models.
Step-by-Step Instructions
Step 1: Create an S3 Bucket for Prompt Logging
To ensure our GenAI application is observable and compliant, we will create an S3 bucket to store our model invocations and outputs.
```bash
aws s3 mb s3://brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID> --region <YOUR_REGION>
```
[!TIP] S3 bucket names must be globally unique. Replace `<YOUR_ACCOUNT_ID>` with your 12-digit AWS account number and `<YOUR_REGION>` with your current Region (e.g., `us-east-1`).
▶ Console alternative
- Navigate to Amazon S3 in the AWS Management Console.
- Click Create bucket.
- Enter `brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID>` as the bucket name.
- Select your preferred Region.
- Leave all other settings as default and click Create bucket.
📸 Screenshot: Terminal showing the `make_bucket: brainybee-lab-bedrock-logs-...` success message.
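If the `mb` command fails, the bucket name itself is a common culprit. The core S3 naming rules (3-63 characters; lowercase letters, digits, hyphens, and dots; must start and end with a letter or digit) can be checked locally before you run the command. This is an illustrative sketch covering the most common rules, not an exhaustive validator:

```python
import re

# Core S3 bucket-name rules: 3-63 chars, lowercase letters/digits/hyphens/dots,
# starting and ending with a letter or digit. (Not an exhaustive rule set.)
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if the name passes the core S3 naming rules."""
    return bool(BUCKET_NAME_RE.match(name))

print(is_valid_bucket_name("brainybee-lab-bedrock-logs-123456789012"))  # True
print(is_valid_bucket_name("BrainyBee-Logs"))  # False: uppercase not allowed
```

This catches the `InvalidBucketName` errors listed in the Troubleshooting section before a round trip to AWS.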
Step 2: List Available Foundation Models
Amazon Bedrock provides various models tailored for different tasks (text, image, embeddings). Let's query the API to see which Amazon-provider models are available in your region.
```bash
aws bedrock list-foundation-models \
    --by-provider Amazon \
    --query "modelSummaries[*].modelId" \
    --region <YOUR_REGION>
```
[!NOTE] Notice the varying capabilities and model versions. Selecting the right model involves evaluating factors like cost, modality, latency, and input/output length constraints.
▶ Console alternative
- Navigate to Amazon Bedrock.
- In the left navigation pane, under Foundation models, select Base models.
- Filter by the Amazon provider to view available Titan models.
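The JSON returned by `list-foundation-models` can also be filtered locally. The sketch below assumes a response shape with a `modelSummaries` array whose entries carry `modelId` and `outputModalities` fields; the hard-coded sample here is illustrative stand-in data, not live API output:

```python
import json

# Illustrative sample shaped like a list-foundation-models response;
# real output comes from the AWS CLI, not this hard-coded data.
sample_response = json.loads("""
{
  "modelSummaries": [
    {"modelId": "amazon.titan-text-lite-v1", "outputModalities": ["TEXT"]},
    {"modelId": "amazon.titan-embed-text-v1", "outputModalities": ["EMBEDDING"]},
    {"modelId": "amazon.titan-image-generator-v1", "outputModalities": ["IMAGE"]}
  ]
}
""")

# Keep only models that generate text output.
text_models = [
    m["modelId"]
    for m in sample_response["modelSummaries"]
    if "TEXT" in m.get("outputModalities", [])
]
print(text_models)  # ['amazon.titan-text-lite-v1']
```

The same filtering idea applies to modality, provider, or lifecycle status when choosing a model for your workload.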
Step 3: Prepare the Inference Payload
To avoid complex string escaping in the CLI, we will create a JSON file containing our prompt payload. This prompt is designed to test the model's ability to explain GenAI business benefits.
```bash
echo '{
  "inputText": "Explain three advantages of using AWS GenAI services for a business in simple terms.",
  "textGenerationConfig": {
    "maxTokenCount": 200,
    "temperature": 0.5,
    "topP": 0.9
  }
}' > prompt.json
```
[!TIP] The `temperature` parameter controls the creativity of the model. A value of `0.5` provides a balance between deterministic, accurate responses and creative variance.
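The effect of `temperature` can be seen with a toy softmax over next-token scores: dividing scores by the temperature before normalizing sharpens the distribution at low values and flattens it at high values. This is a conceptual sketch of the sampling math, not Bedrock's internal implementation:

```python
import math

def softmax_with_temperature(scores, temperature):
    """Convert raw scores into probabilities; lower temperature -> sharper."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                          # toy next-token scores
sharp = softmax_with_temperature(scores, 0.2)     # near-deterministic
balanced = softmax_with_temperature(scores, 0.5)  # the lab's setting
flat = softmax_with_temperature(scores, 2.0)      # more creative variance
print([round(p, 3) for p in sharp])
print([round(p, 3) for p in flat])
```

At temperature 0.2 the top token dominates almost completely, while at 2.0 the probabilities spread out, which is why higher settings produce more varied wording.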
Step 4: Invoke the Amazon Titan Model
Now, send the payload to Amazon Titan Text Lite. This model is highly cost-effective and optimized for standard text generation tasks.
```bash
aws bedrock-runtime invoke-model \
    --model-id amazon.titan-text-lite-v1 \
    --body file://prompt.json \
    --cli-binary-format raw-in-base64-out \
    --accept "application/json" \
    --content-type "application/json" \
    --region <YOUR_REGION> \
    titan_output.json
```
▶ Console alternative
- In Amazon Bedrock, navigate to Playgrounds > Text.
- Click Select model and choose Amazon > Titan Text G1 - Lite.
- In the text box, type: "Explain three advantages of using AWS GenAI services for a business in simple terms."
- Click Run.
📸 Screenshot: Terminal showing the invoke-model command completing and writing the response body to `titan_output.json`.
Step 5: Review the Model Output
Read the generated file to view the AI's response and the token usage statistics. Token usage directly influences the cost tradeoffs of AWS GenAI services.
```bash
cat titan_output.json
```
[!IMPORTANT] Look closely at the response. You should see an array of generated text, followed by metadata. The metadata reveals how many input tokens were processed and how many output tokens were generated.
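Those token counts can also be extracted programmatically, which is useful for cost tracking. The sketch below assumes the Titan Text response shape (a top-level `inputTextTokenCount` plus a `results` array whose entries carry `tokenCount` and `outputText`); the embedded JSON is a stand-in for a real `titan_output.json`:

```python
import json

# Stand-in for titan_output.json; a real file comes from the invoke-model call.
sample = json.loads("""
{
  "inputTextTokenCount": 18,
  "results": [
    {"tokenCount": 142,
     "outputText": "1. Faster development...",
     "completionReason": "FINISH"}
  ]
}
""")

# Sum token usage across all results for billing estimates.
input_tokens = sample["inputTextTokenCount"]
output_tokens = sum(r["tokenCount"] for r in sample["results"])
print(f"input tokens: {input_tokens}, output tokens: {output_tokens}")
```

Multiplying these counts by the per-1,000-token prices on the Bedrock pricing page gives the cost of each invocation.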
Checkpoints
Verify your progress by running the following validation commands:
Checkpoint 1: Verify Payload Execution
```bash
grep -o '"inputTextTokenCount":[0-9]*' titan_output.json
```
Expected Result: `"inputTextTokenCount":<number>` (the number should be around 15-25 tokens, depending on how the prompt is tokenized).
Checkpoint 2: S3 Bucket Creation
```bash
aws s3 ls | grep brainybee-lab-bedrock-logs
```
Expected Result: A timestamp followed by your unique bucket name.
Teardown / Clean-Up
[!WARNING] Remember to run the teardown commands to avoid ongoing charges. While Bedrock text generation is pay-per-request, S3 buckets incur minimal storage charges. If you provisioned dedicated throughput, you MUST delete it to avoid high hourly costs.
Execute the following commands to delete the resources created in this lab:
```bash
# 1. Delete the local JSON files
rm prompt.json titan_output.json

# 2. Delete the S3 bucket
aws s3 rb s3://brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID> --force
```
Troubleshooting
| Common Error | Cause | Fix |
|---|---|---|
| `AccessDeniedException` | You have not requested model access in the Bedrock console. | Go to Bedrock > Model access, click Manage model access, check the models, and save. |
| `ValidationException` | The payload structure in `prompt.json` does not match the specific model's requirements. | Different providers (Anthropic vs. Amazon) require different JSON structures. Review the Bedrock API docs for the specific model ID. |
| `UnrecognizedClientException` | The AWS CLI is not configured or credentials have expired. | Run `aws configure` to re-authenticate with valid access keys. |
| `InvalidBucketName` | The S3 bucket name contains uppercase letters or invalid characters. | Ensure `<YOUR_ACCOUNT_ID>` is purely numeric and the name is all lowercase. |
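The `ValidationException` row is worth illustrating: Bedrock exposes a single invoke API, but each provider defines its own request body. The sketch below builds a Titan-style body (matching `prompt.json` from Step 3) alongside an Anthropic Claude 3 Messages-style body; the Anthropic field names here reflect the commonly documented Claude-on-Bedrock format, so verify them against the Bedrock API reference for your exact model ID before relying on them:

```python
import json

def titan_body(prompt, max_tokens=200, temperature=0.5, top_p=0.9):
    """Request body matching the prompt.json structure used in Step 3."""
    return {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
            "topP": top_p,
        },
    }

def claude_body(prompt, max_tokens=200, temperature=0.5):
    """Claude 3 Messages-style body; field names assumed from the
    Anthropic-on-Bedrock format -- verify against the Bedrock docs."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "temperature": temperature,
        "messages": [{"role": "user", "content": prompt}],
    }

prompt = ("Explain three advantages of using AWS GenAI services "
          "for a business in simple terms.")
print(json.dumps(titan_body(prompt), indent=2))
print(json.dumps(claude_body(prompt), indent=2))
```

Sending a Titan-shaped body to a Claude model ID (or vice versa) is exactly what triggers `ValidationException`, even when the credentials and model access are correct.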
Concept Review
AWS provides multiple entry points to build Generative AI applications depending on your team's machine learning expertise and desired level of control.
Generative AI Service Comparison
| Service | Barrier to Entry | Customization | Primary Use Case |
|---|---|---|---|
| Amazon Q | Low | Minimal (RAG) | Ready-to-use AI assistant for business tasks and coding. |
| Amazon Bedrock | Medium | High (Fine-tuning, RAG) | Serverless API access to leading Foundation Models. |
| SageMaker JumpStart | High | Maximum (Full weights) | Hosting open-source models with full control over the ML pipeline. |
The Foundation Model Lifecycle
When transitioning from simply consuming an API to actively customizing a model, understanding the ML development lifecycle is crucial. This pipeline takes an AI model from raw data through pre-training, fine-tuning, evaluation, and deployment to a running application with a feedback loop.
By utilizing Amazon Bedrock, AWS handles the heavy lifting of the Pre-training and Deployment phases, allowing developers to focus purely on Model Selection, Fine-Tuning (via Bedrock Custom Models), and integrating the Feedback Loop into their applications.