
Hands-On Lab: Building GenAI Applications with Amazon Bedrock

AWS infrastructure and technologies for building GenAI applications

Prerequisites

Before starting this lab, ensure you have the following ready:

  • AWS Account: An active AWS account with billing enabled.
  • IAM Permissions: An IAM User or Role with AmazonBedrockFullAccess and AmazonS3FullAccess.
  • CLI Tools: The AWS CLI (aws) installed and configured via aws configure.
  • Model Access: Amazon Bedrock requires explicit opt-in for Foundation Models.
    • Navigate to AWS Management Console > Amazon Bedrock > Model access.
    • Click Manage model access and request access to Amazon Titan Text Lite and Anthropic Claude 3 Haiku.
    • Wait for the access status to change to Granted before proceeding.

Learning Objectives

By completing this 30-minute guided lab, you will be able to:

  1. Identify and list available Foundation Models (FMs) using the AWS CLI.
  2. Provision supporting AWS infrastructure (Amazon S3) to store GenAI operational logs.
  3. Invoke Bedrock FMs programmatically to generate text.
  4. Evaluate the cost and performance tradeoffs of different managed GenAI models.

Architecture Overview

This lab utilizes Amazon Bedrock's serverless architecture to abstract the underlying infrastructure required to run Foundation Models. You will interact with a single API to access models from multiple providers.

(Diagram: a single Bedrock API endpoint providing access to multiple Foundation Models.)

Step-by-Step Instructions

Step 1: Create an S3 Bucket for Prompt Logging

To ensure our GenAI application is observable and compliant, we will create an S3 bucket to store our model invocations and outputs.

bash
aws s3 mb s3://brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID> --region <YOUR_REGION>

[!TIP] S3 bucket names must be globally unique. Replace <YOUR_ACCOUNT_ID> with your 12-digit AWS account number and <YOUR_REGION> with your current region (e.g., us-east-1).

Console alternative
  1. Navigate to Amazon S3 in the AWS Management Console.
  2. Click Create bucket.
  3. Enter brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID> as the bucket name.
  4. Select your preferred Region.
  5. Leave all other settings as default and click Create bucket.

📸 Screenshot: Terminal showing make_bucket: brainybee-lab-bedrock-logs-... success message.
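Before running the command, you can sanity-check your chosen bucket name. The sketch below is a simplified Python check covering the most common S3 naming rules (3-63 characters; lowercase letters, digits, and hyphens; must start and end with a letter or digit). It is intentionally incomplete, e.g. it does not reject IP-address-style names.

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    """Simplified check of the core S3 bucket-naming rules:
    3-63 chars, lowercase letters/digits/hyphens only,
    starting and ending with a letter or digit."""
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name))

print(is_valid_bucket_name("brainybee-lab-bedrock-logs-123456789012"))  # True
print(is_valid_bucket_name("BrainyBee-Logs"))  # False: uppercase not allowed
```

A failed check here corresponds to the InvalidBucketName error covered in Troubleshooting below.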

Step 2: List Available Foundation Models

Amazon Bedrock provides various models tailored to different tasks (text, image, embeddings). Let's query the API to see which models from the Amazon provider are available in your Region.

bash
aws bedrock list-foundation-models \
  --by-provider Amazon \
  --query "modelSummaries[*].modelId" \
  --region <YOUR_REGION>

[!NOTE] Notice the varying capabilities and model versions. Selecting the right model involves evaluating factors like cost, modality, latency, and input/output length constraints.

Console alternative
  1. Navigate to Amazon Bedrock.
  2. In the left navigation pane, under Foundation models, select Base models.
  3. Filter by the Amazon provider to view available Titan models.
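The --query expression above is a JMESPath filter applied client-side by the CLI. If you are scripting against the JSON response instead, the same filtering is plain Python; the response dict below is a hand-written sample shaped like the list-foundation-models output, not real API output.

```python
# Hand-written sample shaped like the list-foundation-models response
response = {
    "modelSummaries": [
        {"modelId": "amazon.titan-text-lite-v1", "providerName": "Amazon"},
        {"modelId": "amazon.titan-text-express-v1", "providerName": "Amazon"},
        {"modelId": "anthropic.claude-3-haiku-20240307-v1:0", "providerName": "Anthropic"},
    ]
}

# Equivalent of --by-provider Amazon plus the query modelSummaries[*].modelId
amazon_ids = [m["modelId"] for m in response["modelSummaries"]
              if m["providerName"] == "Amazon"]
print(amazon_ids)  # → ['amazon.titan-text-lite-v1', 'amazon.titan-text-express-v1']
```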

Step 3: Prepare the Inference Payload

To avoid complex string escaping in the CLI, we will create a JSON file containing our prompt payload. This prompt is designed to test the model's ability to explain GenAI business benefits.

bash
echo '{
  "inputText": "Explain three advantages of using AWS GenAI services for a business in simple terms.",
  "textGenerationConfig": {
    "maxTokenCount": 200,
    "temperature": 0.5,
    "topP": 0.9
  }
}' > prompt.json

[!TIP] The temperature parameter controls the creativity of the model. A value of 0.5 provides a balance between deterministic, accurate responses and creative variance.
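If you are building the payload from code rather than the shell, constructing it as a dict and serializing it with the json module avoids quoting mistakes entirely. A minimal Python sketch that produces the same prompt.json:

```python
import json

# Same payload as the echo command above, built as a dict
payload = {
    "inputText": "Explain three advantages of using AWS GenAI services "
                 "for a business in simple terms.",
    "textGenerationConfig": {
        "maxTokenCount": 200,   # cap on the number of generated tokens
        "temperature": 0.5,     # lower = more deterministic, higher = more creative
        "topP": 0.9,            # nucleus-sampling cutoff
    },
}

with open("prompt.json", "w") as f:
    json.dump(payload, f, indent=2)
```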

Step 4: Invoke the Amazon Titan Model

Now, send the payload to Amazon Titan Text Lite. This model is highly cost-effective and optimized for standard text generation tasks.

bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  --region <YOUR_REGION> \
  titan_output.json

Console alternative
  1. In Amazon Bedrock, navigate to Playgrounds > Text.
  2. Click Select model and choose Amazon > Titan Text G1 - Lite.
  3. In the text box, type: "Explain three advantages of using AWS GenAI services for a business in simple terms."
  4. Click Run.

📸 Screenshot: Terminal showing the invoke-model response metadata, with titan_output.json created in the working directory.

Step 5: Review the Model Output

Read the generated file to view the AI's response and the token usage statistics. Token usage directly influences the cost tradeoffs of AWS GenAI services.

bash
cat titan_output.json

[!IMPORTANT] Look closely at the response. You should see token-count metadata alongside a results array containing the generated text. The metadata reveals how many input tokens were processed and how many output tokens were generated.
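To extract those fields programmatically, the sketch below parses a hand-written sample shaped like a Titan Text response (an inputTextTokenCount plus a results array); the token counts and text here are made up for illustration, so your actual values will differ.

```python
import json

# Hand-written sample shaped like a Titan Text response (values are illustrative)
sample = json.loads("""
{
  "inputTextTokenCount": 17,
  "results": [
    {
      "tokenCount": 142,
      "outputText": "1. Faster development...",
      "completionReason": "FINISH"
    }
  ]
}
""")

generated = sample["results"][0]["outputText"]
usage = (sample["inputTextTokenCount"], sample["results"][0]["tokenCount"])
print(generated)
print("input/output tokens:", usage)
```

The two numbers in usage are exactly the cost drivers mentioned above: input tokens processed and output tokens generated.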

Checkpoints

Verify your progress by running the following validation command:

Checkpoint 1: Verify Payload Execution

bash
grep -o '"inputTextTokenCount":[0-9]*' titan_output.json

Expected Result: "inputTextTokenCount":<number> (typically around 15-25 tokens, depending on how the model tokenizes the prompt).

Checkpoint 2: S3 Bucket Creation

bash
aws s3 ls | grep brainybee-lab-bedrock-logs

Expected Result: A timestamp followed by your unique bucket name.

Teardown / Clean-Up

[!WARNING] Remember to run the teardown commands to avoid ongoing charges. While Bedrock text generation is pay-per-request, S3 buckets incur minimal storage charges. If you provisioned dedicated throughput, you MUST delete it to avoid high hourly costs.

Execute the following commands to delete the resources created in this lab:

bash
# 1. Delete the local JSON files
rm prompt.json titan_output.json

# 2. Delete the S3 bucket and its contents
aws s3 rb s3://brainybee-lab-bedrock-logs-<YOUR_ACCOUNT_ID> --force

Troubleshooting

| Common Error | Cause | Fix |
| --- | --- | --- |
| AccessDeniedException | You have not requested model access in the Bedrock console. | Go to Bedrock > Model access, click Manage model access, check the models, and save. |
| ValidationException | The payload structure in prompt.json does not match the specific model's requirements. | Different providers (Anthropic vs. Amazon) require different JSON structures. Review the Bedrock API docs for the specific model-id. |
| UnrecognizedClientException | AWS CLI is not configured or credentials have expired. | Run aws configure to re-authenticate with valid access keys. |
| InvalidBucketName | S3 bucket name contains uppercase letters or invalid characters. | Ensure <YOUR_ACCOUNT_ID> is purely numeric and the name is all lowercase. |

Concept Review

AWS provides multiple entry points to build Generative AI applications depending on your team's machine learning expertise and desired level of control.

Generative AI Service Comparison

| Service | Barrier to Entry | Customization | Primary Use Case |
| --- | --- | --- | --- |
| Amazon Q | Low | Minimal (RAG) | Ready-to-use AI assistant for business tasks and coding. |
| Amazon Bedrock | Medium | High (fine-tuning, RAG) | Serverless API access to leading Foundation Models. |
| SageMaker JumpStart | High | Maximum (full weights) | Hosting open-source models with full control over the ML pipeline. |

The Foundation Model Lifecycle

When transitioning from simply consuming an API to actively customizing a model, understanding the ML development lifecycle is crucial. The pipeline below illustrates the journey of an AI model from raw data to a deployed application.

(Diagram: the Foundation Model lifecycle, from raw data through pre-training, model selection, fine-tuning, and deployment, to a feedback loop.)

By utilizing Amazon Bedrock, AWS handles the heavy lifting of the Pre-training and Deployment phases, allowing developers to focus purely on Model Selection, Fine-Tuning (via Bedrock Custom Models), and integrating the Feedback Loop into their applications.
