Hands-On Lab: Getting Started with AWS GenAI Infrastructure using Amazon Bedrock

AWS infrastructure and technologies for building GenAI applications

Welcome to this guided hands-on lab. As covered in the AWS Certified AI Practitioner (AIF-C01) exam guide, understanding AWS infrastructure and technologies for building Generative AI (GenAI) applications is critical. In this lab, you will interact with Amazon Bedrock, AWS's fully managed service that offers a choice of high-performing Foundation Models (FMs) via a single API.

Prerequisites

Before you begin, ensure you have the following:

  • An active AWS Account with administrator or power-user IAM permissions.
  • AWS CLI installed and configured locally (aws configure) with valid access keys.
  • Basic familiarity with terminal/command-line operations.
  • A selected AWS Region that supports Amazon Bedrock (e.g., us-east-1 or us-west-2).

[!IMPORTANT] Amazon Bedrock models are not enabled by default. You must request access to the models in the AWS console before you can invoke them via the CLI.

Learning Objectives

Upon completing this lab, you will be able to:

  1. Identify AWS services and features used to develop GenAI applications.
  2. Request access to Foundation Models within Amazon Bedrock.
  3. Invoke a Generative AI model using the AWS CLI and the AWS Management Console.
  4. Describe the cost trade-offs between on-demand (token-based) pricing and provisioned throughput.

Architecture Overview

The architecture for this lab is simple: you, the user/developer, interact with the Amazon Bedrock API through both the AWS Command Line Interface (CLI) and the AWS Management Console.

Step-by-Step Instructions

Step 1: Request Model Access in Amazon Bedrock

Before you can use any model in Amazon Bedrock, you must explicitly request access to it. This ensures you agree to the End User License Agreements (EULA) of the model providers.

```bash
# While this is a console task, you can verify your current access via the CLI:
aws bedrock list-foundation-models \
  --by-provider Amazon \
  --query "modelSummaries[*].[modelId, modelLifecycle.status]"
```
Console alternative: Requesting Access
  1. Log into the AWS Management Console and navigate to Amazon Bedrock.
  2. In the left navigation pane, scroll down and select Model access.
  3. Click the Manage model access button in the top right.
  4. Check the box next to Titan Text G1 - Lite (or another available Amazon Titan model).
  5. Click Save changes at the bottom of the page.
  6. Wait a few moments until the Access status changes to Access granted.

📸 Screenshot: Look for the green "Access granted" badge next to the requested model.

Step 2: Prepare Your GenAI Prompt Payload

Amazon Bedrock requires a specific JSON payload depending on the model you invoke. For Amazon Titan Text, the payload structure requires an inputText field.

Create a file named prompt.json in your current directory:

```bash
echo '{ "inputText": "Explain the business benefits of Generative AI in two short sentences.", "textGenerationConfig": { "maxTokenCount": 100, "temperature": 0.5, "topP": 0.9 } }' > prompt.json
```
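If you find the single-line echo hard to read or edit, a heredoc writes the identical payload in a multi-line layout, and Python's standard-library `json.tool` gives a quick validity check. Both are optional conveniences, not Bedrock requirements:

```bash
# Equivalent to the echo above: write the Titan payload as readable multi-line JSON
cat > prompt.json <<'EOF'
{
  "inputText": "Explain the business benefits of Generative AI in two short sentences.",
  "textGenerationConfig": {
    "maxTokenCount": 100,
    "temperature": 0.5,
    "topP": 0.9
  }
}
EOF

# Quick sanity check: exits non-zero if the file is not valid JSON
python3 -m json.tool prompt.json > /dev/null && echo "prompt.json is valid JSON"
```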

[!TIP] Temperature controls the randomness of the output. A value of 0 is highly deterministic, while a value closer to 1 is more creative.

Step 3: Invoke the Foundation Model via AWS CLI

Now, use the aws bedrock-runtime command to send your prompt to the Amazon Titan model.

```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept application/json \
  --content-type application/json \
  response.json
```

Read the output generated by the model:

```bash
cat response.json
```
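The raw response can be hard to scan. Assuming the Titan text schema (a `results` array whose first element carries `outputText`, the shape the invoke call above writes to `response.json`), a Python one-liner extracts just the generated text. The stub file below is a hypothetical sample in that shape so you can try the parsing even before a real invocation; point the one-liner at `response.json` once you have one:

```bash
# Hypothetical sample in the Titan text-response shape (not real model output)
cat > sample-response.json <<'EOF'
{"inputTextTokenCount": 12, "results": [{"tokenCount": 24, "outputText": "GenAI automates content creation. It reduces operating costs.", "completionReason": "FINISH"}]}
EOF

# Pull out just the generated text; swap in response.json after Step 3
python3 -c 'import json; print(json.load(open("sample-response.json"))["results"][0]["outputText"])'
```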
Console alternative: Bedrock Text Playground
  1. In the Amazon Bedrock console, navigate to Playgrounds > Text in the left menu.
  2. Click Select model and choose Amazon > Titan Text G1 - Lite.
  3. In the chat box, type: "Explain the business benefits of Generative AI in two short sentences."
  4. Click Run.
  5. Observe the model's response directly in the UI.

Step 4: Analyze Cost Trade-offs (Token-Based vs Provisioned)

As an AI Practitioner, you must understand cost trade-offs. Amazon Bedrock offers two main billing models:

  1. On-Demand (Token-Based): You pay for every token sent to (input) and generated by (output) the model. Best for unpredictable, bursty, or low-volume workloads.
  2. Provisioned Throughput: You purchase a dedicated block of computing power for a specific model over a term. Best for consistent, high-volume workloads.
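To make the on-demand model concrete, the sketch below estimates a per-request cost from token counts and per-1,000-token rates. The rates are placeholder values for illustration only, not Amazon's actual prices; always check the current Bedrock pricing page:

```bash
# On-demand cost model: (input_tokens/1000)*input_rate + (output_tokens/1000)*output_rate
# The rates here are hypothetical placeholders, NOT real Bedrock prices.
awk 'BEGIN {
  in_tokens  = 50;     out_tokens = 100          # roughly the ~150-token exchange from Step 3
  in_rate    = 0.0003; out_rate   = 0.0004       # USD per 1,000 tokens (illustrative)
  cost = (in_tokens / 1000) * in_rate + (out_tokens / 1000) * out_rate
  printf "Estimated cost: $%.6f\n", cost
}'
```

Even at these illustrative rates, a 150-token exchange costs a fraction of a cent, which is why on-demand suits low-volume experimentation, while steady high-volume traffic can justify Provisioned Throughput.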

Checkpoints

Verify your progress after Step 3:

  • Check: Did response.json generate successfully?
  • Action: Run cat response.json.
  • Expected Result: You should see a JSON body containing an "outputText" field with the generated two-sentence response, along with token usage statistics.

Teardown

[!WARNING] Remember to run the teardown commands to avoid ongoing charges. For this specific lab, On-Demand model invocation only charges per token, so there is no ongoing hourly charge unless you created Provisioned Throughput or custom models.

To clean up the local files created during this lab:

```bash
rm prompt.json response.json
```

If you provisioned any custom models or throughput in the console, navigate to Provisioned Throughput in the Bedrock console and delete the resource.

Troubleshooting

| Common Error | Cause | Fix |
| --- | --- | --- |
| `AccessDeniedException` | You have not requested access to the Amazon Titan model in the Bedrock console. | Go to the Bedrock console, click Model access, and request access to the model. Wait for "Access granted". |
| `UnrecognizedClientException` | Your AWS CLI is not configured or your credentials have expired. | Run `aws configure` and input valid AWS access keys and region. |
| `ValidationException` | The JSON payload in `prompt.json` is malformed or uses the wrong schema for the specific model. | Ensure you are using the `inputText` field exactly as shown, which is required for Amazon Titan. |

Stretch Challenge

Challenge: Use a different Foundation Model, such as Anthropic Claude v2 or v3.

Constraints:

  • You must request access to the Anthropic model first.
  • You must modify your prompt.json to match the Anthropic Messages API schema (which requires "messages": [{"role": "user", "content": "..."}] instead of inputText).
Show solution
  1. Request Anthropic Claude access in the AWS Console.
  2. Create an anthropic-prompt.json:
```json
{
  "anthropic_version": "bedrock-2023-05-31",
  "max_tokens": 100,
  "messages": [
    {
      "role": "user",
      "content": "Explain the business benefits of Generative AI in two short sentences."
    }
  ]
}
```
  3. Invoke the model:

```bash
aws bedrock-runtime invoke-model \
  --model-id anthropic.claude-3-haiku-20240307-v1:0 \
  --body file://anthropic-prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept application/json \
  --content-type application/json \
  claude-response.json
```
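Note that Claude's response body differs from Titan's: in the Messages format, the generated text sits under `content[0].text`. The stub below is a hypothetical sample in that shape so you can see where to look; apply the same extraction to `claude-response.json` after a real call:

```bash
# Hypothetical sample in the Anthropic Messages response shape (not real model output)
cat > sample-claude-response.json <<'EOF'
{"id": "msg_sample", "role": "assistant", "content": [{"type": "text", "text": "GenAI speeds up drafting. It lowers support costs."}], "stop_reason": "end_turn"}
EOF

# Claude puts the text under content[0].text (Titan uses results[0].outputText)
python3 -c 'import json; print(json.load(open("sample-claude-response.json"))["content"][0]["text"])'
```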

Cost Estimate

  • Amazon Bedrock On-Demand: Charges are incurred per 1,000 input/output tokens. A standard text prompt and response of ~150 tokens costs less than $0.01 total.
  • AWS Free Tier: Some newer accounts may have introductory credits, but Bedrock token usage is generally billed directly to the account. Ensure you do not leave automated scripts running.

Concept Review

| AWS Service | Use Case / Feature |
| --- | --- |
| Amazon Bedrock | Fully managed service offering a choice of high-performing Foundation Models from leading AI companies (AI21 Labs, Anthropic, Cohere, Meta, Stability AI, Amazon) via a single API. Lowers the barrier to entry. |
| Amazon SageMaker JumpStart | ML hub offering pre-trained models. Better for organizations needing deep customization, fine-tuning, and direct access to model weights. |
| Amazon Q | Generative AI-powered assistant tailored for businesses and developers to write code, answer questions, and summarize data based on enterprise repositories. |
| Amazon Bedrock PartyRock | A fun, hands-on generative AI app-building playground designed to help users learn prompt engineering intuitively without writing code. |
