
Hands-On Lab: Exploring GenAI Capabilities and Limitations via Amazon Bedrock

The capabilities and limitations of GenAI for solving business problems


Welcome to this guided hands-on lab! In this session, you will explore the real-world capabilities and limitations of Generative AI (GenAI) using Amazon Bedrock. You will leverage Foundation Models (FMs) to solve business problems, witness the risks of model hallucinations firsthand, and apply prompt engineering to constrain output and reduce risk.

Prerequisites

Before you begin, ensure you have the following:

  • An AWS Account with administrative or PowerUser IAM permissions.
  • AWS CLI installed and configured (aws configure) with your <YOUR_ACCESS_KEY> and <YOUR_SECRET_KEY>.
  • A default region selected that supports Amazon Bedrock (e.g., us-east-1 or us-west-2).
  • Basic understanding of LLMs, tokens, and temperature.

Learning Objectives

By completing this lab, you will be able to:

  1. Invoke a Foundation Model using the AWS CLI and AWS Management Console.
  2. Demonstrate the capabilities of GenAI for business automation (summarization and content creation).
  3. Identify key limitations of GenAI, specifically hallucinations and nondeterminism.
  4. Apply prompt engineering techniques to constrain output and manage business risks.

Architecture Overview

The following diagram illustrates the flow of our application. You will interact with Amazon Bedrock via the CLI/Console, sending carefully crafted prompts to an Amazon Titan Foundation Model, and evaluating the response.


Step-by-Step Instructions

Step 1: Verify Foundation Model Access

Before invoking a model, you must ensure you have access to the Amazon Titan Text Express model within Amazon Bedrock.

bash
aws bedrock list-foundation-models \
  --by-provider Amazon \
  --query "modelSummaries[?modelId=='amazon.titan-text-express-v1'].modelId" \
  --output text

[!TIP] If this returns amazon.titan-text-express-v1, the model is available in your region. Note that in a new AWS account, you may need to explicitly request model access via the Bedrock Console.

Console alternative
  1. Log into the AWS Management Console and navigate to Amazon Bedrock.
  2. In the left navigation pane, scroll to Model access.
  3. Click Manage model access.
  4. Check the box next to Titan Text G1 - Express and click Save changes.

📸 Screenshot: Bedrock Model Access screen showing "Access granted" next to Amazon Titan.

Step 2: Experience GenAI Capabilities (Content Creation)

GenAI excels at adaptability and responsiveness for business tasks. Let's use the model to generate a business proposal.

Create a file named capabilities-prompt.json with the following content:

json
{
  "inputText": "Write a short, professional email to a client proposing a new AI automation service that will reduce their operational costs. Keep it under 100 words.",
  "textGenerationConfig": {
    "maxTokenCount": 200,
    "temperature": 0.7
  }
}
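If you prefer to generate the payload file programmatically (handy once prompts get longer), here is a minimal Python sketch. The field names match the JSON above; the `build_titan_payload` helper is illustrative, not part of any AWS SDK:

```python
import json

# Hypothetical helper that builds a Titan Text payload using the same
# field names as the JSON file above (inputText, textGenerationConfig).
def build_titan_payload(prompt, max_tokens=200, temperature=0.7):
    return {
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "temperature": temperature,
        },
    }

payload = build_titan_payload(
    "Write a short, professional email to a client proposing a new AI "
    "automation service that will reduce their operational costs. "
    "Keep it under 100 words."
)

# Write the same file you would otherwise create by hand.
with open("capabilities-prompt.json", "w") as f:
    json.dump(payload, f, indent=2)
```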

Now, invoke the model via the CLI:

bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-express-v1 \
  --body file://capabilities-prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  capabilities-response.json

cat capabilities-response.json

Console alternative
  1. In the Amazon Bedrock console, click on Text under the Playgrounds menu.
  2. Select Amazon as the category and Titan Text G1 - Express as the model.
  3. Paste the prompt: "Write a short, professional email to a client proposing a new AI automation service that will reduce their operational costs. Keep it under 100 words."
  4. Set the Temperature slider to 0.7 and click Run.

📸 Screenshot: Bedrock Text Playground showing the generated business email.
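The generated text is nested inside the response JSON. Here is a small Python sketch that extracts it; the `results[0].outputText` shape follows Titan Text's response format, and the sample body below is mock data standing in for `capabilities-response.json`:

```python
import json

# Mock response in the shape Titan Text models return:
# {"inputTextTokenCount": ..., "results": [{"tokenCount": ..., "outputText": ..., "completionReason": ...}]}
sample = {
    "inputTextTokenCount": 34,
    "results": [
        {
            "tokenCount": 78,
            "outputText": "Dear Client, ...",
            "completionReason": "FINISH",
        }
    ],
}

def extract_output(response_body):
    """Pull the generated text out of a Titan Text response body."""
    return response_body["results"][0]["outputText"]

print(extract_output(sample))
```

In a real run, replace the mock dict with `json.load(open("capabilities-response.json"))`.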

Step 3: Trigger a Limitation (Hallucination)

One of the most critical disadvantages of GenAI is the risk of hallucinations—where the model produces plausible but factually incorrect information. Let's deliberately trigger one by asking about a nonexistent product.

Create a file named hallucination-prompt.json:

json
{
  "inputText": "Describe the key features of the AWS Graviton 9 processor released in 2028, and explain how it revolutionized space travel.",
  "textGenerationConfig": {
    "maxTokenCount": 300,
    "temperature": 0.9
  }
}

Run the invocation:

bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-express-v1 \
  --body file://hallucination-prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  hallucination-response.json

cat hallucination-response.json

[!WARNING] You will likely see the model confidently invent technical specifications for a product that does not exist. This kind of confident fabrication from a "black-box" model highlights why human-in-the-loop review and governance are essential.

Step 4: Mitigate Risks using Prompt Engineering

To manage the risk of inaccurate outputs, you can apply prompt engineering techniques to establish guardrails. By providing explicit context and "out-of-bounds" instructions, you constrain the model's tendency to produce creative but ungrounded completions.

Create a file named mitigation-prompt.json:

json
{
  "inputText": "Context: You are a factual AWS technical assistant. Your answers must be based only on historical facts up to 2024.\nInstruction: Describe the key features of the AWS Graviton 9 processor released in 2028. If the product does not exist or the date is in the future, you must reply exactly with: 'I do not have information on this product as it is beyond my knowledge cutoff or does not exist.'",
  "textGenerationConfig": {
    "maxTokenCount": 100,
    "temperature": 0.0
  }
}

Run the mitigated invocation:

bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-express-v1 \
  --body file://mitigation-prompt.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  mitigation-response.json

cat mitigation-response.json

Notice how setting the temperature to 0.0 (making the output near-deterministic) and providing explicit instructions steers the model toward the safe, rule-abiding response.

Checkpoints

Verify your progress after Step 4 by running the following validations:

Checkpoint 1: Validate capabilities response

bash
grep -q "results" capabilities-response.json && echo "✅ Content successfully generated"

Checkpoint 2: Validate hallucination mitigation

bash
grep -q "I do not have information" mitigation-response.json && echo "✅ Hallucination successfully mitigated"

Concept Review: Temperature vs. Hallucination Risk

The relationship between a model's temperature inference parameter and its likelihood to hallucinate is a core GenAI concept. Higher temperatures flatten the next-token probability distribution, increasing the chance of selecting lower-ranked tokens; this boosts creativity but also raises the risk of hallucinations.
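To make the mechanism concrete, here is a self-contained sketch of temperature-scaled softmax, the standard way temperature is applied to next-token logits. The logit values are toy numbers, not output from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Temperature divides the logits before softmax. Treat t ~ 0 as greedy
    # decoding (all probability mass on the top logit) to avoid dividing by zero.
    if temperature < 1e-6:
        probs = [0.0] * len(logits)
        probs[logits.index(max(logits))] = 1.0
        return probs
    scaled = [l / temperature for l in logits]
    m = max(scaled)                             # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy next-token logits: one strong candidate, two weaker ones.
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.2)    # sharp: top token dominates
high = softmax_with_temperature(logits, 1.0)   # flat: weaker tokens stay viable
```

At temperature 0.2 the top token carries essentially all the probability, so output is nearly deterministic; at 1.0 the lower-ranked tokens retain meaningful probability, which is exactly the variability that enables both creativity and hallucination.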

| Inference Parameter | Business Use Case | Tradeoff |
| --- | --- | --- |
| Low Temperature (0.0 - 0.3) | Financial summaries, code generation, RAG | Highly deterministic, less adaptable |
| High Temperature (0.7 - 1.0) | Marketing copy, brainstorming, AI assistants | Highly creative, prone to hallucinations |

Troubleshooting

| Error / Issue | Probable Cause | Fix |
| --- | --- | --- |
| AccessDeniedException | Model access not enabled in the Bedrock console. | Navigate to Bedrock Console > Model access and enable Titan Text G1 - Express. |
| ValidationException | Incorrect JSON syntax in the prompt file. | Check for missing commas or unescaped quotes in your .json file. |
| UnrecognizedClientException | AWS CLI is not authenticated. | Run aws configure and provide valid access keys. |

Clean-Up / Teardown

[!WARNING] Amazon Bedrock charges on a pay-per-token basis (token-based pricing). Because we used managed API calls without provisioning dedicated throughput, there are no ongoing hourly charges. However, it is best practice to clean up local artifacts.

Remove the local JSON files generated during this lab:

bash
rm capabilities-prompt.json capabilities-response.json
rm hallucination-prompt.json hallucination-response.json
rm mitigation-prompt.json mitigation-response.json

Cost Estimate

For this lab, Amazon Bedrock uses on-demand token-based pricing.

  • Amazon Titan Text Express: ~$0.0008 per 1,000 input tokens, ~$0.0016 per 1,000 output tokens.
  • Estimated Lab Cost: < $0.01 (Well within the free tier / negligible cost for a few API calls).
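You can sanity-check that estimate with a short calculation using the per-1,000-token rates listed above (treat the rates and the token counts as illustrative; always confirm against current Bedrock pricing):

```python
# Per-token rates derived from the per-1,000-token prices quoted above.
INPUT_RATE = 0.0008 / 1000    # USD per input token
OUTPUT_RATE = 0.0016 / 1000   # USD per output token

def estimate_cost(input_tokens, output_tokens):
    """On-demand token-based cost for one invocation."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Three lab invocations, generously assuming ~100 input and ~300 output tokens each.
total = sum(estimate_cost(100, 300) for _ in range(3))
print(f"Estimated lab cost: ${total:.4f}")
```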

Stretch Challenge

Instead of adjusting prompt context to fix a hallucination, try to implement a basic Retrieval-Augmented Generation (RAG) concept manually.

Challenge: Write a script that first queries a local text file containing a mock "company policy", extracts that text, and injects it into the inputText of your Bedrock JSON payload dynamically. Observe how injecting external, grounded data changes the model's response compared to relying solely on its internal model weights.
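As a starting point, here is a minimal Python sketch of the injection step. The `policy_text`, `question`, and `rag-prompt.json` names are illustrative; in the challenge, the policy would be read from your mock policy file on disk:

```python
import json

# Stand-in for the "retrieval" step: in the challenge, read this from a
# local company-policy text file instead of hard-coding it.
policy_text = (
    "Company policy: All client proposals must include a 30-day pilot period "
    "and must not promise specific cost-reduction percentages."
)

question = "Draft a client proposal summary that follows company policy."

# The "augmentation" step of RAG: inject the retrieved text into inputText
# so the model answers from grounded context rather than its weights alone.
payload = {
    "inputText": (
        "Use only the following context to answer.\n"
        f"Context: {policy_text}\n"
        f"Question: {question}"
    ),
    "textGenerationConfig": {"maxTokenCount": 200, "temperature": 0.2},
}

# Write the payload file; invoke it with the same aws bedrock-runtime
# invoke-model command used earlier in the lab.
with open("rag-prompt.json", "w") as f:
    json.dump(payload, f, indent=2)
```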
