Hands-On Lab: Capabilities and Limitations of GenAI for Business Solutions
The capabilities and limitations of GenAI for solving business problems
Prerequisites
Before starting this lab, ensure you have the following requirements met:
- AWS Account: An active AWS account with billing enabled.
- IAM Permissions: Administrative or PowerUser access to interact with Amazon Bedrock.
- CLI Tools: AWS CLI (`aws`) installed and configured with your credentials.
- Prior Knowledge: Basic understanding of Foundation Models (FMs), prompts, and JSON formatting.
[!IMPORTANT] You must request access to the Amazon Titan Text G1 - Lite model in the Amazon Bedrock console before beginning this lab. Model access is not granted by default in new AWS accounts.
Learning Objectives
By completing this lab, you will be able to:
- Determine business value: Use GenAI to solve a real-world business problem (content summarization and insight generation).
- Identify advantages: Experience the adaptability and responsiveness of Foundation Models.
- Recognize disadvantages: Observe firsthand limitations such as nondeterminism and hallucinations.
- Manage risks: Adjust inference parameters (like temperature) and prompt structures to mitigate inaccuracies.
Architecture Overview
The following diagram illustrates the workflow you will build. You will act as a business user evaluating an AI solution, sending structured requests through the AWS CLI/Console to Amazon Bedrock, and analyzing the outputs for business utility and potential risks.
Step-by-Step Instructions
Step 1: Verify Model Access
Before invoking models, we need to ensure that you have access to the Amazon Titan Text model within Bedrock. This is an essential first step for any GenAI infrastructure setup on AWS.
```shell
aws bedrock list-foundation-models \
  --by-provider amazon \
  --query "modelSummaries[?modelId=='amazon.titan-text-lite-v1'].modelId" \
  --output table
```

**Console alternative**
- Navigate to the Amazon Bedrock console.
- In the left navigation pane, scroll down to Model access.
- Verify that Amazon Titan Text G1 - Lite has a status of "Access granted". If not, click Manage model access and request it.
📸 Screenshot: Model Access Granted in AWS Console
Step 2: Generating Business Value (Summarization)
One of the primary advantages of GenAI is operational efficiency through automation and summarization. We will use the model to summarize a verbose customer complaint into a concise business insight.
Create a file named prompt-summary.json with the following content:
```json
{
  "inputText": "Summarize the following customer feedback for a business report: 'I waited on hold for 45 minutes yesterday. When someone finally answered, they were polite but couldn't fix my billing issue because the system was down. I am very frustrated and considering canceling my subscription after 5 years of loyalty.'",
  "textGenerationConfig": {
    "maxTokenCount": 256,
    "temperature": 0.0
  }
}
```

Now, invoke the model using the CLI:

```shell
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt-summary.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  summary-output.json

cat summary-output.json
```

**Console alternative**
- In the Bedrock console, navigate to Playgrounds > Text.
- Select Amazon as the provider and Titan Text G1 - Lite as the model.
- Paste the input text into the prompt window.
- Set Temperature to `0.0`.
- Click Run.

[!TIP] Notice how we set `temperature` to `0.0`. This makes the output as deterministic as possible, which is best practice when repeatable results are required for business data processing. Keep in mind that low temperature reduces randomness; it does not by itself guarantee factual accuracy.
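The response file returned by `invoke-model` is JSON, with the generated text nested under `results[0].outputText` for Titan Text models. A small helper like the following (a sketch assuming that response shape) extracts the summary and the token counts you are billed for:

```python
import json

def extract_titan_output(path):
    """Parse an invoke-model response file from a Titan Text model.

    Titan Text responses carry the generated text under
    results[0].outputText, alongside token-count metadata.
    """
    with open(path) as f:
        body = json.load(f)
    result = body["results"][0]
    return {
        "text": result["outputText"].strip(),
        "input_tokens": body.get("inputTextTokenCount"),
        "output_tokens": result.get("tokenCount"),
    }
```

For example, `extract_titan_output("summary-output.json")["text"]` gives you just the summary, without the surrounding metadata that `cat` shows.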
Step 3: Observing Limitations (Hallucination)
GenAI models can produce outputs that are plausible but entirely factually incorrect (hallucinations). Let's intentionally trigger a hallucination by asking the model about a fictitious AWS service.
Create a file named prompt-hallucination.json:
```json
{
  "inputText": "Explain the key features and pricing of the AWS CloudMindReader service released in 2028.",
  "textGenerationConfig": {
    "maxTokenCount": 300,
    "temperature": 0.9
  }
}
```

Invoke the model:

```shell
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt-hallucination.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  hallucination-output.json

cat hallucination-output.json
```

**Console alternative**
- In the Text Playground, paste the new prompt asking about "AWS CloudMindReader".
- Increase the Temperature slider to `0.9`.
- Click Run and observe the highly detailed, yet entirely fabricated response.
Step 4: Mitigating Risks via Prompt Engineering
To manage the risk of hallucinations, we must use prompt engineering techniques. We will apply an instruction guardrail telling the model how to handle unknown information.
Create a file named prompt-mitigation.json:
```json
{
  "inputText": "Context: You are a strict technical assistant. Instruction: Explain the key features of the AWS CloudMindReader service. If the service does not exist or you do not have information about it, you must reply exactly with 'I do not have information on this service.' Do not make up facts.",
  "textGenerationConfig": {
    "maxTokenCount": 300,
    "temperature": 0.1
  }
}
```

Run the invocation:

```shell
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt-mitigation.json \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  mitigation-output.json

cat mitigation-output.json
```

[!NOTE] By explicitly defining the boundaries of the model's knowledge within the prompt (an instruction-level guardrail; the closing "Do not make up facts" is a negative instruction), we significantly reduce the risk of inaccurate business outputs.
Checkpoints
Verify your progress by running the following checks:
- Check model access: Ensure `aws bedrock list-foundation-models` returned the `amazon.titan-text-lite-v1` ID without an `AccessDenied` error.
- Check output files: Run `ls -l *-output.json`. You should see three distinct output files generated from your Bedrock inferences.
- Content validation: Open `mitigation-output.json`. Ensure the model obeyed the prompt constraint and refused to hallucinate the fake service.
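The content-validation checkpoint can be automated with a short script. This is a minimal sketch assuming the Titan Text response shape (the generated text lives under `results[0].outputText`); the refusal string matches the one we embedded in the mitigation prompt:

```python
import json

REFUSAL = "I do not have information on this service."

def obeyed_guardrail(path):
    """Return True if the model replied with the exact refusal we asked for
    (allowing surrounding whitespace), i.e. it did not hallucinate."""
    with open(path) as f:
        body = json.load(f)
    text = body["results"][0]["outputText"].strip()
    return text == REFUSAL
```

Running `obeyed_guardrail("mitigation-output.json")` should return `True` if the guardrail held; a `False` result means the model fabricated details anyway, which can happen even with a strong instruction.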
Concept Review: Tradeoffs in GenAI
When designing GenAI solutions, businesses must balance creativity against the risk of hallucination. This is often controlled by the Temperature inference parameter.
| Capability | Business Use Case | Required Temperature | Limitation / Risk |
|---|---|---|---|
| Summarization | Processing customer feedback | Low (0.0 - 0.2) | Missing nuanced context (Chunking issues) |
| Content Creation | Drafting marketing emails | High (0.7 - 0.9) | Hallucinations, Nondeterminism |
| Classification | Routing support tickets | Low (0.0) | Interpretability (Black-box reasoning) |
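The temperature column above can be made concrete with a toy sampler. Decoders (in simplified form) divide token logits by the temperature before applying softmax: as temperature approaches 0 the choice collapses to the single most likely token, while higher temperatures flatten the distribution and admit variety. This is an illustrative sketch, not Bedrock's actual implementation:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Temperature-scaled softmax sampling over a toy vocabulary.

    temperature == 0 is treated as greedy (argmax) decoding; higher
    temperatures flatten the distribution, trading determinism for variety.
    """
    if temperature <= 0:  # greedy decoding: always pick the top logit
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]
```

Sampling many times at temperature `0.05` almost always returns the top token, while temperature `5.0` spreads picks across the whole vocabulary, which is exactly the summarization-vs-creativity tradeoff in the table.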
Troubleshooting
| Error Message | Cause | Fix |
|---|---|---|
| `AccessDeniedException` | Your AWS account has not been granted access to the specified Bedrock model. | Go to the Bedrock Console > Model access and request access to the model. |
| `ValidationException` | The JSON payload in your file is improperly formatted or contains invalid parameter names. | Check your `.json` files for missing commas or unescaped quotes. Ensure `textGenerationConfig` is spelled correctly. |
| `ThrottlingException` | You are making too many requests to the Bedrock API simultaneously. | Wait a few seconds and try the command again. Implementing exponential backoff in production is recommended. |
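The exponential-backoff recommendation in the last row can be sketched as below. The `ThrottlingException` matching here is a stand-in: real boto3 code would catch `botocore.exceptions.ClientError` and inspect its error code, but the retry-with-jitter structure is the same:

```python
import random
import time

def invoke_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Retry a throttled zero-argument API call with exponential backoff.

    Retries only when the raised exception's class is named
    'ThrottlingException' (illustrative); all other errors propagate.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception as err:
            out_of_attempts = attempt == max_attempts - 1
            if type(err).__name__ != "ThrottlingException" or out_of_attempts:
                raise
            # Full jitter: wait a random time up to base_delay * 2^attempt.
            sleep(random.uniform(0, base_delay * 2 ** attempt))
```

The `sleep` parameter is injectable so the behavior can be tested without real delays; in production you would leave it as `time.sleep`.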
Clean-Up / Teardown
[!WARNING] Amazon Bedrock charges based on the volume of tokens processed (input and output). While this lab processes very few tokens and costs fractions of a cent, it is good practice to clean up your workspace.
Because Amazon Bedrock models are accessed via an on-demand API, there are no persistent computing resources (like EC2 instances) to terminate. To clean up your local environment, simply delete the generated files:
```shell
rm prompt-*.json
rm *-output.json
```

**Console alternative**
If you used the Bedrock Playground in the AWS Console, simply close the browser tab. The playground sessions are stateless and do not incur ongoing charges when not actively generating text.
Cost Estimate
- Amazon Bedrock (On-Demand): The Titan Text G1 - Lite model costs approximately $0.0003 per 1,000 input tokens and $0.0004 per 1,000 output tokens.
- Total Lab Cost: < $0.01 (Well within the free-tier or negligible for standard billing).
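The "< $0.01" claim follows from simple arithmetic on the per-token rates quoted above. The token counts below are rough assumptions for this lab's three invocations, not measured values:

```python
# Titan Text G1 - Lite on-demand rates quoted above (USD per 1,000 tokens).
INPUT_RATE = 0.0003
OUTPUT_RATE = 0.0004

def inference_cost(input_tokens, output_tokens):
    """Estimate the on-demand cost of one Bedrock inference in USD."""
    return (input_tokens / 1000) * INPUT_RATE + (output_tokens / 1000) * OUTPUT_RATE

# Generous upper bound for this lab: three invocations at roughly
# 200 input tokens and 300 output tokens each (assumed, not measured).
total = 3 * inference_cost(200, 300)
```

Even with these generous estimates, `total` comes out well under one cent, consistent with the figure above.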