# Hands-On Lab: Building and Automating with Amazon Bedrock and Amazon Q
Welcome to this guided lab. In this session, you will explore the powerful synergy between Amazon Bedrock (a managed service for Foundation Models) and Amazon Q Developer (a generative AI-powered assistant). You will use Amazon Q to help generate commands and solve coding challenges while directly interacting with Bedrock to test model inference parameters like Temperature and Top P.
## Prerequisites
Before starting this lab, ensure you have the following:
- AWS Account: Active account with Administrator or PowerUser access.
- CLI Tools: AWS CLI v2 installed and configured (`aws configure`).
- IAM Permissions: Policies allowing `bedrock:InvokeModel`, `bedrock:ListFoundationModels`, and access to Amazon Q Developer.
- Prior Knowledge: Basic familiarity with terminal commands and JSON structures.
> [!IMPORTANT]
> Amazon Bedrock models are not enabled by default. You must request access to the models in your AWS region before invoking them.
## Learning Objectives
By completing this lab, you will be able to:
- Navigate and use Amazon Q Developer to generate accurate AWS CLI commands.
- Enable and query Foundation Models (FMs) using Amazon Bedrock.
- Understand and manipulate inference parameters (Temperature, Top P) to control model determinism.
- Analyze the JSON request and response payloads of Bedrock's `InvokeModel` API.
## Architecture Overview
The following diagram illustrates the flow of our lab. We will use Amazon Q as our intelligent assistant to formulate the correct API calls, which we will then send to Amazon Bedrock to interact with the Titan Foundation Model.
## Step-by-Step Instructions
### Step 1: Request Model Access
Before using Amazon Bedrock, you must request access to the specific Foundation Models you intend to use. For this lab, we will use Amazon Titan Text G1 - Lite.
```bash
aws bedrock list-foundation-models \
  --by-output-modality TEXT \
  --query "modelSummaries[?modelId=='amazon.titan-text-lite-v1']"
```

<details>
<summary>Console alternative (required for first-time setup)</summary>

- Open the AWS Management Console and navigate to Amazon Bedrock.
- In the left navigation pane, scroll down to Model access.
- Click Enable specific models (or Manage model access).
- Check the box next to Titan Text G1 - Lite under the Amazon provider.
- Click Request model access and wait for the status to change to "Access granted".

</details>
📸 Screenshot: Look for the green "Access granted" badge next to the model name.
> [!TIP]
> Some models (like Anthropic Claude) require submitting an additional use-case justification form. Amazon Titan models are usually granted instantly.
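If you prefer to script this check, the CLI's `--query` filter can be sketched in Python. The block below runs against hardcoded, illustrative model summaries rather than a live API call; with credentials configured, `boto3.client("bedrock").list_foundation_models(byOutputModality="TEXT")` would return the real list.

```python
# Illustrative sketch: replicating the CLI's JMESPath-style filter in Python.
# The summaries below are hardcoded examples, not live API output.
model_summaries = [
    {"modelId": "amazon.titan-text-lite-v1", "providerName": "Amazon"},
    {"modelId": "amazon.titan-text-express-v1", "providerName": "Amazon"},
    {"modelId": "anthropic.claude-v2", "providerName": "Anthropic"},
]

# Equivalent of --query "modelSummaries[?modelId=='amazon.titan-text-lite-v1']"
matches = [m for m in model_summaries
           if m["modelId"] == "amazon.titan-text-lite-v1"]
print(matches)
```

If the equivalent live query returns an empty list, model access has not yet been granted in your region.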
### Step 2: Use Amazon Q Developer to Generate Commands
Instead of memorizing complex CLI syntax, let's ask Amazon Q to help us figure out how to invoke the Bedrock model.
If you have the Amazon Q CLI integration installed in your terminal, or if you are using the Amazon Q chat pane in the AWS Console, type the following prompt:
"Write an AWS CLI command to invoke the amazon.titan-text-lite-v1 model in Amazon Bedrock to explain what Generative AI is. Output the response to a file called response.txt."
Amazon Q should provide an explanation and a command similar to the one we will use in the next step.
### Step 3: Invoke the Foundation Model
Now, let's execute the command to query the Foundation Model. We will pass a JSON payload containing our prompt and configuration.
```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --content-type application/json \
  --accept application/json \
  --cli-binary-format raw-in-base64-out \
  --body '{"inputText": "Explain generative AI in two sentences.", "textGenerationConfig": {"temperature": 0.1, "topP": 0.9}}' \
  response.json
```

The `--cli-binary-format raw-in-base64-out` flag is required with AWS CLI v2 so the JSON body is sent as raw text instead of being interpreted as base64.

<details>
<summary>Console alternative</summary>

- In the Bedrock console, go to Playgrounds > Text.
- Select Amazon as the category and Titan Text G1 - Lite as the model.
- Type "Explain generative AI in two sentences." in the prompt area.
- Click Run to see the output.

</details>
### Step 4: Inspect the Output and Adjust Parameters
The output is saved to `response.json`. Let's inspect it.

```bash
cat response.json
```

> [!NOTE]
> The inference parameters we used were `temperature: 0.1` and `topP: 0.9`. A low temperature (closer to 0) makes the model more deterministic and focused.

Let's experiment by increasing the temperature to make the model more creative (and potentially less predictable).
```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --content-type application/json \
  --accept application/json \
  --cli-binary-format raw-in-base64-out \
  --body '{"inputText": "Write a creative poem about cloud computing.", "textGenerationConfig": {"temperature": 0.9, "topP": 1.0}}' \
  creative_response.json
```

Review the new output using `cat creative_response.json`.
## Checkpoints
Verify that you have successfully completed the core lab steps:
- Check 1: Run
aws bedrock list-foundation-models --region <YOUR_REGION> | grep titan-text-lite-v1. Does it return the model ID? - Check 2: Inspect the
response.jsonfile. It should contain a structured JSON response with aresultsarray and anoutputTextfield containing the generated text.
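The extraction in Check 2 can be sketched in Python. The response body below is a hardcoded example shaped like the `results`/`outputText` structure described above (the `inputTextTokenCount`, `tokenCount`, and `completionReason` fields and the sample text are illustrative); in a live run you would `json.load` the actual `response.json` file instead.

```python
import json

# Illustrative Titan-style response body (hardcoded, not real model output).
sample = json.dumps({
    "inputTextTokenCount": 8,
    "results": [
        {
            "tokenCount": 42,
            "outputText": "Generative AI creates new content from learned patterns.",
            "completionReason": "FINISH",
        }
    ],
})

body = json.loads(sample)
# The generated text lives at results[0].outputText.
text = body["results"][0]["outputText"]
print(text)
```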
## Visualizing the Parameter Effect
Below is a conceptual diagram illustrating how parameters like Temperature and Top-P influence the model's token selection process during inference.
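To make the idea concrete, here is a minimal, self-contained sketch of temperature scaling and Top-P (nucleus) filtering over a toy list of next-token scores. This is a conceptual illustration of the sampling technique, not Bedrock's internal implementation; the logit values are invented for demonstration.

```python
import math

def softmax(logits, temperature):
    # Divide logits by temperature before normalizing: a low temperature
    # sharpens the distribution (more deterministic), a high one flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_p_filter(probs, top_p):
    # Keep the smallest set of tokens (by descending probability) whose
    # cumulative probability reaches top_p; the rest are excluded.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append(i)
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

logits = [2.0, 1.0, 0.5, 0.1]            # toy next-token scores
print(softmax(logits, temperature=0.1))  # nearly all mass on token 0
print(softmax(logits, temperature=0.9))  # mass spread across tokens
print(top_p_filter(softmax(logits, 0.9), top_p=0.9))
```

At `temperature=0.1` the first token dominates almost completely, mirroring the deterministic output of our first invocation; at `temperature=0.9` the alternatives keep meaningful probability, and `top_p=0.9` then trims the long tail of unlikely tokens.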
## Teardown
Because this lab primarily uses serverless, on-demand inference, there are no long-running EC2 instances or provisioned endpoints to delete. However, to keep your environment clean, remove the local files generated.
```bash
# Remove generated JSON files
rm response.json creative_response.json
```

> [!WARNING]
> If you configured Provisioned Throughput for Amazon Bedrock (not covered in this lab but possible in production), you must delete it via the console or CLI, as it incurs high hourly charges.
## Troubleshooting
| Error Message | Cause | Fix |
|---|---|---|
| `AccessDeniedException: You don't have access to the model...` | Model access has not been granted in your region. | Navigate to the Bedrock console > Model access, and request access to Titan Text G1 - Lite. |
| `UnrecognizedClientException: The security token included in the request is invalid.` | AWS CLI credentials are not configured or expired. | Run `aws configure` or refresh your temporary session tokens. |
| `ValidationException: The provided model identifier is invalid.` | Typo in the `--model-id` parameter. | Ensure you are using `amazon.titan-text-lite-v1` exactly as written. |
## Stretch Challenge
Now that you understand the CLI interactions, try automating this process with code!
Goal: Write a Python script using the `boto3` library that accepts a user string, sends it to the same Titan model, and prints only the `outputText` string to the console (omitting the rest of the JSON wrapper).
Constraint: You must use Amazon Q Developer (either in your IDE or the console) to help you write the code.
<details>
<summary>Click here to reveal a solution</summary>

```python
import boto3
import json

def invoke_titan(prompt):
    client = boto3.client('bedrock-runtime', region_name='us-east-1')
    payload = {
        "inputText": prompt,
        "textGenerationConfig": {
            "temperature": 0.7,
            "topP": 0.9
        }
    }
    response = client.invoke_model(
        modelId='amazon.titan-text-lite-v1',
        contentType='application/json',
        accept='application/json',
        body=json.dumps(payload)
    )
    response_body = json.loads(response.get('body').read())
    # Extract just the text from the Titan response structure
    print(response_body['results'][0]['outputText'])

invoke_titan("What are the benefits of AWS Bedrock?")
```

</details>

## Cost Estimate
This lab is extremely cost-effective and designed to be accessible:
- Amazon Q Developer: The Free Tier allows up to 50 chat interactions per month. This lab uses 1-2 interactions.
- Amazon Bedrock: On-demand inference is charged per 1,000 input/output tokens. The Titan Text Lite model costs fractions of a cent ($0.0003 per 1K input tokens). Completing this lab will cost less than $0.01.
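The per-token arithmetic behind that estimate can be checked directly. The 50-token figure below is a rough assumption about this lab's two short prompts, and only the quoted input rate ($0.0003 per 1K input tokens) is used; output tokens, priced separately, are omitted for simplicity.

```python
# Back-of-the-envelope cost check using the input rate quoted above.
input_tokens = 50        # assumed rough size of this lab's two prompts
rate_per_1k = 0.0003     # USD per 1,000 input tokens (Titan Text Lite)

cost = (input_tokens / 1000) * rate_per_1k
print(f"${cost:.7f}")    # a tiny fraction of a cent
```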
## Concept Review
To solidify your understanding, here is a breakdown of the AWS generative AI tools mentioned in this lab and the study material:
| Service / Tool | Primary Use Case | Target Audience | Key Feature |
|---|---|---|---|
| Amazon Bedrock | Building and scaling generative AI applications. | Developers & Data Engineers | Single API access to multiple Foundation Models (Claude, Titan, Llama). |
| Amazon Q Developer | Software development, debugging, and cloud infrastructure management. | Developers & IT Professionals | IDE integrations, CLI assistance, and legacy code upgrading. |
| Amazon Q Business | Enterprise assistant for internal knowledge search and workflow automation. | All Employees | Connects to 40+ enterprise data sources with built-in access controls. |