Hands-On Lab: Getting Started with Amazon Bedrock and Amazon Q Developer
Welcome to this guided hands-on lab! In this session, you will explore AWS's primary generative AI platforms. You will learn how to enable and interact with Foundation Models (FMs) using Amazon Bedrock, understand the impact of inference parameters like Temperature, and use Amazon Q Developer as your intelligent coding and AWS assistant.
Prerequisites
Before you begin, ensure you have the following:
- An active AWS Account with Administrator or PowerUser access.
- The AWS CLI installed and configured on your local machine (`aws configure`).
- IAM permissions allowing `AmazonBedrockFullAccess`.
- Basic familiarity with JSON and terminal commands.
[!WARNING] Some Amazon Bedrock Foundation Models incur charges based on the number of input and output tokens processed. Remember to follow the teardown instructions to remove any files, though simply having model access enabled does not accrue hourly charges.
Learning Objectives
By completing this lab, you will be able to:
- Request and configure access to Foundation Models in Amazon Bedrock.
- Invoke a generative AI model directly via the AWS CLI to generate text.
- Adjust inference parameters (Temperature and Top P) to control model output.
- Leverage Amazon Q Developer to ask AWS-specific architectural questions.
Architecture Overview
The following diagram illustrates how you will interact with both Amazon Bedrock and Amazon Q during this lab.
Step-by-Step Instructions
Step 1: Request Model Access in Amazon Bedrock
Before you can use a Foundation Model in Amazon Bedrock, you must explicitly request access to it. This ensures you review and accept the End User License Agreement (EULA) for the specific model provider.
```shell
# Note: Due to EULA acceptance requirements, model access
# must initially be requested via the AWS Console.
aws bedrock list-foundation-models --by-provider Amazon --query "modelSummaries[*].modelId"
```

▶ Console alternative (REQUIRED for first-time setup)
- Log in to the AWS Management Console and navigate to Amazon Bedrock.
- In the left navigation pane, select Model access.
- Click the Manage model access button.
- Check the box next to Titan Text G1 - Lite (under the Amazon provider).
- Click Request model access at the bottom of the page.
- Wait for the Access status to change to `Access granted`.
📸 Screenshot: Model Access page showing "Access granted" next to Amazon Titan.
[!TIP] Amazon Titan models are typically granted instantly. Third-party models like Anthropic Claude may require additional use-case details to be submitted.
Step 2: Invoke a Foundation Model via CLI
Now that you have access, let's invoke the model to generate a response. We will pass a simple prompt asking the model to explain Generative AI in one short sentence.
```shell
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body '{"inputText": "Explain the concept of Generative AI in one short sentence.", "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0.5}}' \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  output.txt
```

▶ Console alternative
- In the Amazon Bedrock console, go to Playgrounds > Text.
- Click Select model and choose Amazon > Titan Text G1 - Lite.
- Type your prompt in the chat box.
- Click Run to see the generated response.
Step 3: Experiment with Inference Parameters
Generative AI models use parameters like temperature and topP to control the randomness and creativity of the output.
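Conceptually, temperature rescales token probabilities before sampling. The following minimal Python sketch illustrates the general technique (a softmax with a temperature divisor); it is an illustration of the idea, not Bedrock's internal implementation:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw token scores into probabilities, scaled by temperature."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate tokens
logits = [2.0, 1.0, 0.5]

low = softmax_with_temperature(logits, 0.1)   # near-deterministic: top token dominates
high = softmax_with_temperature(logits, 2.0)  # flatter distribution: more randomness

print(low[0], high[0])
```

Lowering the temperature toward 0 concentrates nearly all probability on the highest-scoring token, which is why the next step produces a more deterministic response.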
Run the model again, but this time set the temperature to 0.0 for a highly deterministic response.
```shell
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body '{"inputText": "Write a haiku about cloud computing.", "textGenerationConfig": {"temperature": 0.0}}' \
  --cli-binary-format raw-in-base64-out \
  output_deterministic.txt
```

Step 4: Consult Amazon Q Developer
Amazon Q Developer is your AI assistant for software development and AWS knowledge. Let's use it to understand the invoke-model command we just ran.
```shell
# If you have the Amazon Q CLI installed:
q "What does the --cli-binary-format raw-in-base64-out flag do in the AWS CLI?"
```

▶ Console alternative
- Look for the Amazon Q icon on the right-hand sidebar of the AWS Management Console.
- Open the chat panel.
- Ask: "Why do I need to use --cli-binary-format raw-in-base64-out when calling Amazon Bedrock from the CLI?"
- Review Amazon Q's response, which should explain that the flag tells the AWS CLI to treat the `--body` value as raw bytes instead of requiring it to be base64-encoded, so the JSON payload is sent to Bedrock as-is.
📸 Screenshot: Amazon Q chat panel with the response and source citations.
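To see what the flag is working around, the short sketch below (plain Python, standard library only) demonstrates the base64 round trip that the AWS CLI would otherwise expect for a binary `--body` parameter:

```python
import base64
import json

# The JSON payload from Step 2
payload = json.dumps({
    "inputText": "Explain the concept of Generative AI in one short sentence.",
    "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0.5},
})

# Without raw-in-base64-out, the CLI treats blob parameters as base64,
# so you would have to encode the body yourself before passing it:
encoded = base64.b64encode(payload.encode("utf-8")).decode("ascii")

# Decoding recovers the original JSON; the flag lets you skip this round trip
decoded = base64.b64decode(encoded).decode("utf-8")
assert json.loads(decoded) == json.loads(payload)
print(encoded[:24], "...")
```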
Checkpoints
Verify your progress by running the following checks:
Checkpoint 1: Read the Model Output
```shell
cat output.txt
```

Expected Result: A JSON response containing a `results` array with the generated text explaining Generative AI.
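If you want just the generated text rather than the full JSON envelope, a few lines of Python can extract it. The sample below is illustrative (your actual generated text will differ), but the `results[0].outputText` shape matches what Titan Text models return:

```python
import json

# Illustrative sample of a Titan Text response body; the text in
# your output.txt will differ.
sample = '''{
  "inputTextTokenCount": 12,
  "results": [
    {"tokenCount": 18,
     "outputText": "Generative AI creates new content from learned patterns.",
     "completionReason": "FINISH"}
  ]
}'''

body = json.loads(sample)
print(body["results"][0]["outputText"])
```

If you have `jq` installed, `jq -r '.results[0].outputText' output.txt` does the same extraction from the shell.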
Checkpoint 2: Verify Deterministic Output
```shell
cat output_deterministic.txt
```

Expected Result: A JSON response containing a short 3-line poem (haiku) about cloud computing.
Clean-Up / Teardown
Because Amazon Bedrock models are serverless and charged per-token, you are not charged for idle time. However, it is good practice to clean up your local files.
```shell
# Remove the generated output files
rm output.txt output_deterministic.txt

# Optional: Verify files are deleted (should report "No such file or directory")
ls -l output*.txt
```

[!WARNING] If you configured Provisioned Throughput for Amazon Bedrock (not covered in this basic lab), you must delete it in the console to avoid significant ongoing hourly charges.
Troubleshooting
| Common Error | Cause | Solution |
|---|---|---|
| `AccessDeniedException` | You did not request access to the Foundation Model. | Go to Bedrock Console > Model access and request access to Amazon Titan. |
| `ValidationException` | Malformed JSON in the `--body` parameter. | Ensure you are using single quotes around the entire JSON body and double quotes for keys/values. |
| `UnrecognizedClientException` | Your AWS CLI is not configured with valid credentials. | Run `aws configure` and enter your Access Key and Secret Key. |
| Could not connect to the endpoint URL | Bedrock may not be available in your default region. | Append `--region us-east-1` to your AWS CLI commands. |
Concept Review
| Feature | Amazon Bedrock | Amazon Q Developer |
|---|---|---|
| Primary Use Case | Building GenAI applications via APIs | Assisting developers with coding and AWS architecture |
| Interface | API, CLI, AWS Console Playgrounds | IDE Plugin, Terminal CLI, AWS Console Sidebar |
| Customization | Fine-tuning, RAG, Knowledge Bases | Organizational context, codebase indexing |
| Pricing Model | Pay per input/output token | Free tier available, Pro subscription per user |
Stretch Challenge
Want to test your skills? Try using Amazon Q Developer to write a Python script (using boto3) that automates Step 2. Then, run the Python script to invoke the amazon.titan-text-lite-v1 model without using the AWS CLI directly.
▶ Show solution

```python
import boto3
import json

# Create a Bedrock runtime client in a region where Bedrock is available
client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "inputText": "What are the benefits of AWS?",
    "textGenerationConfig": {"temperature": 0.7}
}

# Invoke the Titan Text Lite model with a JSON request body
response = client.invoke_model(
    modelId='amazon.titan-text-lite-v1',
    contentType='application/json',
    accept='application/json',
    body=json.dumps(payload)
)

# The response body is a streaming object; read and parse it before indexing
response_body = json.loads(response['body'].read())
print(response_body['results'][0]['outputText'])
```