
Hands-On Lab: Getting Started with Amazon Bedrock and Amazon Q Developer


Welcome to this guided hands-on lab! In this session, you will explore AWS's primary generative AI platforms. You will learn how to enable and interact with Foundation Models (FMs) using Amazon Bedrock, understand the impact of inference parameters like Temperature, and use Amazon Q Developer as your intelligent coding and AWS assistant.

Prerequisites

Before you begin, ensure you have the following:

  • An active AWS Account with Administrator or PowerUser access.
  • The AWS CLI installed and configured on your local machine (aws configure).
  • IAM Permissions allowing AmazonBedrockFullAccess.
  • Basic familiarity with JSON and terminal commands.

[!WARNING] Some Amazon Bedrock Foundation Models incur charges based on the number of input and output tokens processed. Remember to follow the teardown instructions to remove any files, though simply having model access enabled does not accrue hourly charges.
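To make the per-token billing model concrete, here is a minimal cost sketch. The rates below are hypothetical placeholders, not real Bedrock prices; always check the Amazon Bedrock pricing page for current on-demand rates.

```python
# Hypothetical per-1K-token rates for illustration only --
# look up real prices on the Amazon Bedrock pricing page.
INPUT_RATE_PER_1K = 0.0003
OUTPUT_RATE_PER_1K = 0.0004

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate on-demand cost in USD for a single model invocation."""
    return ((input_tokens / 1000) * INPUT_RATE_PER_1K
            + (output_tokens / 1000) * OUTPUT_RATE_PER_1K)

# A 20-token prompt that produces a 50-token response:
print(f"${estimate_cost(20, 50):.6f}")
```

The key takeaway: you pay per invocation based on tokens processed, so idle time costs nothing, but long prompts and long responses both add up.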

Learning Objectives

By completing this lab, you will be able to:

  1. Request and configure access to Foundation Models in Amazon Bedrock.
  2. Invoke a generative AI model directly via the AWS CLI to generate text.
  3. Adjust inference parameters (Temperature and Top P) to control model output.
  4. Leverage Amazon Q Developer to ask AWS-specific architectural questions.

Architecture Overview

The following diagram illustrates how you will interact with both Amazon Bedrock and Amazon Q during this lab.

📸 Diagram: Lab interaction flow between the AWS CLI, Amazon Bedrock, and Amazon Q Developer.

Step-by-Step Instructions

Step 1: Request Model Access in Amazon Bedrock

Before you can use a Foundation Model in Amazon Bedrock, you must explicitly request access to it. This ensures you review and accept the End User License Agreement (EULA) for the specific model provider.

```bash
# Note: Due to EULA acceptance requirements, model access
# must initially be requested via the AWS Console.
aws bedrock list-foundation-models \
  --by-provider Amazon \
  --query "modelSummaries[*].modelId"
```
Console alternative (REQUIRED for first-time setup)
  1. Log in to the AWS Management Console and navigate to Amazon Bedrock.
  2. In the left navigation pane, select Model access.
  3. Click the Manage model access button.
  4. Check the box next to Titan Text G1 - Lite (under the Amazon provider).
  5. Click Request model access at the bottom of the page.
  6. Wait for the Access status to change to Access granted.

📸 Screenshot: Model Access page showing "Access granted" next to Amazon Titan.

[!TIP] Amazon Titan models are typically granted instantly. Third-party models like Anthropic Claude may require additional use-case details to be submitted.

Step 2: Invoke a Foundation Model via CLI

Now that you have access, let's invoke the model to generate a response. We will pass a simple prompt asking the model to explain Generative AI in one short sentence.

```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body '{"inputText": "Explain the concept of Generative AI in one short sentence.", "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0.5}}' \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  output.txt
```
Console alternative
  1. In the Amazon Bedrock console, go to Playgrounds > Text.
  2. Click Select model and choose Amazon > Titan Text G1 - Lite.
  3. Type your prompt in the chat box.
  4. Click Run to see the generated response.

Step 3: Experiment with Inference Parameters

Generative AI models use parameters like temperature and topP to control the randomness and creativity of the output.

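To build intuition for what these parameters do, here is a toy sampler. This is not Bedrock's actual implementation; it is a sketch of the standard temperature and top-p (nucleus) sampling idea over a tiny, made-up token distribution.

```python
import math
import random

def sample(logits: dict, temperature: float = 1.0, top_p: float = 1.0) -> str:
    """Pick a token from a {token: logit} dict using temperature and top-p."""
    if temperature == 0.0:
        # Greedy decoding: always return the highest-scoring token.
        return max(logits, key=logits.get)
    # Lower temperature sharpens the distribution; higher flattens it.
    weights = {t: math.exp(l / temperature) for t, l in logits.items()}
    total = sum(weights.values())
    probs = {t: w / total for t, w in weights.items()}
    # Top-p: keep the smallest set of tokens whose cumulative probability
    # reaches top_p, then renormalize and sample from that set.
    kept, cumulative = {}, 0.0
    for token, p in sorted(probs.items(), key=lambda kv: -kv[1]):
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    r = random.random() * sum(kept.values())
    for token, p in kept.items():
        r -= p
        if r <= 0:
            return token
    return token

toy_logits = {"cloud": 2.0, "sky": 1.0, "rain": 0.1}
print(sample(toy_logits, temperature=0.0))  # always "cloud" (deterministic)
```

With `temperature=0.0` the output is fully deterministic, which is exactly what you will observe in the next command; a very small `top_p` has a similar narrowing effect because only the most probable tokens survive the cutoff.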

Run the model again, but this time set the temperature to 0.0 for a highly deterministic response.

```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body '{"inputText": "Write a haiku about cloud computing.", "textGenerationConfig": {"temperature": 0.0}}' \
  --cli-binary-format raw-in-base64-out \
  output_deterministic.txt
```

Step 4: Consult Amazon Q Developer

Amazon Q Developer is your AI assistant for software development and AWS knowledge. Let's use it to understand the invoke-model command we just ran.

```bash
# If you have the Amazon Q CLI installed:
q "What does the --cli-binary-format raw-in-base64-out flag do in the AWS CLI?"
```
Console alternative
  1. Look for the Amazon Q icon on the right-hand sidebar of the AWS Management Console.
  2. Open the chat panel.
  3. Ask: "Why do I need to use --cli-binary-format raw-in-base64-out when calling Amazon Bedrock from the CLI?"
  4. Review Amazon Q's response, which should explain that AWS CLI v2 treats binary blob parameters as base64-encoded by default; this flag tells the CLI to accept your raw JSON body as-is (and to base64-encode binary output), so the payload reaches Bedrock intact.

📸 Screenshot: Amazon Q chat panel with the response and source citations.
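To see why the flag matters, the sketch below shows the base64 round-trip that AWS CLI v2 otherwise expects for blob parameters; `raw-in-base64-out` lets you skip the manual encoding step.

```python
import base64
import json

# The same request body used in Step 2.
body = json.dumps({
    "inputText": "Explain the concept of Generative AI in one short sentence.",
    "textGenerationConfig": {"maxTokenCount": 50, "temperature": 0.5}
})

# Without raw-in-base64-out, AWS CLI v2 treats blob parameters as
# base64-encoded, so you would have to encode the JSON yourself:
encoded = base64.b64encode(body.encode("utf-8")).decode("ascii")
print(encoded)

# The round-trip is lossless -- the flag simply removes this extra step
# by accepting the raw JSON directly.
assert base64.b64decode(encoded).decode("utf-8") == body
```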

Checkpoints

Verify your progress by running the following checks:

Checkpoint 1: Read the Model Output

```bash
cat output.txt
```

Expected Result: A JSON response containing a results array with the generated text explaining Generative AI.
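If you prefer to pull out just the generated text, a small parsing helper works too. It is shown here against a synthetic response so it runs standalone; the field layout matches the `results[0].outputText` structure Titan returns.

```python
import json

def extract_text(raw: str) -> str:
    # Titan text models return generated text under results[0].outputText.
    return json.loads(raw)["results"][0]["outputText"]

# Synthetic example; in the lab you would read the real file instead:
#   with open("output.txt") as f:
#       print(extract_text(f.read()))
sample_response = json.dumps({
    "inputTextTokenCount": 12,
    "results": [{"outputText": "Generative AI creates new content.",
                 "completionReason": "FINISH"}]
})
print(extract_text(sample_response))
```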

Checkpoint 2: Verify Deterministic Output

```bash
cat output_deterministic.txt
```

Expected Result: A JSON response containing a short 3-line poem (haiku) about cloud computing.

Clean-Up / Teardown

Because Amazon Bedrock models are serverless and charged per-token, you are not charged for idle time. However, it is good practice to clean up your local files.

```bash
# Remove the generated output files
rm output.txt output_deterministic.txt

# Optional: Verify the files are deleted
# (should report "No such file or directory")
ls -l output*.txt
```

[!WARNING] If you configured Provisioned Throughput for Amazon Bedrock (not covered in this basic lab), you must delete it in the console to avoid significant ongoing hourly charges.

Troubleshooting

| Common Error | Cause | Solution |
| --- | --- | --- |
| `AccessDeniedException` | You did not request access to the Foundation Model. | Go to Bedrock Console > Model access and request access to Amazon Titan. |
| `ValidationException` | Malformed JSON in the `--body` parameter. | Ensure you are using single quotes around the entire JSON body and double quotes for keys/values. |
| `UnrecognizedClientException` | Your AWS CLI is not configured with valid credentials. | Run `aws configure` and input your Access Key and Secret Key. |
| `Could not connect to the endpoint URL` | Bedrock might not be supported in your default region. | Append `--region us-east-1` to your AWS CLI commands. |

Concept Review

| Feature | Amazon Bedrock | Amazon Q Developer |
| --- | --- | --- |
| Primary Use Case | Building GenAI applications via APIs | Assisting developers with coding and AWS architecture |
| Interface | API, CLI, AWS Console Playgrounds | IDE Plugin, Terminal CLI, AWS Console Sidebar |
| Customization | Fine-tuning, RAG, Knowledge Bases | Organizational context, codebase indexing |
| Pricing Model | Pay per input/output token | Free tier available, Pro subscription per user |

Stretch Challenge

Want to test your skills? Try using Amazon Q Developer to write a Python script (using boto3) that automates Step 2. Then, run the Python script to invoke the amazon.titan-text-lite-v1 model without using the AWS CLI directly.

Show solution

```python
import boto3
import json

# Create a Bedrock runtime client in a region where Bedrock is available
client = boto3.client('bedrock-runtime', region_name='us-east-1')

payload = {
    "inputText": "What are the benefits of AWS?",
    "textGenerationConfig": {"temperature": 0.7}
}

response = client.invoke_model(
    modelId='amazon.titan-text-lite-v1',
    contentType='application/json',
    accept='application/json',
    body=json.dumps(payload)
)

# The response body is a streaming object; read and decode it
response_body = json.loads(response['body'].read())
print(response_body['results'][0]['outputText'])
```
