Hands-On Lab: Core GenAI Concepts and Inference via Amazon Bedrock

Welcome to this guided lab on Core GenAI Concepts. In this lab, we will transition from generative AI theory to practice. You will interact with Foundation Models (FMs), manipulate inference parameters like temperature, and observe foundational concepts such as tokenization, prompt engineering, and nondeterminism using Amazon Bedrock.

Prerequisites

Before starting this lab, ensure you have the following:

  • An active AWS Account with AdministratorAccess or sufficient IAM permissions for Amazon Bedrock and S3.
  • The AWS CLI (aws) installed, updated, and configured with your credentials.
  • Basic familiarity with terminal/command-line operations.
  • Prior Knowledge: Understanding of AI/ML terminology, including tokens, embeddings, deep learning, and Foundation Models (FMs).

[!WARNING] Cost Estimate: This lab uses Amazon Bedrock on-demand inference and standard Amazon S3 storage. Completing this lab should cost less than $0.50 USD. Remember to run the teardown commands to avoid any ongoing storage charges.

Learning Objectives

Upon completing this lab, you will be able to:

  1. Construct structured prompts and successfully invoke a Foundation Model via the AWS CLI.
  2. Manipulate inference parameters (such as temperature and topP) to observe deterministic vs. nondeterministic (creative) model outputs.
  3. Implement a basic logging architecture using Amazon S3 to store generated AI outputs.
  4. Identify and troubleshoot common API and access limitations when working with Generative AI services.

Architecture Overview

The following diagram illustrates the flow of our GenAI inference architecture. You will send prompts containing specific inference parameters to Amazon Bedrock, which passes the tokens to a Transformer-based Large Language Model (LLM). The output is then saved to an S3 bucket for audit and review.


Conceptually, when you submit text, the Foundation Model tokenizes your input, converts each token into an embedding vector, passes those vectors through stacked Transformer layers, and predicts the output one token at a time.

Step-by-Step Instructions

Step 1: Create an S3 Bucket for Output Logs

We will first provision a simple S3 bucket to store our generated responses. This simulates a real-world MLOps pipeline where inputs and outputs are logged for model evaluation (e.g., checking for hallucinations).

```bash
aws s3 mb s3://brainybee-lab-genai-<YOUR_ACCOUNT_ID> --region us-east-1
```
📸 Console alternative
  1. Navigate to the S3 Console.
  2. Click Create bucket.
  3. Enter the bucket name: brainybee-lab-genai-<YOUR_ACCOUNT_ID>.
  4. Select region us-east-1.
  5. Leave all other settings as default and click Create bucket.

[!TIP] Replace <YOUR_ACCOUNT_ID> with your actual 12-digit AWS account number to ensure the bucket name is globally unique.

Step 2: Request Model Access in Amazon Bedrock

By default, foundation models are not universally enabled to protect users from accidental charges. We must explicitly request access to the Amazon Titan Text G1 - Lite model.

```bash
# Verification command (model access must be granted via the AWS Console first)
aws bedrock get-foundation-model --model-identifier amazon.titan-text-lite-v1
```
📸 Console step (Required for first-time use)
  1. Navigate to the Amazon Bedrock console in us-east-1.
  2. In the left navigation pane, scroll down and select Model access.
  3. Click Manage model access in the top right.
  4. Check the box next to Titan Text G1 - Lite (under the Amazon provider).
  5. Click Save changes at the bottom. Access is usually granted instantly.

Step 3: Craft Your First Prompt (Zero-Shot)

Let's test the concept of Prompt Engineering. We will create a configuration file that dictates the model's behavior. We are keeping the temperature at 0.0 to ensure a highly deterministic, predictable output.

Create a file named prompt-params.json with the following content:

```bash
cat <<EOF > prompt-params.json
{
  "inputText": "Explain the difference between Machine Learning and Deep Learning in one simple sentence.",
  "textGenerationConfig": {
    "maxTokenCount": 100,
    "stopSequences": [],
    "temperature": 0.0,
    "topP": 0.9
  }
}
EOF
```

[!TIP] Tokens vs. Words: Notice maxTokenCount. A token is the smallest unit of text the model processes. 100 tokens is roughly 75 words.
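To get a feel for the tokens-vs-words relationship before you spend real tokens, here is a minimal budgeting sketch. It uses the rough 0.75-words-per-token heuristic from the tip above; `estimate_tokens` is a helper of our own, and the model's actual tokenizer may split text quite differently.

```python
# Rough token estimate for budgeting maxTokenCount.
# Heuristic only (~0.75 words per token for English text); the model's
# real tokenizer can produce a noticeably different count.
import math

def estimate_tokens(text: str) -> int:
    """Approximate the token count from the whitespace word count."""
    words = len(text.split())
    return math.ceil(words / 0.75)

prompt = ("Explain the difference between Machine Learning "
          "and Deep Learning in one simple sentence.")
print(estimate_tokens(prompt))  # well under our maxTokenCount of 100
```

If your prompt plus the expected response approaches the model's context window, this kind of estimate tells you early that you need to trim the input or raise `maxTokenCount`.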

Step 4: Invoke the Foundation Model

Now, we will send this structured prompt to the Amazon Titan model using the Bedrock Runtime API and save the response to output.json.

```bash
aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt-params.json \
  --cli-binary-format raw-in-base64-out \
  --accept application/json \
  --content-type application/json \
  output.json
```
📸 Console alternative
  1. Navigate to Amazon Bedrock > Playgrounds > Text.
  2. Select Amazon as the category and Titan Text G1 - Lite as the model.
  3. Paste the prompt: Explain the difference between Machine Learning and Deep Learning in one simple sentence.
  4. Set Temperature to 0.0 and Maximum length (tokens) to 100.
  5. Click Run.
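If you prefer Python over the CLI, the same invocation can be sketched with the boto3 SDK. The request body mirrors prompt-params.json; `build_titan_body` and `invoke_titan` are helper names of our own, and actually calling `invoke_titan` requires boto3 installed plus AWS credentials with Bedrock access.

```python
# Sketch: the same Titan invocation via boto3 instead of the AWS CLI.
import json

def build_titan_body(prompt: str, max_tokens: int = 100,
                     temperature: float = 0.0, top_p: float = 0.9) -> str:
    """Serialize the Titan Text request body (same shape as prompt-params.json)."""
    return json.dumps({
        "inputText": prompt,
        "textGenerationConfig": {
            "maxTokenCount": max_tokens,
            "stopSequences": [],
            "temperature": temperature,
            "topP": top_p,
        },
    })

def invoke_titan(prompt: str) -> str:
    """Call the Bedrock Runtime API (needs valid AWS credentials to run)."""
    import boto3  # imported here so the body builder works without boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    response = client.invoke_model(
        modelId="amazon.titan-text-lite-v1",
        body=build_titan_body(prompt),
        accept="application/json",
        contentType="application/json",
    )
    payload = json.loads(response["body"].read())
    return payload["results"][0]["outputText"]

print(build_titan_body("Hello"))
```

Building the body in code rather than a static JSON file makes it easy to sweep parameters (for example, several temperatures) in a loop.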

Step 5: Review Output and Upload to S3

Let's read the JSON response to see what the model generated, then store it in our bucket for safekeeping.

```bash
cat output.json

# Upload the result to S3
aws s3 cp output.json s3://brainybee-lab-genai-<YOUR_ACCOUNT_ID>/logs/output-run1.json
```

[!NOTE] If you look at the JSON, you will see the generated text in the outputText field (inside the results array), along with metadata such as inputTextTokenCount and the output tokenCount. Tracking these tokens is essential because token usage drives the cost of GenAI applications.
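Those token counts map directly to cost. The rates below are hypothetical placeholders, not real Bedrock pricing (always check the current pricing page for your model and region); the arithmetic is what matters:

```python
# Back-of-the-envelope inference cost from the token counts in the response.
# The rates are HYPOTHETICAL placeholders, not real Amazon Bedrock pricing.
INPUT_RATE_PER_1K = 0.0003   # assumed $ per 1,000 input tokens
OUTPUT_RATE_PER_1K = 0.0004  # assumed $ per 1,000 output tokens

def invocation_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one invocation given the token counts from the response JSON."""
    return (input_tokens / 1000) * INPUT_RATE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_RATE_PER_1K

# e.g., 20 input tokens and 80 output tokens:
print(f"${invocation_cost(20, 80):.6f}")
```

Multiplying this per-call figure by expected request volume is how teams budget production GenAI workloads.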

Step 6: Explore Nondeterminism (High Temperature)

Generative AI models are fundamentally probabilistic. By changing the temperature parameter, we alter how the model selects the next token. A high temperature flattens the probability distribution, leading to more creative, nondeterministic outputs (which also increases the risk of hallucinations).
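To see why temperature "flattens" the distribution, here is a minimal sketch of temperature-scaled softmax over a toy next-token distribution. The logits are invented for illustration; real models score tens of thousands of vocabulary entries, but the math is the same.

```python
# How temperature reshapes the next-token probability distribution.
import math

def softmax_with_temperature(logits, temperature):
    """Softmax over logits after dividing each by the temperature."""
    scaled = [l / temperature for l in logits]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate tokens

for t in (0.1, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: {[round(p, 3) for p in probs]}")
# Low T concentrates probability mass on the top token (near-deterministic);
# high T spreads it across candidates (more creative, more hallucination risk).
```

At temperature 0.0, as in Step 3, sampling effectively collapses to always picking the highest-scoring token, which is why those outputs are repeatable.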

Let's modify the prompt to encourage a creative response with temperature: 0.9.

```bash
cat <<EOF > prompt-creative.json
{
  "inputText": "Write a short, creative slogan for a company that sells AI-powered coffee mugs.",
  "textGenerationConfig": {
    "maxTokenCount": 50,
    "temperature": 0.9,
    "topP": 1.0
  }
}
EOF

aws bedrock-runtime invoke-model \
  --model-id amazon.titan-text-lite-v1 \
  --body file://prompt-creative.json \
  --cli-binary-format raw-in-base64-out \
  --accept application/json \
  --content-type application/json \
  creative-output.json

cat creative-output.json
```

Run this command multiple times. You will notice that the slogan changes with each execution due to the high temperature.

Checkpoints

Validate your progress by running these verification commands:

  1. Check Bedrock Model Availability: Ensure you have access to models in your region.

    ```bash
    aws bedrock list-foundation-models --by-provider Amazon --query "modelSummaries[?modelId=='amazon.titan-text-lite-v1'].modelId" --output text
    ```

    Expected Output: amazon.titan-text-lite-v1

  2. Verify S3 Log Creation: Check that your initial output was successfully saved.

    ```bash
    aws s3 ls s3://brainybee-lab-genai-<YOUR_ACCOUNT_ID>/logs/
    ```

    Expected Output: ... output-run1.json

Troubleshooting

| Error / Issue | Probable Cause | Fix |
| --- | --- | --- |
| AccessDeniedException | IAM user lacks Bedrock permissions, or the End-User License Agreement (EULA) has not been accepted for the model. | Go to the Bedrock Console > Model access and request access to Titan Text G1 - Lite. Ensure your IAM user has AmazonBedrockFullAccess. |
| InvalidParameterException | The JSON payload has syntax errors or unsupported parameters for the specific model. | Double-check prompt-params.json. Different models (e.g., Claude vs. Titan) require different JSON schema structures. |
| ExpiredToken | AWS CLI credentials have expired. | Re-authenticate your CLI session (e.g., aws sso login or update access keys). |
| Bucket name already exists | S3 bucket names must be globally unique across all AWS customers. | Append random numbers or your account ID to the bucket name. |

Clean-Up / Teardown

[!WARNING] Remember to run the teardown commands to avoid ongoing charges. While Bedrock models incur charges only per invocation (token-based pricing), S3 buckets incur storage charges over time.

Run the following commands to delete ALL provisioned resources and local files:

```bash
# 1. Empty the S3 bucket
aws s3 rm s3://brainybee-lab-genai-<YOUR_ACCOUNT_ID> --recursive

# 2. Delete the S3 bucket
aws s3 rb s3://brainybee-lab-genai-<YOUR_ACCOUNT_ID>

# 3. Clean up local lab files
rm prompt-params.json prompt-creative.json output.json creative-output.json
```

Note: You do not need to "turn off" the Bedrock Foundation Model. It is a fully managed serverless API, so you are only billed when you actively invoke it.
