
Hands-On Lab: Effective Prompt Engineering Techniques with Amazon Bedrock

Prerequisites

Before starting this lab, ensure you have the following ready:

  • AWS Account with administrator or power-user privileges.
  • AWS CLI installed and configured (aws configure) with valid access keys.
  • Amazon Bedrock Model Access granted for Meta Llama 3 8B Instruct (or equivalent text model) in your chosen region (e.g., us-east-1).
  • Basic familiarity with using the command-line interface (CLI) or the AWS Management Console.

Learning Objectives

By completing this lab, you will be able to:

  1. Construct and execute Zero-Shot and Few-Shot prompts using Amazon Bedrock.
  2. Apply model-specific syntax (such as Meta's role tokens) to guide Foundation Models (FMs).
  3. Optimize prompts for brevity to reduce token consumption and computational costs.
  4. Understand the anatomy of a prompt: Instructions, Context, Input Data, and Output.

Architecture Overview

This lab demonstrates interacting directly with Foundation Models hosted on Amazon Bedrock using structured prompts.


The Anatomy of a Prompt

Effective prompt engineering requires structuring your inputs logically so the model understands exactly what to process.

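The four parts can be assembled programmatically. The following is a minimal sketch; the `build_prompt` helper and its field names are our own illustration, not part of any Bedrock API:

```python
# Illustrative helper that combines the four anatomical parts of a prompt
# (Instructions, Context, Input Data, Output cue) into one string.
def build_prompt(instructions: str, context: str, input_data: str, output_cue: str) -> str:
    return "\n\n".join([instructions, context, input_data, output_cue])

prompt = build_prompt(
    instructions="Classify the sentiment of the review as positive, negative, or neutral.",
    context="The review is from a customer of a B2B software product.",
    input_data="Review: 'The recent update significantly improved our workflow.'",
    output_cue="Sentiment:",
)
print(prompt)
```

Ending the prompt with an output cue like `Sentiment:` nudges the model to complete with just the answer.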

Step-by-Step Instructions

Step 1: Verify Model Access and Execute a Zero-Shot Prompt

Zero-shot prompting involves providing instructions without any examples, relying on the LLM's pre-trained knowledge. We will test a simple sentiment analysis task.

```bash
# Verify your CLI is pointing to the correct region where you have model access
export AWS_DEFAULT_REGION=<YOUR_REGION>

# Invoke the Llama 3 model with a basic zero-shot prompt
aws bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --body '{"prompt": "Assess the sentiment of the following customer review and classify it as positive, negative, or neutral: The recent update to the software has significantly improved our workflow efficiency.", "max_gen_len": 128, "temperature": 0.1}' \
  --cli-binary-format raw-in-base64-out \
  --accept "application/json" \
  --content-type "application/json" \
  step1-output.json

cat step1-output.json
```

📸 Screenshot: The terminal output showing the generated JSON response containing the word "Positive".
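If you want to pull just the completion out of the saved file, you can parse the response body. Llama models on Bedrock return the completion in a `generation` field alongside token counts; the sketch below assumes that response schema:

```python
import json

# Parse the response body saved by invoke-model and return the model's text.
# Llama models on Bedrock put the completion in a "generation" field.
def extract_generation(raw: str) -> str:
    body = json.loads(raw)
    return body["generation"].strip()

# Sample payload shaped like a real Llama response (values are illustrative).
sample = '{"generation": "\\nPositive", "prompt_token_count": 42, "generation_token_count": 3, "stop_reason": "stop"}'
print(extract_generation(sample))  # → Positive
```

In practice you would read `step1-output.json` from disk and pass its contents to `extract_generation`.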

Console alternative
  1. Navigate to Amazon Bedrock in the AWS Console.
  2. In the left navigation pane, under Playgrounds, select Text.
  3. Click Select model and choose Meta > Llama 3 8B Instruct.
  4. Paste the prompt text into the text box and click Run.

[!TIP] Notice that in zero-shot prompting, the model might include conversational filler like "The sentiment of this review is..." instead of just returning the requested category.

Step 2: Model-Specific Syntax and Role Prompting

Different models have specific features and syntax. For example, Meta Llama models utilize specific tags to denote system, user, and assistant roles. We will format the prompt to speak the model's native language.
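A small helper makes the template reusable. This sketch mirrors the role-token layout used in the Step 2 payload below; the function name is our own:

```python
# Assemble a Llama 3 Instruct prompt using Meta's role tokens, matching
# the single-turn system/user/assistant layout used in this lab.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>"
    )

prompt = llama3_prompt(
    system="Output ONLY the classification category without any conversational text.",
    user="Assess the sentiment of this review: 'The update improved our workflow.'",
)
```

Ending with the assistant header leaves the model positioned to generate the assistant turn directly, which is why the answer comes back without filler.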

```bash
# Create a JSON payload file with the exact Meta model syntax
cat << 'EOF' > step2-payload.json
{
  "prompt": "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\nYou are a helpful customer service assistant. Output ONLY the classification category without any conversational text.<|eot_id|><|start_header_id|>user<|end_header_id|>\nAssess the sentiment of the following customer review and classify it as positive, negative, or neutral: The recent update to the software has significantly improved our workflow efficiency.<|eot_id|><|start_header_id|>assistant<|end_header_id|>",
  "max_gen_len": 128,
  "temperature": 0.1
}
EOF

# Invoke the model using the payload file
aws bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --body file://step2-payload.json \
  --cli-binary-format raw-in-base64-out \
  step2-output.json

cat step2-output.json
```

Console alternative
  1. In the Text Playground, ensure Llama 3 is selected.
  2. Instead of typing in the main box, locate the System prompt configuration (usually hidden under an expanding menu or in Chat playgrounds).
  3. Add the instruction "Output ONLY the classification category..." there.
  4. Put the review text in the user prompt area and click Run.

Step 3: Few-Shot Prompting

Few-shot prompting provides a small number of examples to guide the LLM's response format and logic. This adapts the model without expensive fine-tuning.
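Few-shot prompts follow a mechanical pattern, so they are easy to generate from a list of example pairs. This helper mirrors the Step 3 payload below; the function name and signature are illustrative:

```python
# Build a few-shot classification prompt from (input, label) example pairs,
# following the Input:/Output: pattern used in Step 3.
def few_shot_prompt(task: str, examples: list[tuple[str, str]], query: str) -> str:
    shots = "\n\n".join(f"Input: '{inp}'\nOutput: {out}" for inp, out in examples)
    return f"{task}\n\nExamples:\n{shots}\n\nNow classify this:\nInput: '{query}'\nOutput:"

prompt = few_shot_prompt(
    task="Classify the feedback into categories: Billing, Technical Support, or General Inquiry.",
    examples=[
        ("I am having trouble updating my payment method.", "Billing"),
        ("The app crashes whenever I try to open it.", "Technical Support"),
    ],
    query="I was double-charged on my credit card this month.",
)
```

Keeping every example in the same `Input:`/`Output:` shape is what teaches the model the response format.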

```bash
cat << 'EOF' > step3-payload.json
{
  "prompt": "Classify the feedback into categories: Billing, Technical Support, or General Inquiry.\n\nExamples:\nInput: 'I am having trouble updating my payment method.'\nOutput: Billing\n\nInput: 'The app crashes whenever I try to open it.'\nOutput: Technical Support\n\nInput: 'Do you offer annual subscriptions?'\nOutput: General Inquiry\n\nNow classify this:\nInput: 'I was double-charged on my credit card this month.'\nOutput:",
  "max_gen_len": 50,
  "temperature": 0
}
EOF

aws bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --body file://step3-payload.json \
  --cli-binary-format raw-in-base64-out \
  step3-output.json

cat step3-output.json
```

Step 4: Token Optimization for Cost Reduction

LLMs charge based on the number of tokens processed (input + output). Verbose prompts increase costs rapidly. We will optimize a verbose prompt into a concise one to save tokens while maintaining performance.
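You can estimate the savings before sending anything. Exact counts depend on the model's tokenizer, but a rough four-characters-per-token heuristic is a common back-of-the-envelope estimate; the numbers below are illustrative only:

```python
# Rough size comparison of the verbose vs. concise prompts from this step.
# Real token counts come from the model's tokenizer; ~4 chars/token is only
# a back-of-the-envelope heuristic.
verbose = ("Could you please be so kind as to read the following text and let me "
           "know if it sounds like the person is happy, sad, or just neutral? "
           "The text is: 'I love this product!' Thank you very much in advance "
           "for your help.")
concise = "Classify sentiment (Positive/Negative/Neutral): 'I love this product!'"

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

savings = 1 - approx_tokens(concise) / approx_tokens(verbose)
print(f"verbose≈{approx_tokens(verbose)} tokens, concise≈{approx_tokens(concise)} tokens, ~{savings:.0%} fewer")
```

Multiplied across millions of invocations, even modest per-prompt reductions compound into real cost savings.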

```bash
# Verbose prompt (high token usage):
# "Could you please be so kind as to read the following text and let me know
#  if it sounds like the person is happy, sad, or just neutral? The text is:
#  'I love this product!' Thank you very much in advance for your help."

# Concise prompt (low token usage)
cat << 'EOF' > step4-payload.json
{
  "prompt": "Classify sentiment (Positive/Negative/Neutral): 'I love this product!'",
  "max_gen_len": 10,
  "temperature": 0.1
}
EOF

aws bedrock-runtime invoke-model \
  --model-id meta.llama3-8b-instruct-v1:0 \
  --body file://step4-payload.json \
  --cli-binary-format raw-in-base64-out \
  step4-output.json

cat step4-output.json
```

[!IMPORTANT] Simplifying a prompt can reduce token counts drastically (e.g., by 43%), saving significant costs at scale without compromising the Foundation Model's performance.

Checkpoints

After completing the steps above, run the following verification checks:

  1. Verify Step 1 Output:

    ```bash
    cat step1-output.json | grep -i "positive"
    ```

    Expected Result: You should see the word "positive" in the JSON string output.

  2. Verify Step 3 Few-Shot Output:

    ```bash
    cat step3-output.json | grep -i "Billing"
    ```

    Expected Result: The model should strictly return "Billing" based on the pattern established in your few-shot examples.

Troubleshooting

| Error Message | Likely Cause | Solution |
| --- | --- | --- |
| `AccessDeniedException` | IAM user lacks Bedrock permissions, or the End-User License Agreement (EULA) has not been accepted for the model. | Navigate to Bedrock > Model access in the Console and request access to the Meta Llama 3 model. Ensure your IAM user has `bedrock:InvokeModel`. |
| `ValidationException` | Malformed JSON payload, often caused by unescaped quotes or newlines in the CLI body. | Use the `file://` method demonstrated in Step 2 to pass complex JSON payloads instead of inline strings. |
| `ThrottlingException` | Exceeded the API rate limit for Bedrock invocations. | Wait a few seconds and retry. In a production environment, implement exponential backoff. |
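For the throttling case, a generic exponential-backoff wrapper looks like the sketch below. `call_model` stands in for your actual invoke call, and `RuntimeError` stands in for the SDK's throttling exception; both are placeholders, not real Bedrock APIs:

```python
import random
import time

# Retry a model invocation with exponential backoff and full jitter.
# call_model is a placeholder for your real invoke call; RuntimeError
# stands in for the SDK's ThrottlingException.
def invoke_with_backoff(call_model, max_attempts: int = 5, base_delay: float = 1.0):
    for attempt in range(max_attempts):
        try:
            return call_model()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error
            # Sleep a random fraction of a doubling window (full jitter).
            time.sleep(random.uniform(0, base_delay * 2 ** attempt))
```

Jitter spreads retries from concurrent clients over time, which avoids synchronized retry storms against the same rate limit.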

Clean-Up / Teardown

[!WARNING] Amazon Bedrock charges per token processed. While there are no persistent resources (like instances or endpoints) to delete for standard on-demand model invocations, you should clean up your local files and optionally revoke model access to prevent accidental usage.

  1. Remove Local Files:

     ```bash
     rm step1-output.json step2-payload.json step2-output.json step3-payload.json step3-output.json step4-payload.json step4-output.json
     ```
  2. Revoke Model Access (Optional):
    • Navigate to Amazon Bedrock > Model access in the AWS Console.
    • Click Modify model access.
    • Uncheck the model(s) you enabled for this lab and click Save changes.
