Hands-On Lab: Exploring Practical AI Use Cases with AWS Managed Services
Prerequisites
Before starting this lab, ensure you have the following in place:
- AWS Account: An active AWS account (free-tier eligibility is sufficient for this lab).
- AWS CLI: Installed and configured (`aws configure`) with an IAM user that has access to AWS AI services.
- IAM Permissions: Your user needs permissions for `comprehend:DetectSentiment`, `comprehend:DetectEntities`, and `rekognition:DetectLabels`.
- Terminal: A standard bash/zsh/PowerShell terminal.
- Prior Knowledge: Basic understanding of terminal commands and cloud computing concepts.
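The three permissions listed above can be granted with a minimal identity-based policy. The statement below is a sketch; these particular actions do not support resource-level restrictions, so `Resource` is `*`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "comprehend:DetectSentiment",
        "comprehend:DetectEntities",
        "rekognition:DetectLabels"
      ],
      "Resource": "*"
    }
  ]
}
```

Attach this policy to the IAM user or role you configured with `aws configure`.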
Learning Objectives
By completing this 30-minute guided lab, you will be able to:
- Identify practical use cases for AI across different domains (Customer Service, Finance, Manufacturing).
- Apply Natural Language Processing (NLP) using Amazon Comprehend to extract sentiment and entities from text.
- Apply Computer Vision using Amazon Rekognition to detect labels in an image for inventory tracking.
- Determine when to use AI vs. traditional software based on the need for probabilistic versus deterministic outcomes.
Architecture Overview
In this lab, you will act as a client application (via the CLI or Console) interacting directly with fully managed AWS AI services. These API-driven services require no underlying infrastructure management.
Step-by-Step Instructions
Step 1: Customer Service - Analyzing Text Sentiment
Use Case: Chatbots handle routine inquiries, but when a customer expresses high frustration, the system should escalate to a human agent. We will use Natural Language Processing (NLP) to determine customer sentiment.
We will use Amazon Comprehend to analyze a simulated customer feedback message.
```shell
aws comprehend detect-sentiment \
  --language-code en \
  --text 'I have been trying to reset my password for three days and the system keeps crashing. This is completely unacceptable!'
```

> [!TIP]
> Notice that the output is probabilistic. The AI model returns confidence scores for `POSITIVE`, `NEGATIVE`, `NEUTRAL`, and `MIXED` rather than a simple true/false.
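The escalation rule from the use case can be sketched in shell. This assumes the `jq` JSON processor is installed; the saved response below is hand-written for illustration (a live call to `aws comprehend detect-sentiment` returns the same shape with different scores):

```shell
# Save an illustrative Comprehend response (scores here are made up for
# the example; a live call returns the same structure).
cat <<'EOF' > response.json
{
  "Sentiment": "NEGATIVE",
  "SentimentScore": {
    "Positive": 0.0011,
    "Negative": 0.9871,
    "Neutral": 0.0098,
    "Mixed": 0.0020
  }
}
EOF

# Pull out the dominant sentiment and its negative-confidence score.
sentiment=$(jq -r '.Sentiment' response.json)
negative=$(jq -r '.SentimentScore.Negative' response.json)

# Escalate to a human agent when the model is highly confident the
# customer is frustrated (the 0.8 threshold is an arbitrary example).
if [ "$sentiment" = "NEGATIVE" ] && awk "BEGIN { exit !($negative > 0.8) }"; then
  echo "ESCALATE: route this conversation to a human agent"
fi
```

In a real pipeline you would pipe the live CLI output into `jq` instead of a saved file; the threshold is the knob that trades off false escalations against missed ones.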
▶Console alternative
- Log into the AWS Management Console.
- Search for and select Amazon Comprehend.
- In the left navigation pane, choose Real-time analysis.
- Scroll down to the Input text box and paste the sample sentence.
- Click Analyze.
- View the Sentiment tab below to see the results.
📸 Screenshot: Look for the "Results" panel showing the dominant sentiment and confidence score bars.
Step 2: Finance - Automated Data Entry via Entity Recognition
Use Case: AI models extract information from financial documents or unstructured text, reducing manual errors and speeding up data processing.
We will extract key financial entities (like Organizations, Dates, and Quantities) from an unstructured string of text.
```shell
aws comprehend detect-entities \
  --language-code en \
  --text 'On October 24th, 2023, AnyCompany Corp purchased 500 server units from Amazon Web Services for a total of $45,000.'
```

> [!NOTE]
> In the JSON output, look for the `Type` field. You should see `ORGANIZATION` (AnyCompany Corp, Amazon Web Services), `DATE` (October 24th, 2023), and `QUANTITY` (500 server units, $45,000).
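To see how this feeds automated data entry, you can filter the `detect-entities` response by entity type, again assuming `jq` is installed. The saved response below is hand-written to mirror the shape of the live output (scores are illustrative):

```shell
# Illustrative detect-entities response (Score values are made up; a
# live call returns the same structure).
cat <<'EOF' > entities.json
{
  "Entities": [
    { "Text": "October 24th, 2023",  "Type": "DATE",         "Score": 0.99 },
    { "Text": "AnyCompany Corp",     "Type": "ORGANIZATION", "Score": 0.98 },
    { "Text": "500 server units",    "Type": "QUANTITY",     "Score": 0.97 },
    { "Text": "Amazon Web Services", "Type": "ORGANIZATION", "Score": 0.99 },
    { "Text": "$45,000",             "Type": "QUANTITY",     "Score": 0.98 }
  ]
}
EOF

# Keep only ORGANIZATION entities, e.g. to populate a "vendor" column
# in a finance system without manual re-keying.
orgs=$(jq -r '.Entities[] | select(.Type == "ORGANIZATION") | .Text' entities.json)
echo "$orgs"
```

The same `select` filter works for `DATE` or `QUANTITY`, so each document field can be routed to the right database column.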
▶Console alternative
- Still in the Amazon Comprehend > Real-time analysis section.
- Paste the new financial sentence into the Input text box.
- Click Analyze.
- Click the Entities tab in the results pane to view the color-coded extracted data points.
Step 3: Manufacturing - Inventory Tracking with Computer Vision
Use Case: Automating the tracking of physical goods in a warehouse or supply chain using cameras and Computer Vision to increase productivity.
We will download a sample image and use Amazon Rekognition to detect the objects (labels) within it.
First, download a sample image of a warehouse/logistics environment:
```shell
curl -o box.jpg https://raw.githubusercontent.com/awslabs/amazon-rekognition-developer-guide/master/images/package.jpg
```

Next, pass the image file to Amazon Rekognition to identify its contents:

```shell
aws rekognition detect-labels \
  --image-bytes fileb://box.jpg \
  --max-labels 5 \
  --min-confidence 75
```

> [!TIP]
> The `fileb://` prefix is required in the AWS CLI to tell it to read the file as binary data. We set `--max-labels 5` to limit the output to the top 5 most confident predictions.
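A small post-processing step (again assuming `jq` is installed) can turn the label list into a simple inventory report. The response below is hand-written for illustration; a live `detect-labels` call returns the same shape, plus `Instances` and `Parents` fields:

```shell
# Illustrative detect-labels response (confidence values are made up).
cat <<'EOF' > labels.json
{
  "Labels": [
    { "Name": "Cardboard", "Confidence": 97.54 },
    { "Name": "Package",   "Confidence": 99.12 },
    { "Name": "Box",       "Confidence": 96.80 }
  ]
}
EOF

# Print each detected label with its confidence, highest first - the
# kind of feed a warehouse inventory system would consume per camera frame.
jq -r '.Labels | sort_by(-.Confidence)[] | "\(.Name)\t\(.Confidence)"' labels.json
```

Because the output is probabilistic, a production pipeline would typically keep only labels above a confidence threshold, exactly as `--min-confidence 75` does on the server side.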
▶Console alternative
- Search for and open Amazon Rekognition in the console.
- In the left navigation, under Core computer vision, choose Label detection.
- Under the Analyze image section, click Upload and select the `box.jpg` file you downloaded.
- Review the results panel on the right side to see the detected labels and their confidence scores.
📸 Screenshot: The image with bounding boxes drawn around detected objects (e.g., "Package", "Cardboard", "Box").
Checkpoints
Verify your progress by ensuring you can answer "Yes" to these questions:
- Checkpoint 1: Did the Step 1 command output a JSON block where the `Sentiment` field is `NEGATIVE`?