Curriculum Overview: Factors in Selecting GenAI Models

Identify factors to consider when selecting GenAI models (for example, model types, performance requirements, capabilities, constraints, compliance)

This curriculum overview provides a structured learning path for mastering the selection of Generative AI (GenAI) models. Aligning with the AWS Certified AI Practitioner (AIF-C01) exam objectives, this guide explores the critical dimensions of model selection, including cost, performance, modalities, constraints, and compliance requirements.


Prerequisites

Before diving into the complexities of model selection, learners should have a solid foundation in the following areas:

  • Fundamental AI/ML Concepts: Understanding the difference between AI, Machine Learning, Deep Learning, and Generative AI.
  • Core GenAI Terminology: Familiarity with terms such as tokens, embeddings, context windows, inference, and foundation models (FMs).
  • Basic Cloud Infrastructure: High-level knowledge of how managed APIs (like Amazon Bedrock) differ from self-hosted infrastructure (like Amazon EC2 or SageMaker).
  • Data Types: Ability to distinguish between structured, unstructured, labeled, and unlabeled data.

[!IMPORTANT] If you are unfamiliar with terms like "tokens" or "inference latency," it is highly recommended to review the Fundamentals of Generative AI unit before proceeding with this curriculum.


Module Breakdown

The curriculum is structured into four sequential modules, gradually increasing in complexity from basic capability assessments to strict governance compliance.

| Module | Title | Core Focus | Difficulty |
| --- | --- | --- | --- |
| Module 1 | Capabilities & Modalities | Evaluating what a model can fundamentally do (text, image, audio, multilingual). | ⭐ Beginner |
| Module 2 | Performance & Cost Trade-offs | Balancing accuracy, throughput, latency, and token-based pricing. | ⭐⭐ Intermediate |
| Module 3 | Constraints, Licensing & Governance | Navigating open-source licenses, data residency, and the 5 Scopes of Responsibility. | ⭐⭐⭐ Advanced |
| Module 4 | Model Cards & Practical Selection | Using Model Cards to make informed, objective, and ethical AI choices. | ⭐⭐ Intermediate |

Model Selection Flow

The following flowchart illustrates the high-level decision process covered across the modules:

[Flowchart: high-level model selection decision process]

Learning Objectives per Module

Module 1: Capabilities & Modalities

  • Identify the correct modality (text-to-text, text-to-image, multimodal) required for specific business use cases.
  • Evaluate a model's multilingual capabilities using benchmarks like XTREME or XNLI, considering script handling and cultural nuances.
  • Assess how input/output length constraints (context windows) impact applications like document summarization.
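The context-window check above can be sketched in code. This is a rough heuristic only: the 4-characters-per-token ratio is a common rule of thumb for English text, and the 25% output reservation is an assumed budget, not a standard; exact counts require the model's own tokenizer.

```python
# Rough check of whether a document fits a model's context window.
# Assumes ~4 characters per token (English-text rule of thumb) and
# reserves 25% of the window for the model's generated output.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Approximate token count from character length."""
    return int(len(text) / chars_per_token)

def fits_context(document: str, prompt_overhead: int, context_window: int) -> bool:
    """True if the document plus prompt scaffolding fits the window,
    leaving room for the response."""
    budget = int(context_window * 0.75)  # reserve 25% for output tokens
    return estimate_tokens(document) + prompt_overhead <= budget

# A ~200,000-character policy manual (~50K tokens) overflows an 8K window,
# pushing you toward chunking, RAG, or a long-context model.
needs_long_context = not fits_context("x" * 200_000, prompt_overhead=100,
                                      context_window=8_000)
```

In practice you would swap `estimate_tokens` for the target model's real tokenizer, but the budgeting logic is the same.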

Module 2: Performance & Cost Trade-offs

  • Calculate direct inference costs (price per 1K tokens) versus indirect costs (computational resources, engineering time).
  • Analyze performance metrics such as accuracy, latency (time to first token), throughput, and memory usage.
  • Compare the cost implications of using out-of-the-box managed APIs versus fine-tuning smaller models.
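The direct-cost calculation in the first bullet is simple arithmetic, sketched below. The per-1K-token prices are illustrative placeholders, not published rates for any real model; note that output tokens typically cost several times more than input tokens.

```python
# Sketch of direct inference cost under token-based pricing.
# All prices are assumed placeholders for illustration.

def monthly_inference_cost(requests_per_day: int,
                           avg_input_tokens: int,
                           avg_output_tokens: int,
                           price_in_per_1k: float,
                           price_out_per_1k: float,
                           days: int = 30) -> float:
    """Direct API cost for a month of traffic, billed per 1K tokens."""
    per_request = (avg_input_tokens / 1000) * price_in_per_1k \
                + (avg_output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * days

# e.g. 10,000 requests/day, 1,500 input + 500 output tokens each,
# at $0.003 in / $0.015 out per 1K tokens -> $3,600/month
cost = monthly_inference_cost(10_000, 1_500, 500, 0.003, 0.015)
```

Indirect costs (engineering time, evaluation, monitoring) sit outside this formula, which is exactly why the module treats them separately.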

Module 3: Constraints, Licensing & Governance

  • Interpret AI model licenses (e.g., Llama Community License Agreement) and their restrictions on user scale and derivative works.
  • Differentiate between the levels of control and responsibility (Scope 3: Pretrained APIs, Scope 4: Fine-tuned models, Scope 5: Self-trained models).
  • Apply AWS security disciplines including risk management, data privacy, and governance to the chosen model architecture.

Module 4: Model Cards & Practical Selection

  • Extract critical implementation requirements, ethical considerations, and known biases from standardized Model Cards.
  • Design evaluation prompt sets based on specific subject matter expert (SME) scenarios to test FM response quality.
  • Formulate a final model selection justification based on a holistic review of capabilities, costs, and compliance.
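An SME-driven evaluation set like the one described in the second bullet can be encoded as prompt/expectation pairs. The sketch below uses a crude keyword check as the scorer; in practice that slot would hold human review or an LLM-as-judge. Prompts and keywords are hypothetical examples.

```python
# Hypothetical SME evaluation set: each entry pairs a prompt with
# keywords an acceptable response should contain. Keyword matching is
# a stand-in for human or LLM-based grading.

EVAL_SET = [
    {"prompt": "What are the steps to take leave for the December holiday?",
     "must_mention": ["leave request", "approval"]},
    {"prompt": "Create a detailed job description for a data engineer.",
     "must_mention": ["responsibilities", "qualifications"]},
]

def score_response(response: str, must_mention: list) -> float:
    """Fraction of required keywords present (case-insensitive)."""
    text = response.lower()
    hits = sum(1 for kw in must_mention if kw.lower() in text)
    return hits / len(must_mention)

def evaluate(model_fn, eval_set=EVAL_SET) -> float:
    """Average score across the set; model_fn maps prompt -> response."""
    return sum(score_response(model_fn(t["prompt"]), t["must_mention"])
               for t in eval_set) / len(eval_set)
```

Running `evaluate` against each candidate foundation model gives a comparable, if coarse, quality number to weigh against cost and compliance.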

Success Metrics

How will you know you have mastered this curriculum? You should be able to:

  1. Perform Cost-Benefit Analyses: Successfully calculate and defend whether a highly accurate, expensive Large Language Model (LLM) or a cheaper, faster, fine-tuned Small Language Model (SLM) is appropriate for a given task.
  2. Navigate Compliance: Accurately identify when a managed service (like Amazon Bedrock) fulfills data residency requirements versus when a self-hosted deployment (like SageMaker) is legally mandated.
  3. Read Model Cards: Given a new, unseen Model Card, quickly identify three potential biases or limitations that would preclude its use in a healthcare or financial application.
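The cost-benefit analysis in metric 1 often reduces to a break-even calculation between a pay-per-token managed API and a self-hosted model with a fixed infrastructure cost. The numbers below are assumed placeholders, not quoted prices for any AWS service.

```python
# Illustrative break-even: pay-per-token managed API vs. a self-hosted
# instance with fixed hourly cost. All prices are assumptions.

def managed_cost(monthly_tokens: int, price_per_1k: float) -> float:
    """Variable cost: scales linearly with token volume."""
    return (monthly_tokens / 1000) * price_per_1k

def self_hosted_cost(instance_hourly: float, hours: int = 730) -> float:
    """Fixed cost of keeping one inference instance up all month."""
    return instance_hourly * hours

def cheaper_option(monthly_tokens: int,
                   price_per_1k: float = 0.01,
                   instance_hourly: float = 5.0) -> str:
    api = managed_cost(monthly_tokens, price_per_1k)
    hosted = self_hosted_cost(instance_hourly)
    return "managed API" if api <= hosted else "self-hosted"
```

At low volume (10M tokens/month is $100 via the API versus $3,650 self-hosted under these assumptions) the managed API wins; the answer flips once volume is high enough to amortize the fixed cost.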

Visualizing the Cost-Performance Trade-off

A key success metric is intuitively understanding the relationship between model size, cost, and performance:

[Diagram: performance and cost curves plotted against model size, with the "Sweet Spot" marked]

Notice how performance gains diminish at larger model sizes, while operational costs scale exponentially. Finding the "Sweet Spot" is the core goal of model selection.


Real-World Application

Selecting the right model is rarely a purely technical decision; it is a business decision. Consider the real-world scenario of Building a Corporate HR Chatbot.

Case Study: The HR Chatbot

The Scenario: You are tasked with building a generative AI assistant for the HR department. Subject Matter Experts (SMEs) provide expected user prompts such as:

  • "What are the steps to take leave for the December holiday?"
  • "Create a detailed job description for a data engineer."

The Selection Process:

  1. Capabilities: The chatbot needs to generate text and process large internal policy documents (long input context length). Multilingual support may be required if the company is global.
  2. Performance & Cost: Generating a job description is a generic task that an out-of-the-box API (like Claude on Bedrock) can handle cheaply. However, answering specific leave policy questions requires internal data, meaning you must evaluate the cost of Retrieval-Augmented Generation (RAG) versus Fine-tuning.
  3. Constraints & Compliance: HR data is highly sensitive (PII). Using a public, open-source model without stringent data privacy agreements might violate compliance. You might choose a Scope 3 approach (Managed API via Bedrock) because AWS guarantees your data is not used to train the base model.
  4. Licensing: If you opt to host a model yourself, you must check licenses. If you choose a model like Llama, you must ensure your user base and product scope don't violate the Llama Community License Agreement.
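The four-step selection process above can be sketched as hard-constraint filtering: compliance and licensing knock candidates out first, capabilities next, and only the survivors are compared on cost and performance. Model names and attributes below are entirely hypothetical.

```python
# Hedged sketch of the HR-chatbot shortlist. Candidate models and their
# attributes are invented for illustration.

CANDIDATES = [
    {"name": "model-a", "modalities": {"text"}, "context_window": 200_000,
     "data_residency_ok": True,  "license_ok": True},
    {"name": "model-b", "modalities": {"text", "image"}, "context_window": 8_000,
     "data_residency_ok": True,  "license_ok": True},
    {"name": "model-c", "modalities": {"text"}, "context_window": 128_000,
     "data_residency_ok": False, "license_ok": True},
]

def shortlist(candidates, required_modality="text", min_context=32_000):
    """Hard constraints (compliance, licensing) first, capability next;
    cost/performance trade-offs apply only to the survivors."""
    return [m["name"] for m in candidates
            if m["data_residency_ok"] and m["license_ok"]
            and required_modality in m["modalities"]
            and m["context_window"] >= min_context]
```

Here `model-b` fails the long-context requirement and `model-c` fails data residency, leaving `model-a` as the only candidate to price out.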

[!TIP] Always start with the Model Card! Before running a single test inference or calculating API costs, read the model card. It serves as the nutrition label for the AI, highlighting supported languages, input formats, and ethical limitations that could immediately rule it out for your use case.
