☁️ AWS

AWS Certified AI Practitioner (AIF-C01)

This comprehensive AWS Certified AI Practitioner (AIF-C01) hive provides study notes, a question bank with practice tests, flashcards, and hands-on labs, all supported by a personal AI tutor to help you master the certification.

353
Practice Questions
5
Mock Exams
145
Study Notes
340
Flashcard Decks
2
Source Materials
Start Studying — Free

Study Notes & Guides

145 AI-generated study notes covering the full AWS Certified AI Practitioner (AIF-C01) curriculum.

Curriculum Overview: AI Concepts and Terminology (AIF-C01)

AI Concepts and Terminology

785 words

Hands-On Lab: Exploring AI Concepts with AWS Managed Services

AI Concepts and Terminology

940 words

Hands-On Lab: Exploring Basic AI Concepts and Terminology

AI Concepts and Terminology

962 words

Curriculum Overview: Mastering Amazon Bedrock and Amazon Q

Amazon Bedrock and Amazon Q

820 words

Hands-On Lab: Building and Automating with Amazon Bedrock and Amazon Q

Amazon Bedrock and Amazon Q

1,058 words

Hands-On Lab: Building with Amazon Bedrock and Amazon Q

Amazon Bedrock and Amazon Q

894 words

Hands-On Lab: Getting Started with Amazon Bedrock and Amazon Q Developer

Amazon Bedrock and Amazon Q

875 words

Curriculum Overview: Applying Natural Language Processing Services

Apply Natural Language Processing services

765 words

Curriculum Overview: Apply Natural Language Processing Services

Apply Natural Language Processing services

895 words

AWS Certified AI Practitioner (AIF-C01) Curriculum Overview

AWS Certified AI Practitioner (AIF-C01)

685 words

Curriculum Overview: AWS Infrastructure for Generative AI Applications

AWS infrastructure and technologies for building GenAI applications

780 words

Hands-On Lab: Building GenAI Applications with Amazon Bedrock

AWS infrastructure and technologies for building GenAI applications

912 words

Hands-On Lab: Getting Started with AWS GenAI Infrastructure using Amazon Bedrock

AWS infrastructure and technologies for building GenAI applications

948 words

Curriculum Overview: Comparing AI, ML, Deep Learning, and GenAI

Compare AI, ML, Deep Learning, and GenAI

Describe the similarities and differences between AI, ML, GenAI, and deep learning

966 words

Curriculum Overview: Core Generative AI Concepts (AWS AIF-C01)

Core GenAI Concepts

782 words

Hands-On Lab: Core GenAI Concepts and Inference via Amazon Bedrock

Core GenAI Concepts

1,058 words

Hands-On Lab: Exploring Core GenAI Concepts with Amazon Bedrock

Core GenAI Concepts

1,058 words

Curriculum Overview: Amazon SageMaker's Role in the ML Lifecycle

Define Amazon SageMaker's role

820 words

Curriculum Overview: The Role of Amazon SageMaker in the ML Lifecycle

Define Amazon SageMaker's role

831 words

Curriculum Overview: Fundamentals of AI and Machine Learning Terminology

Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language models [LLMs])

863 words

Curriculum Overview: Foundational Generative AI Concepts

Define foundational GenAI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models [FMs], multimodal models, diffusion models)

796 words

Curriculum Overview: Methods for Fine-Tuning Foundation Models

Define methods for fine-tuning an FM (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training)

863 words

Curriculum Overview: Risks and Limitations of Prompt Engineering

Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking)

813 words

Curriculum Overview: Responsible Practices for AI Model Selection

Define responsible practices to select a model (for example, environmental considerations, sustainability)

923 words

Responsible AI: Practices for Sustainable Model Selection

Define responsible practices to select a model (for example, environmental considerations, sustainability)

923 words

Curriculum Overview: Retrieval-Augmented Generation (RAG) & Business Applications

Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock Knowledge Bases)

940 words

Curriculum Overview: Prompt Engineering Techniques

Define techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates)

870 words

Curriculum Overview: Concepts and Constructs of Prompt Engineering

Define the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space, prompt routing)

786 words

Amazon Bedrock Capabilities: Curriculum Overview

Describe Amazon Bedrock capabilities

863 words

Amazon Bedrock Capabilities & Foundation Models

Describe Amazon Bedrock capabilities

947 words

Secure Data Engineering for AI: Curriculum Overview

Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity)

913 words

Secure Data Engineering for AI: Curriculum Overview

Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity)

923 words

Curriculum Overview: Components of the Machine Learning Pipeline

Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring)

851 words

Curriculum Overview: Cost Tradeoffs of AWS GenAI Services

Describe cost tradeoffs of AWS GenAI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models)

863 words

Curriculum Overview: AI Data Governance Strategies

Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention)

765 words

Curriculum Overview: Data Governance Strategies for AI Systems

Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention)

792 words

Curriculum Overview: Bias, Variance, and Responsible AI

Describe effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting)

895 words

Curriculum Overview: Bias, Variance, and Their Effects in Machine Learning

Describe effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting)

863 words

Curriculum Overview: Fundamental Concepts of MLOps

Describe fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training)

860 words

Curriculum Overview: Data Preparation for Fine-Tuning Foundation Models

Describe how to prepare data to fine-tune an FM (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF])

894 words

Preparing Data for Foundation Model Fine-Tuning: Curriculum Overview

Describe how to prepare data to fine-tune an FM (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF])

815 words

Methods to Use a Model in Production: Curriculum Overview

Describe methods to use a model in production (for example, managed API service, self-hosted API)

767 words

Curriculum Overview: Evaluating ML Models - Technical and Business Metrics

Describe model performance metrics (for example, accuracy, Area Under the Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models

873 words

Curriculum Overview: Human-Centered Design for Explainable AI

Describe principles of human-centered design for explainable AI

863 words

Curriculum Overview: Principles of Human-Centered Design for Explainable AI

Describe principles of human-centered design for explainable AI

863 words

Curriculum Overview: AI Governance Protocols and Strategies

Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements)

811 words

Curriculum Overview: AI Governance Protocols & Security Frameworks

Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements)

820 words

Curriculum Overview: Security and Privacy Considerations for AI Systems

Describe security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit)

878 words

Curriculum Overview: Security and Privacy Considerations for AI Systems

Describe security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit)

861 words

Curriculum Overview: Sources of ML Models and Customization Strategies

Describe sources of ML models (for example, open source pre-trained models, training custom models)

923 words

Showing 50 of 145 study notes. View all →

Sample Practice Questions

Try 5 sample questions from a bank of 353.

Q1. An organization is building a document processing pipeline and needs to distinguish between general information for indexing and sensitive data for privacy compliance. Which statement best explains the functional difference between **Entity Recognition** and **Personally Identifiable Information (PII) Detection** in Amazon Comprehend?

A. Entity Recognition is designed for broad document categorization into labels like PERSON and LOCATION, whereas PII Detection is specialized to identify and mask sensitive items like Social Security Numbers and bank account details.
B. Entity Recognition is the primary tool for data privacy because it automatically redacts all PERSON and LOCATION labels from the text output to ensure compliance.
C. PII Detection is a feature exclusive to Amazon Comprehend Medical, while standard Amazon Comprehend only supports general Entity Recognition and Sentiment Analysis.
D. PII Detection functions similarly to a custom vocabulary in Amazon Transcribe, requiring the user to upload a predefined list of sensitive terms before detection can occur.

Correct: A
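
The functional split in option A can be illustrated with a toy sketch. This is plain Python with made-up regex patterns, not Amazon Comprehend: it only shows why a privacy workflow masks sensitive values while entity labels are left available for indexing.

```python
import re

# Illustrative patterns only — real PII detection in Amazon Comprehend uses
# managed models, not hand-written regexes.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "BANK_ACCOUNT": re.compile(r"\b\d{10,12}\b"),
}

def redact_pii(text: str) -> str:
    """Replace each detected PII span with its type label, as a masking
    workflow would, so the redacted text is safe to index."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Jane Doe, SSN 123-45-6789, lives in Seattle."))
# Jane Doe and Seattle would still surface via entity recognition
# (PERSON, LOCATION) for indexing; only the sensitive value is masked.
```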

Q2. When comparing Retrieval-Augmented Generation (RAG) and Fine-tuning for foundation model (FM) customization, which statement correctly explains a strategic trade-off between the two methods?

A. Fine-tuning is the preferred method for real-time data updates because the model weights can be instantly updated during each inference cycle to reflect new facts.
B. RAG generally offers higher model transparency and lower hallucinations compared to fine-tuning because it provides verifiable source citations for the information it retrieves.
C. RAG has a higher upfront cost than fine-tuning because setting up a vector database requires significantly more GPU compute hours than model training.
D. Fine-tuning typically results in higher inference-time latency than RAG because it requires the model to perform internal lookups of training data for every request.

Correct: B
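
A minimal RAG-style sketch makes the transparency point in option B concrete. This is plain Python with word-overlap scoring over an invented two-document corpus — no vector database or Bedrock call — so the only thing it demonstrates is that a retrieved passage travels with its source id, which is what enables verifiable citations.

```python
# Invented corpus for illustration; a real RAG stack would use embeddings and
# a vector store (for example, Amazon Bedrock Knowledge Bases).
CORPUS = {
    "doc-1": "Amazon Bedrock Knowledge Bases manage retrieval for RAG workflows.",
    "doc-2": "Fine-tuning updates model weights using a curated training dataset.",
}

def retrieve(query: str) -> tuple[str, str]:
    """Return (source_id, passage) with the highest word overlap."""
    q = set(query.lower().split())
    return max(CORPUS.items(),
               key=lambda kv: len(q & set(kv[1].lower().split())))

def build_prompt(query: str) -> str:
    source, passage = retrieve(query)
    # The citation travelling with the context is what gives RAG its
    # transparency edge over weights baked in by fine-tuning.
    return f"Context [{source}]: {passage}\nQuestion: {query}"

print(build_prompt("How does fine-tuning change model weights?"))
```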

Q3. A machine learning engineer has deployed a credit risk assessment model that was tested for demographic fairness using Amazon SageMaker Clarify during the training phase. The model's Demographic Parity Difference (DPD) was within the acceptable threshold of 0.05. To ensure the model remains fair as real-world data evolves, the engineer must implement a solution that continuously monitors the endpoint for bias drift. Which workflow should the engineer apply to achieve this using Amazon SageMaker Model Monitor?

A. Configure a **SageMaker Model Quality Monitor** to compare the predicted labels against ground truth labels stored in S3 to ensure that the precision for protected groups does not drop below the baseline.
B. Enable **Data Capture** on the SageMaker endpoint, establish a baseline by running a **SageMaker Clarify processing job** on the training dataset, and schedule a **Bias Drift Monitoring job** to compare captured inference data against the baseline.
C. Set up a **SageMaker Data Quality Monitor** to identify statistical shifts in the mean and variance of sensitive feature columns and trigger a CloudWatch Alarm if the distribution shifts significantly from the training set.
D. Use **SageMaker Model Cards** to document the model's fairness metrics and set a recurring AWS Lambda function to poll the endpoint's metrics and update the model card every 24 hours.

Correct: B
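
The monitoring workflow in option B needs live AWS resources, but the metric it tracks is easy to sketch: Demographic Parity Difference is the gap in positive-prediction rates between two groups. The labels below are invented; the 0.05 alert boundary is taken from the question.

```python
def positive_rate(predictions: list[int]) -> float:
    return sum(predictions) / len(predictions)

def demographic_parity_difference(group_a: list[int], group_b: list[int]) -> float:
    """Absolute gap in positive-prediction (approval) rates between groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Captured inference labels (1 = credit approved) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]    # 70% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]    # 40% approved

dpd = demographic_parity_difference(group_a, group_b)
print(f"DPD = {dpd:.2f}")               # 0.30
print("bias drift alert:", dpd > 0.05)  # True — outside the 0.05 threshold
```

In the real workflow, a scheduled Clarify bias-drift job computes metrics like this over captured endpoint traffic and compares them against the training-time baseline.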

Q4. Which of the following best describes the core capabilities and purpose of Amazon SageMaker AI within the machine learning (ML) workflow?

A. It is a fully managed service that provides integrated tools for every stage of the machine learning lifecycle, including data preparation, model building, training, and production deployment.
B. It is a set of pre-trained AI APIs designed for developers to add specific tasks like language translation and speech-to-text to applications without managing any ML infrastructure.
C. It is a specialized infrastructure service that focuses exclusively on providing optimized GPU-based compute instances for the model training phase of a project.
D. It is a managed serverless database service designed to store and version the large-scale datasets used in artificial intelligence research.

Correct: A

Q5. Which of the following best explains the fundamental difference in how decisions are reached in traditional rule-based AI systems compared to machine learning models?

A. Traditional AI systems use statistical algorithms to identify patterns in raw data, whereas machine learning models require human experts to manually code every logical decision path.
B. Traditional AI relies on explicit, human-coded logic and IF-THEN rules, while machine learning uses data to train algorithms that identify patterns and develop their own predictive models.
C. Traditional AI is specifically designed to handle unstructured data like images and audio, while machine learning is limited to processing structured numerical data via simple math.
D. Machine learning models are inherently more transparent and explainable than traditional AI because their logic is derived directly from mathematical proofs rather than human intuition.

Correct: B
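
Option B's contrast fits in a few lines of code. The toy below uses an invented loan-score dataset: the rule-based classifier encodes a hand-written IF-THEN boundary, while the "learned" version derives its own threshold from labeled examples.

```python
def rule_based_approve(score: int) -> bool:
    # Explicit, human-coded logic: the decision boundary is written by hand.
    if score >= 700:
        return True
    return False

def learn_threshold(examples: list[tuple[int, bool]]) -> int:
    """Pick the threshold that classifies the most training examples
    correctly — the model finds its own boundary from data."""
    candidates = sorted(score for score, _ in examples)
    def accuracy(t: int) -> int:
        return sum((score >= t) == approved for score, approved in examples)
    return max(candidates, key=accuracy)

# Invented training data: (credit score, was the loan approved?)
training = [(520, False), (580, False), (640, False),
            (660, True), (710, True), (760, True)]

print("learned threshold:", learn_threshold(training))  # 660 — found from data
print(rule_based_approve(720))                          # True — fixed hand-written rule
```

The two functions reach similar decisions here, but only the second would adapt if the training data changed — which is the distinction the question is testing.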

Want more? Clone this hive to access all 353 questions, timed exams, and AI tutoring. Start studying →

Flashcard Collections

340 flashcard decks for spaced-repetition study.

5 cards

AI and Machine Learning Fundamentals

Sample:

**Artificial Intelligence (AI)** vs. **Machine Learning (ML)** vs. **Deep Learning (DL)**

5 cards

AI, ML, Deep Learning, and Generative AI Fundamentals

Sample:

**Artificial Intelligence (AI)**

5 cards

Types of AI Inferencing

Sample:

**Inference**

5 cards

Types of Machine Learning: Supervised, Unsupervised, & Reinforcement

Sample:

**Supervised Learning**

5 cards

Data Types in AI Models

Sample:

**Labeled vs. Unlabeled Data**

5 cards

Applications and Business Value of AI/ML

Sample:

**Automation**

Ready to ace AWS Certified AI Practitioner (AIF-C01)?

Clone this hive to get full access to all 353 practice questions, 5 timed mock exams, study notes, flashcards, and a personal AI tutor — completely free.

Start Studying — Free