AI in Medical Education: Curriculum, Competency, and the Future Physician

A 2025 scoping review of AI in medical education identified 310 publications on the topic, with 52% appearing after the November 2022 release of ChatGPT (Simoni et al., 2025). Medical schools worldwide are scrambling to develop curricula for a technology that did not exist when most faculty completed their training. The evidence base for what to teach, when to teach it, and how to assess competency remains nascent.

Learning Objectives

After reading this chapter, you will be able to:

  • Understand the five-domain competency framework for AI in medical education from the Macy Foundation report
  • Evaluate integration models for AI across undergraduate, graduate, and continuing medical education
  • Recognize the tension between AI-assisted learning and foundational skill development
  • Apply the FACETS framework for assessing AI educational interventions
  • Identify the 23 AI competencies validated by Delphi consensus for physician training
  • Develop faculty development strategies for AI education
  • Compare international approaches to AI medical education (UK, EU, Asia-Pacific)
  • Assess learners on AI-specific competencies using validated frameworks

The Education Imperative:

Medical education must prepare physicians who can effectively use AI tools while maintaining the independent clinical reasoning that AI cannot replace. This requires teaching both technical competency (what AI can and cannot do) and critical appraisal skills (when to trust, question, or override AI recommendations).

Key Frameworks:

Framework | Source | Focus
5-Domain Framework | Macy Foundation 2025 | Foundational concepts, ethical reasoning, human-AI collaboration, workflow integration, practice-based learning
FACETS | BEME Guide 84 (2024) | Assessment taxonomy for AI educational interventions
23 Competencies | Delphi Consensus 2022 | Validated physician AI competencies across knowledge, skills, attitudes

Education Stages:

Stage | Priority | Key Challenge
UME (Medical School) | Foundational AI literacy, critical appraisal | Crowded curriculum, faculty unfamiliarity
GME (Residency) | Specialty-specific tools, de-skilling prevention | Balancing AI efficiency with skill development
CME (Practice) | Practical implementation, regulatory awareness | Time constraints, varying baseline knowledge

Critical Concerns:

  • De-skilling risk: Trainees who learn exclusively with AI assistance may lack independent diagnostic skills when AI fails
  • Faculty gap: Most medical educators have no formal AI training
  • Assessment gap: No validated tools for measuring AI competency in clinical contexts

The Clinical Bottom Line:

  • AI education must preserve independent clinical reasoning, not replace it
  • Staged competency development: foundational skills first, then AI integration
  • Faculty development is prerequisite to curricular change
  • International frameworks provide models, but local adaptation is essential

Introduction

The integration of AI into clinical practice creates an educational imperative: physicians must learn to work effectively with AI tools they did not encounter during training. This challenge spans the educational continuum from undergraduate medical education (UME) through graduate medical education (GME) to continuing medical education (CME) for practicing physicians.

The evidence base for AI medical education remains limited. A comprehensive scoping review identified 310 publications addressing AI in medical education, but methodological quality varied substantially, and nearly half of the included studies preceded the large language model revolution of 2022-2023 (Simoni et al., 2025). Educational frameworks are emerging, but consensus on core competencies, optimal integration timing, and assessment methods remains elusive.

Three fundamental tensions shape AI medical education:

  1. Efficiency vs. skill development: AI tools improve diagnostic efficiency, but training with constant AI assistance may prevent development of independent clinical reasoning

  2. Curricular space: Medical curricula are already overcrowded. Adding AI competencies requires either displacing existing content or integrating AI into existing courses

  3. Faculty readiness: Most medical educators completed training before clinical AI deployment. They cannot teach what they have not learned

This chapter examines evidence-based approaches to AI medical education across the training continuum, drawing on recent consensus frameworks while acknowledging substantial gaps in implementation evidence.


Part 1: The Educational Imperative

Why AI Education Cannot Wait

Physicians entering practice today encounter AI in multiple clinical contexts:

  • Clinical decision support: Sepsis prediction, deterioration alerts, drug interaction warnings
  • Diagnostic imaging: CAD systems in radiology, pathology, ophthalmology
  • Documentation: Ambient AI scribes, auto-generated notes, coding suggestions
  • Information retrieval: LLM-based literature search, clinical question answering

Without formal training, physicians develop ad hoc approaches to AI, ranging from uncritical acceptance to blanket rejection. Neither extreme serves patients. Evidence shows that AI can improve diagnostic accuracy when used appropriately but introduces new error modes when users lack understanding of system limitations (Wong et al., 2021).

The Competency Gap

A 2025 report from the Josiah Macy Jr. Foundation identified five domains essential for AI competency in medical education (Boscardin et al., 2025):

  1. Foundational Concepts: Understanding how AI systems work, including machine learning basics, training data, and performance metrics
  2. Ethical Reasoning: Navigating bias, privacy, transparency, and accountability in AI-assisted care
  3. Human-AI Collaboration: Developing effective teamwork between clinicians and AI systems
  4. Workflow Integration: Implementing AI tools within clinical environments
  5. Practice-Based Learning: Continuously evaluating and improving AI use through outcomes monitoring

Most current medical education addresses none of these domains systematically. Surveys consistently show that medical students and residents feel unprepared for AI-integrated practice, citing lack of formal training as the primary barrier.

The Publication Surge

Interest in AI medical education has grown rapidly. The scoping review by Simoni et al. documented a dramatic publication increase following ChatGPT’s release in November 2022: 52% of all AI medical education publications appeared in the 18 months after LLM availability (Simoni et al., 2025). This surge reflects both genuine educational need and publication opportunism, making critical appraisal of the literature essential.


Part 2: Undergraduate Medical Education (UME)

Current Landscape

Medical schools face mounting pressure to incorporate AI into curricula. The Liaison Committee on Medical Education (LCME) has not yet mandated specific AI competencies, but accreditation standards requiring graduates to “recognize limitations of their knowledge” implicitly extend to AI literacy.

Existing Integration Models:

Model | Description | Advantages | Challenges
Standalone Course | Dedicated AI curriculum (elective or required) | Comprehensive coverage, faculty specialization | Siloed knowledge, curricular space competition
Integrated Threads | AI woven into existing courses (physiology, clinical skills) | Contextual learning, efficient use of time | Requires faculty-wide training, inconsistent depth
Clinical Immersion | AI exposure during clerkships with supervised use | Real-world application, immediate relevance | Dependent on clinical site AI deployment, variable exposure
Simulation-Based | AI scenarios in simulation center | Safe practice environment, standardized exposure | Resource-intensive, may lack authenticity

Integration Barriers

Faculty Preparedness:

The primary barrier to AI education in medical schools is faculty unfamiliarity. A survey of medical school faculty found that fewer than 15% had received any formal AI training, and fewer than 5% felt prepared to teach AI concepts (Simoni et al., 2025).

Curricular Crowding:

Medical school curricula already contain more content than students can master. Adding AI education without displacing other material risks superficial treatment. Integration into existing courses (anatomy faculty discussing AI in imaging, pathology faculty discussing computational pathology) spreads the burden but requires coordinated faculty development.

Assessment Challenges:

No validated assessment tools exist for AI competency in medical students. Existing frameworks assess knowledge (multiple choice questions on AI concepts) but not performance (appropriate use of AI in clinical scenarios). The FACETS framework (see Part 5) provides an assessment taxonomy but requires implementation and validation.

The LLM Challenge in Medical Education

Large language models create new educational challenges and opportunities:

Opportunities:

  • Personalized tutoring and practice questions
  • Clinical reasoning partners for case discussions
  • Literature synthesis for learning
  • Writing assistance for reports and applications

Threats:

  • Academic integrity concerns (LLM-generated assignments)
  • Over-reliance preventing deep learning
  • Hallucinated medical information
  • Reduced development of writing skills

Medical schools are developing LLM policies ranging from prohibition to required use with disclosure. The emerging consensus favors integration with transparency: students may use LLMs as learning tools but must disclose use and verify outputs.


Part 3: Graduate Medical Education (GME)

The ACGME Landscape

The Accreditation Council for Graduate Medical Education (ACGME) has not issued formal AI competency requirements, but existing core competencies implicitly encompass AI:

  • Practice-Based Learning and Improvement: Requires use of technology for practice improvement
  • Systems-Based Practice: Requires understanding of healthcare systems, including technology
  • Medical Knowledge: Requires application of biomedical sciences, including computational tools

Several specialty societies have begun developing AI-specific milestones, with radiology and pathology leading.

Specialty-Specific Integration

AI integration varies dramatically by specialty, reflecting differences in tool maturity and workflow impact:

Specialty | AI Integration Status | Key Educational Priorities
Radiology | Mature (950+ FDA devices) | CAD integration, de-skilling prevention, AI triage
Pathology | Growing (computational pathology) | Digital slide analysis, quantitative assessment
Cardiology | Moderate (ECG AI, echo quantification) | Wearable AI, automated measurements
Emergency Medicine | Emerging (sepsis, deterioration) | Critical appraisal, workflow integration
Primary Care | Growing (documentation, screening) | LLM documentation, risk stratification
Surgery | Nascent (robotics, imaging guidance) | Intraoperative AI, outcome prediction

The De-Skilling Problem

A critical concern in GME is cognitive de-skilling: residents trained with constant AI assistance may lack independent diagnostic skills when AI fails or is unavailable. Evidence from radiology and gastroenterology supports this concern.

Evidence from Radiology:

Studies show that radiologists trained with computer-aided detection (CAD) from early residency demonstrate weaker independent interpretation skills compared to those trained without CAD. When AI is removed, performance drops significantly. This pattern is documented in the radiology chapter’s de-skilling section.

Gastroenterology Evidence:

A study of polyp detection AI found that physicians who used AI assistance became progressively worse at independent polyp detection over time. The AI functioned as a cognitive crutch: clinicians offloaded pattern recognition to the system rather than maintaining their own expertise.

Mitigating De-Skilling in GME

Training programs must balance AI proficiency with independent skill development:

Staged Competency Model:

Training Phase | AI Exposure | Rationale
PGY-1 (Intern) | Minimal AI | Build foundational skills without AI crutch
PGY-2 | Gradual introduction | AI as second-reader after independent assessment
PGY-3+ | Full integration | AI as collaborative tool with maintained override skills

Practical Strategies:

  1. AI-free rotations: Dedicated blocks where trainees interpret without AI, especially for foundational rotations

  2. Interpret-then-compare: Trainees form an independent assessment before viewing AI output, documenting their reasoning (a brief workflow sketch follows this list)

  3. AI failure case conferences: Monthly review of cases where AI failed, building recognition of AI limitations

  4. Competency gating: Assessment of independent skills before AI privileges, similar to procedure credentialing
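
A minimal sketch of how an interpret-then-compare workflow might be enforced in a teaching file or reporting tool is shown below; the InterpretThenCompareCase class, its fields, and the example case are hypothetical, not an existing system's API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class InterpretThenCompareCase:
    """Hypothetical case record that withholds AI output until an independent read is documented."""
    case_id: str
    ai_output: Optional[str] = field(default=None, repr=False)  # hidden until revealed
    trainee_read: Optional[str] = None
    trainee_reasoning: Optional[str] = None
    ai_revealed: bool = False

    def record_independent_read(self, read: str, reasoning: str) -> None:
        # The trainee commits to an impression and reasoning first
        self.trainee_read = read
        self.trainee_reasoning = reasoning

    def reveal_ai(self) -> str:
        # Gate: AI output only becomes visible after the independent read is on record
        if self.trainee_read is None:
            raise RuntimeError("Document an independent assessment before viewing the AI output.")
        self.ai_revealed = True
        return self.ai_output or "no AI output available for this case"

case = InterpretThenCompareCase("CXR-0231", ai_output="CAD: possible right lower lobe nodule")
case.record_independent_read(
    "Right lower lobe airspace opacity, favor early consolidation",
    "Air bronchograms present; no discrete nodular density identified",
)
print(case.reveal_ai())  # comparison and discrepancy review happen only after the committed read
```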

GME Program Director Responsibilities

Program directors face new responsibilities for AI education:

  • Curriculum development: Ensuring AI content addresses specialty-specific needs
  • Faculty development: Preparing attendings to teach AI concepts
  • Assessment: Developing and implementing AI competency evaluation
  • Workflow design: Structuring AI exposure to prevent de-skilling
  • Documentation: Tracking AI-related competency milestones

Part 4: Continuing Medical Education (CME)

The Practicing Physician Challenge

Practicing physicians face unique AI education challenges:

  • Time constraints: Limited availability for additional education
  • Variable baseline: Ranging from no AI experience to daily AI use
  • Immediate application: Need practical skills, not theoretical foundations
  • Regulatory requirements: Evolving malpractice and compliance considerations

CME Framework for AI

Essential CME Topics:

Topic | Content | Urgency
AI Tools in Your Specialty | Overview of FDA-cleared and emerging tools | High
Critical Appraisal | Evaluating AI performance claims | High
Liability and Documentation | Legal requirements for AI use | High
Workflow Integration | Practical implementation strategies | Medium
Patient Communication | Explaining AI to patients | Medium
Emerging Technologies | Future AI capabilities | Low

Delivery Models

Effective CME Formats:

  • Case-based modules: AI decision points embedded in clinical cases
  • Simulation exercises: Hands-on practice with AI tools in safe environment
  • Conference workshops: Specialty-specific AI sessions at annual meetings
  • Online self-paced: Accessible, flexible, but requires self-discipline
  • Institutional training: Mandatory training for new AI tool deployment

The Documentation Imperative

For practicing physicians, AI documentation requirements are immediate and practical:

  1. When to document AI use: Any AI recommendation that influences clinical decision
  2. What to document: AI output, your interpretation, rationale for following or overriding
  3. Override documentation: Particularly important when disagreeing with AI

Example Documentation:

“AI decision support flagged high sepsis risk (92% probability). Clinical assessment: patient afebrile, hemodynamically stable, WBC 11.2, lactate 1.4. AI recommendation not followed; clinical presentation inconsistent with sepsis. Plan: continue monitoring, repeat labs in 6 hours.”

This documentation protects against liability claims in either direction: for following AI that proved wrong, or for overriding AI that proved correct.
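
As one illustration of how that structure could be captured, the sketch below defines a hypothetical AIUseNote record and renders the sepsis example above as a note; the class, field names, and wording are teaching assumptions, not an institutional documentation standard.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AIUseNote:
    """Hypothetical structure for documenting an AI recommendation and the clinician's response."""
    tool: str            # which AI system produced the output
    ai_output: str       # what the AI recommended or flagged
    assessment: str      # the clinician's own interpretation
    action: str          # "followed" or "not followed"
    rationale: str       # why the recommendation was or was not followed
    plan: str            # resulting clinical plan

    def render(self) -> str:
        stamp = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M UTC")
        return (f"[{stamp}] {self.tool} {self.ai_output}. "
                f"Clinical assessment: {self.assessment}. "
                f"AI recommendation {self.action}; {self.rationale}. Plan: {self.plan}.")

note = AIUseNote(
    tool="AI decision support",
    ai_output="flagged high sepsis risk (92% probability)",
    assessment="patient afebrile, hemodynamically stable, WBC 11.2, lactate 1.4",
    action="not followed",
    rationale="clinical presentation inconsistent with sepsis",
    plan="continue monitoring, repeat labs in 6 hours",
)
print(note.render())
```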


Part 5: Competency Assessment

The 23 AI Competencies

A 2022 Delphi consensus study established 23 core competencies for physicians using AI in clinical practice, organized across knowledge, skills, and attitudes (Caliskan et al., 2022):

Knowledge Domain (8 competencies):

Competency | Description
K1 | Understand basic AI/ML concepts (supervised, unsupervised, reinforcement learning)
K2 | Understand data requirements (training, validation, test sets)
K3 | Understand performance metrics (sensitivity, specificity, AUC, calibration)
K4 | Understand limitations (overfitting, distribution shift, adversarial attacks)
K5 | Understand ethical considerations (bias, fairness, transparency)
K6 | Understand regulatory frameworks (FDA, CE marking, liability)
K7 | Understand data privacy requirements (HIPAA, GDPR, consent)
K8 | Understand specialty-specific AI applications
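
To make competency K3 concrete, the short sketch below computes sensitivity, specificity, AUC, and a crude calibration check for a hypothetical classifier on ten invented cases; the labels, risk scores, and 0.5 threshold are illustrative teaching values, not data from any cited study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical labels (1 = disease present) and model risk scores for ten cases
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_score = np.array([0.91, 0.20, 0.40, 0.80, 0.60, 0.10, 0.55, 0.45, 0.05, 0.70])

threshold = 0.5                              # illustrative operating point
y_pred = (y_score >= threshold).astype(int)

tp = int(np.sum((y_pred == 1) & (y_true == 1)))
tn = int(np.sum((y_pred == 0) & (y_true == 0)))
fp = int(np.sum((y_pred == 1) & (y_true == 0)))
fn = int(np.sum((y_pred == 0) & (y_true == 1)))

sensitivity = tp / (tp + fn)                 # proportion of true cases detected
specificity = tn / (tn + fp)                 # proportion of non-cases correctly cleared
auc = roc_auc_score(y_true, y_score)         # threshold-independent discrimination

# Crude calibration check: mean predicted risk vs. observed event rate
print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, AUC {auc:.2f}")
print(f"Mean predicted risk {y_score.mean():.2f} vs. observed event rate {y_true.mean():.2f}")
```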

Skills Domain (9 competencies):

Competency | Description
S1 | Interpret AI outputs in clinical context
S2 | Recognize AI failure modes and limitations
S3 | Integrate AI recommendations with clinical judgment
S4 | Communicate AI role to patients appropriately
S5 | Document AI use in medical records
S6 | Evaluate AI tools for clinical adoption
S7 | Participate in AI governance and oversight
S8 | Monitor AI performance post-deployment
S9 | Override AI appropriately when indicated

Attitudes Domain (6 competencies):

Competency | Description
A1 | Maintain appropriate skepticism toward AI claims
A2 | Embrace continuous learning as AI evolves
A3 | Prioritize patient safety over efficiency gains
A4 | Advocate for equitable AI that reduces disparities
A5 | Support transparent AI development and deployment
A6 | Accept responsibility for AI-assisted decisions

The FACETS Framework

The BEME Guide No. 84 introduced the FACETS framework for evaluating AI educational interventions in healthcare (Gordon et al., 2024):

Dimension | Question | Assessment Focus
Fidelity | Does the intervention match intended design? | Implementation quality
Acceptability | Do learners and faculty accept the intervention? | User experience, satisfaction
Cost | What resources does the intervention require? | Feasibility, scalability
Effectiveness | Does the intervention achieve learning outcomes? | Knowledge, skills, behavior change
Transferability | Does learning transfer to clinical practice? | Real-world application
Sustainability | Can the intervention be maintained long-term? | Institutional capacity

Assessment Methods

Current Assessment Tools:

Method | What It Measures | Limitations
Multiple choice | Knowledge recall | Does not assess application
Case vignettes | Theoretical decision-making | Artificial context
Simulation | Performance in controlled environment | Resource-intensive, may lack authenticity
OSCE stations | Standardized clinical performance | Limited by scenario design
Workplace assessment | Real-world performance | Dependent on clinical AI availability
Portfolio | Reflective practice | Subjective evaluation

Recommended Multi-Modal Assessment:

  1. Knowledge assessment: Written examination on AI concepts (K1-K8)
  2. Simulation assessment: AI-integrated scenarios testing appropriate use (S1-S5)
  3. Workplace observation: Attending evaluation of AI integration (S1-S9)
  4. Reflective portfolio: Documentation of AI learning and challenges (A1-A6)

Gaps in Assessment

No validated, widely adopted assessment tool exists for AI competency in clinical practice. The Delphi competencies provide a framework, but operationalization requires:

  • Standardized case libraries with AI decision points
  • Rubrics for evaluating appropriate AI use
  • Benchmarks for competency levels by training stage
  • Methods for assessing attitude and professional identity formation

Part 6: Faculty Development

The Faculty Gap

Medical educators cannot teach what they do not know. Surveys consistently show:

  • Fewer than 15% of medical faculty have received formal AI training
  • Fewer than 5% feel prepared to teach AI concepts
  • Most faculty learned about AI through media coverage, not formal education

This gap threatens curricular implementation: even well-designed AI curricula fail if faculty cannot deliver them effectively.

Faculty Development Framework

Tier 1: AI Literacy (All Faculty)

Essential for all clinical educators:

  • Basic AI concepts (what is machine learning, how do clinical decision support systems work)
  • Limitations and failure modes
  • Ethical and legal considerations
  • How to discuss AI with learners

Tier 2: AI Integration (Course Directors, Clerkship Directors)

For faculty designing and implementing curricula:

  • Curricular design for AI content
  • Assessment of AI competencies
  • Integration with existing courses
  • Simulation and case development

Tier 3: AI Expertise (Specialty Champions)

For faculty leading institutional AI education:

  • Deep technical knowledge
  • Research in AI medical education
  • Institutional governance participation
  • External collaboration and advocacy

Delivery Models for Faculty Development

Format | Advantages | Best For
Grand rounds | Reaches many faculty, low time commitment | Tier 1 awareness
Workshops | Hands-on practice, discussion | Tier 1-2 skill building
Online modules | Flexible, asynchronous | Tier 1 foundational knowledge
Fellowships | Deep expertise development | Tier 3 champions
Learning communities | Peer support, iterative improvement | All tiers, ongoing development
Industry partnerships | Access to tools and expertise | Tier 2-3 practical skills

Institutional Strategies

Create incentives for AI education:

  • Protected time for AI curriculum development
  • Promotion credit for AI teaching innovation
  • Funding for AI education research
  • Recognition through teaching awards

Leverage external resources:

  • Professional society educational materials
  • Vendor training on specific tools
  • Collaboration with computer science and engineering faculty
  • External courses and certificates

Part 7: International Perspectives

United Kingdom

The UK has taken a coordinated approach to AI medical education through Health Education England (HEE, now part of NHS England) and the NHS Topol Review.

Topol Review (2019, updated 2022):

  • Called for AI to be core to health professional education
  • Recommended competency frameworks across professions
  • Proposed “digital ready” workforce by 2040

Current Implementation:

  • AI modules in undergraduate medical curricula
  • Postgraduate AI training requirements emerging
  • NHS AI Lab providing educational resources
  • Faculty development through HEE programs

European Union

The EU AI Act (2024) creates a regulatory context that shapes educational requirements.

Educational Implications:

  • High-risk classification for medical AI requires trained users
  • Transparency requirements necessitate clinician understanding
  • Documentation requirements demand specific competencies
  • CE marking process increasingly references user training

Country Variations:

Country | Status | Notable Features
Germany | Emerging | University-led initiatives, industry partnerships
France | Emerging | National strategy includes education component
Netherlands | Advanced | Early adoption, research leadership
Nordic countries | Advanced | Strong digital health infrastructure

Asia-Pacific

Singapore:

  • AI Singapore initiative includes healthcare education
  • Duke-NUS integrating AI across curriculum
  • National AI strategy includes workforce development

South Korea:

  • Rapidly expanding AI medical education
  • Strong technology infrastructure enabling integration
  • Government-funded educational initiatives

Japan:

  • Aging population driving AI adoption
  • Medical society-led educational initiatives
  • Challenges with faculty preparedness similar to U.S.

Australia/New Zealand:

  • Royal Australian and New Zealand College of Radiologists (RANZCR) leading in radiology AI education
  • Multi-society AI statements (see radiology chapter)
  • Rural health focus for AI deployment

Low- and Middle-Income Countries (LMICs)

AI medical education in LMICs faces distinct challenges:

  • Infrastructure limitations: Inconsistent technology access
  • Faculty scarcity: Fewer trained educators, competing priorities
  • Relevance questions: AI developed in high-income countries may not transfer
  • Opportunity: Leapfrogging with appropriate technology

Promising approaches:

  • Mobile-first educational platforms
  • AI tools designed for LMIC contexts
  • South-South collaboration and knowledge sharing
  • Integration with existing telemedicine initiatives

Clinical Scenarios

Case: Third-year medical student on internal medicine clerkship uses ChatGPT to generate differential diagnoses for every patient. When asked to explain pathophysiology during rounds, the student struggles to articulate reasoning, relying on memorized outputs rather than understanding.

Attending observation: The student provides comprehensive differentials but cannot engage in Socratic discussion about mechanism, cannot prioritize diagnoses based on clinical likelihood, and becomes uncertain when asked follow-up questions that require synthesis.

Discussion:

What is happening?

The student is using the LLM as a cognitive crutch rather than a learning tool. Instead of developing differential diagnosis skills through deliberate practice (generating differentials, receiving feedback, refining approach), the student outsources the cognitive work to AI.

Why is this problematic?

  1. Skill development: Clinical reasoning requires repeated practice. Outsourcing prevents the pattern recognition and knowledge organization that define clinical expertise.

  2. Verification inability: Without understanding pathophysiology, the student cannot verify LLM outputs for accuracy or appropriateness.

  3. Transfer failure: When the LLM is unavailable (oral exams, at the bedside without a device), the student lacks functional competency.

  4. Professional identity: Medicine requires independent judgment. Dependence on AI early in training may prevent development of professional autonomy.

Educational intervention:

  1. Explicit expectations: Clarify that LLMs may be used for learning (studying concepts) but not to complete the clinical reasoning tasks whose skills students are expected to develop themselves

  2. Process transparency: Require students to show their reasoning process, not just conclusions

  3. LLM-free assessments: Evaluate clinical reasoning without AI access to establish baseline competency

  4. Constructive use: Guide appropriate LLM use (explaining concepts, practice questions) vs. inappropriate use (generating clinical assessments)

  5. Metacognitive discussion: Help student recognize the difference between having an answer (LLM output) and understanding the answer (clinical reasoning)

Case: Fourth-year radiology resident has read chest X-rays with CAD assistance throughout training. Program director notes that during independent reading sessions (CAD disabled for competency assessment), the resident misses significantly more findings than peers who trained with CAD-free rotations in PGY-2.

Performance data:

Condition | Resident (CAD-trained) | Peers (Mixed training)
Independent sensitivity | 72% | 89%
CAD-assisted sensitivity | 94% | 95%
False positive rate (independent) | 18% | 8%
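
For readers less familiar with how such figures are derived, the arithmetic below reconstructs the resident's numbers from a hypothetical 200-case assessment set (100 cases with findings, 100 without); the counts are invented to match the percentages shown, not the actual assessment data.

```python
# Hypothetical competency-assessment counts consistent with the table above
cases_with_findings = 100       # test cases containing a true finding
cases_without_findings = 100    # normal test cases

detected_without_cad = 72       # findings identified reading independently
detected_with_cad = 94          # findings identified with CAD visible
false_alarms_without_cad = 18   # normal cases called abnormal independently

independent_sensitivity = detected_without_cad / cases_with_findings                  # 0.72
cad_assisted_sensitivity = detected_with_cad / cases_with_findings                    # 0.94
independent_false_positive_rate = false_alarms_without_cad / cases_without_findings   # 0.18

# The gap between assisted and independent sensitivity is the de-skilling signal
print(f"Independent sensitivity: {independent_sensitivity:.0%}")
print(f"CAD-assisted sensitivity: {cad_assisted_sensitivity:.0%}")
print(f"Sensitivity gap attributable to CAD reliance: {cad_assisted_sensitivity - independent_sensitivity:.0%}")
print(f"Independent false positive rate: {independent_false_positive_rate:.0%}")
```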

Discussion:

What happened?

The resident developed automation complacency: passive monitoring of AI outputs rather than active diagnostic reasoning. When AI is removed, the resident lacks the independent pattern recognition that should have developed during training.

Root cause analysis:

  1. No foundational period: Resident used CAD from PGY-1, never developed independent reading skills

  2. Workflow design: CAD output visible concurrently with the images, promoting anchoring before independent assessment

  3. Assessment gap: No formal testing of independent skills until PGY-4

  4. Lack of AI-failure education: Resident never systematically reviewed cases where CAD failed

Remediation approach:

  1. CAD-free remediation rotation: 4-week intensive independent reading with expert feedback

  2. Interpret-then-compare protocol: Form impression before viewing CAD, document reasoning

  3. CAD failure case review: Systematic exposure to AI failure modes

  4. Competency gating: Demonstrate independent proficiency before resuming CAD use

  5. Ongoing assessment: Quarterly CAD-free competency checks

Program-level changes:

  • PGY-1-2: Minimal CAD exposure, focus on foundational skills
  • PGY-3+: CAD as second-reader with documented independent assessments
  • All years: AI failure case conferences monthly

Case: You are founding curriculum dean for a new medical school opening in 2027. The school leadership has committed to “AI-integrated education from day one.” You must design the AI curriculum component.

Constraints:

  • 4-year MD program, LCME accreditation required
  • Faculty recruited from traditional medical schools (limited AI expertise)
  • Budget for technology but not unlimited
  • Regional clinical affiliates have varying AI deployment

Curriculum Design Framework:

Phase 1: Pre-Clinical Years (Years 1-2)

Year 1 (Foundations):

  • Module 1: Introduction to AI in Healthcare (8 hours)
    • What is AI/ML, basic concepts
    • Current applications overview
    • Why physicians need AI literacy
  • Integrated content:
    • Biostatistics: AI performance metrics (sensitivity, specificity, AUC)
    • Ethics: Algorithmic bias, consent, accountability
    • Anatomy: AI in imaging (brief exposure)
  • Assessment: Knowledge-based examination

Year 2 (Applications):

  • Module 2: Clinical AI Tools (12 hours)
    • Clinical decision support systems
    • Imaging AI (radiology, pathology)
    • Documentation AI
    • Critical appraisal of AI studies
  • Integrated content:
    • Pharmacology: Drug interaction AI
    • Pathophysiology: Risk prediction models
    • Epidemiology: AI in public health
  • Assessment: Case-based evaluation with AI decision points

Phase 2: Clinical Years (Years 3-4)

Year 3 (Clerkships):

  • AI exposure during rotations: Supervised use of clinical AI

  • Clerkship-specific objectives:

    • Medicine: Sepsis prediction, deterioration algorithms
    • Surgery: Risk stratification, imaging AI
    • Pediatrics: Growth prediction, screening tools
    • Ob/Gyn: Fetal monitoring AI, documentation
  • Longitudinal thread: Monthly AI case discussions across clerkships

  • Assessment: Workplace-based assessment of AI integration

Year 4 (Acting Internship/Electives):

  • Capstone project: AI evaluation or implementation project

  • Elective: Advanced AI in [specialty] (specialty-specific)

  • Preparation for residency: Documentation practices, liability awareness

  • Assessment: Portfolio demonstrating AI competency

Faculty Development Plan:

  • Year -1 (before opening): Intensive faculty AI training
  • Ongoing: Monthly faculty development sessions
  • Champions: Recruit 2-3 faculty with AI expertise for leadership

Technology Requirements:

  • Simulation center with AI-integrated scenarios
  • Access to clinical AI tools for demonstration
  • LLM policy and approved tools for educational use

Success Metrics:

  • Student competency on validated AI assessments
  • Faculty confidence in teaching AI
  • Graduate preparedness surveys
  • LCME accreditation without AI-related concerns

The Path Forward

AI medical education remains in its early stages. Key priorities for the field:

Research Needs:

  • Validated assessment tools for AI competency
  • Longitudinal studies of AI-trained physicians’ performance
  • Comparative effectiveness of educational models
  • De-skilling prevention strategies

Implementation Priorities:

  • Faculty development at scale
  • Integration with accreditation standards
  • Specialty-specific competency frameworks
  • CME for practicing physicians

Policy Advocacy:

  • LCME guidance on AI competencies
  • ACGME milestones for AI in specialty training
  • CME requirements for AI tool adoption
  • Funding for AI medical education research

Key Takeaways for Medical Educators

  1. AI education must preserve independent clinical reasoning, not replace it with AI dependence

  2. Staged competency development: Build foundational skills before integrating AI tools

  3. Faculty development is prerequisite: Invest in training the trainers before curricular change

  4. De-skilling is real: Design training to prevent automation complacency

  5. Assessment remains underdeveloped: Multi-modal approaches needed, validated tools lacking

  6. International frameworks provide models: Adapt to local context while learning from global experience

  7. The curriculum is already crowded: Integration into existing courses more feasible than standalone additions

  8. LLMs change everything: Policies for appropriate educational use are essential

  9. CME cannot wait: Practicing physicians need training now, not when curricula are perfected

  10. Equity matters: AI education should address disparities, not entrench them


Additional Resources

Key Publications:

Professional Society Resources:

  • AMA Digital Health Initiative: AI curriculum resources
  • AAMC Core Entrustable Professional Activities (EPAs): AI integration guidance
  • Specialty societies: See individual specialty chapters for society-specific resources

Cross-references: