30  Healthcare Policy and AI Governance

Learning Objectives

Effective AI governance requires updated policies, regulations, and institutional frameworks. This chapter examines the policy landscape and governance models. You will learn to:

  • Understand the evolving FDA regulatory framework for AI/ML-based medical devices
  • Evaluate international regulatory approaches (EU, WHO, other regions)
  • Recognize reimbursement challenges and evolving payment models for AI
  • Assess institutional governance frameworks for safe AI deployment
  • Navigate liability, accountability, and legal frameworks for medical AI
  • Understand policy recommendations from professional societies and expert groups
  • Advocate for evidence-based, patient-centered AI regulation

Essential for healthcare administrators, policymakers, and physician leaders.

The Regulatory Challenge:

Traditional medical device regulation assumes static products—approved once, unchanged thereafter. AI systems learn, adapt, and evolve continuously. Regulators worldwide are adapting frameworks to address this reality while maintaining safety standards.

Key Regulatory Developments:

  • FDA (U.S.): Evolving from device-by-device review to “predetermined change control plans” allowing continuous updates. 500+ AI/ML devices cleared/approved, but adaptive learning still limited.
  • EU: Medical Device Regulation (MDR) and AI Act create comprehensive requirements for AI safety, transparency, and accountability.
  • WHO: Developing global guidance on AI ethics, governance, and evaluation to harmonize international approaches.

Major Policy Issues:

  1. Regulation: Balancing innovation vs. safety; addressing continuously learning algorithms
  2. Reimbursement: Payers slow to cover AI; unclear value propositions for many tools
  3. Liability: Who is responsible when AI errs—developer, hospital, physician?
  4. Data governance: Privacy, security, ownership in AI training and deployment
  5. Equity: Ensuring AI benefits all populations, not just data-rich groups

The Path Forward: Effective AI governance requires collaboration among regulators, clinicians, patients, developers, and payers—prioritizing patient safety, evidence-based validation, and equitable access.

30.1 Introduction

Medicine operates within complex regulatory and policy frameworks—FDA device approvals, CMS reimbursement decisions, state medical board oversight, institutional protocols. These structures emerged to protect patients from unsafe drugs, devices, and practices. They assume products are static: a drug approved in 2020 is the same drug in 2025.

AI challenges this assumption. Machine learning systems evolve—retrained on new data, updated with new algorithms, drifting in performance over time. How should regulators approve systems that change continuously? How should payers value AI when clinical benefit is unclear? Who is liable when AI errs—the developers who built it, the hospitals that deployed it, or the physicians who followed its recommendations?

This chapter examines the policy and regulatory landscape for medical AI—current frameworks, ongoing debates, and future directions. It’s intended for physicians who will shape AI governance through advocacy, institutional leadership, and clinical practice.


30.2 FDA Regulation of AI/ML-Based Medical Devices

The FDA regulates medical devices, including software. AI/ML-based medical devices (algorithms, apps, clinical decision support) fall under FDA jurisdiction if they diagnose, treat, prevent, or mitigate disease.

30.2.1 Traditional FDA Device Regulation

Three regulatory pathways (pre-AI):

  1. 510(k) clearance: Device is “substantially equivalent” to predicate device already on market. Fastest, least burdensome pathway. Most medical devices cleared via 510(k).

  2. Premarket approval (PMA): Rigorous review requiring clinical trials demonstrating safety and effectiveness. Reserved for high-risk devices (implantable defibrillators, heart valves).

  3. De novo classification: New device type with no predicate. Establishes new regulatory pathway for similar future devices.

For traditional devices: Approved once, locked down—no changes without new FDA submission.

For AI/ML devices: This model breaks down. AI systems are updated frequently (weekly, monthly) to improve performance, address drift, and expand capabilities. Re-submitting to the FDA for every update is impractical.

30.2.2 FDA’s Action Plan for AI/ML-Based Medical Devices

Recognizing the problem, the FDA published its “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning-Based Software as a Medical Device” (2019, updated 2021).

Key concepts:

  1. Software as a Medical Device (SaMD): Software intended for medical purposes, functioning independently of hardware. AI algorithms fit this definition.

  2. Predetermined Change Control Plan (PCCP): The manufacturer specifies the types of changes anticipated (e.g., retraining on new data, expanding to new populations, improving the algorithm) and how each change will be validated. FDA reviews and approves the plan—then allows the specified changes without new submissions. (A structural sketch of such a plan follows this list.)

  3. Real-world performance monitoring: Manufacturers must monitor AI performance post-deployment, report adverse events, and demonstrate continued safety/effectiveness.

  4. Good Machine Learning Practice (GMLP): FDA, Health Canada, UK MHRA collaboratively developed 10 guiding principles for high-quality ML development and deployment.
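
To make the PCCP concept concrete, here is a minimal, hypothetical sketch of a change control plan as a data structure: the manufacturer enumerates the modification types it anticipates, each paired with a validation protocol and acceptance criteria. Every name, change type, and threshold below is invented for illustration; the FDA prescribes no such schema.

```python
from dataclasses import dataclass, field

@dataclass
class AnticipatedChange:
    """One modification type pre-specified in the PCCP (illustrative fields)."""
    description: str          # e.g., "retrain on new data from existing sites"
    validation_protocol: str  # how the change is verified before release
    acceptance_criteria: dict # performance gates the updated model must pass

@dataclass
class ChangeControlPlan:
    device_name: str
    anticipated_changes: list[AnticipatedChange] = field(default_factory=list)

    def is_covered(self, proposed: str) -> bool:
        """Changes outside the pre-approved plan trigger a new FDA submission."""
        return any(proposed == c.description for c in self.anticipated_changes)

# Illustrative plan for a hypothetical device; thresholds are invented.
pccp = ChangeControlPlan(
    device_name="ExampleCAD-1",
    anticipated_changes=[
        AnticipatedChange(
            description="retrain on new data from existing sites",
            validation_protocol="locked multi-site held-out test set",
            acceptance_criteria={"min_sensitivity": 0.92, "min_specificity": 0.88},
        ),
    ],
)
print(pccp.is_covered("retrain on new data from existing sites"))  # True
print(pccp.is_covered("expand to pediatric population"))           # False -> new submission
```

The design point is the `is_covered` check: anything outside the pre-approved envelope falls back to a conventional regulatory submission.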

30.2.3 Current State of FDA AI Regulation

As of 2024:

  • 500+ AI/ML-enabled medical devices cleared or approved
  • Most are “locked” algorithms (don’t learn or adapt after deployment)
  • Few devices with FDA-approved PCCPs allowing continuous learning
  • Regulatory pathways for adaptive AI still evolving

Examples of FDA-cleared AI:

  • Radiology CAD systems (intracranial hemorrhage, pulmonary embolism, lung nodules)
  • Diabetic retinopathy screening (IDx-DR—first autonomous AI diagnostic cleared by FDA)
  • ECG analysis (AFib detection, QT interval measurement)
  • Sepsis prediction algorithms (limited evidence, some controversial)

Challenges:

  • Validation standards unclear: What evidence is required for AI approval? Are retrospective studies sufficient, or are prospective trials needed?
  • Performance monitoring post-deployment: How to detect AI drift, degradation, and harm? (A minimal monitoring sketch follows this list.)
  • Generalizability: AI approved based on one population may fail in others—how to ensure safety?
  • Transparency vs. trade secrets: FDA requires transparency; manufacturers claim proprietary algorithms
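
One piece of the monitoring challenge can be prototyped directly. The sketch below is illustrative only: it assumes access to model scores with eventually adjudicated ground-truth labels, tracks a rolling AUC for a deployed binary classifier, and flags degradation below a pre-specified floor. The window size and floor are arbitrary example values, not regulatory ones.

```python
from collections import deque
from sklearn.metrics import roc_auc_score

class DriftMonitor:
    """Rolling-window performance monitor for a deployed binary classifier."""

    def __init__(self, window: int = 500, auc_floor: float = 0.80):
        # Window size and alert floor are illustrative choices.
        self.scores = deque(maxlen=window)
        self.labels = deque(maxlen=window)
        self.auc_floor = auc_floor

    def record(self, score: float, label: int) -> None:
        """Log one prediction once its ground-truth label is adjudicated."""
        self.scores.append(score)
        self.labels.append(label)

    def check(self) -> tuple[float, bool]:
        """Return (rolling AUC, alert flag); AUC is undefined with one class."""
        if len(set(self.labels)) < 2:
            return float("nan"), False
        auc = roc_auc_score(list(self.labels), list(self.scores))
        return auc, auc < self.auc_floor

# Illustrative use: review periodically as labels accrue.
monitor = DriftMonitor()
monitor.record(0.91, 1)
monitor.record(0.12, 0)
rolling_auc, degraded = monitor.check()
```

In practice, label delay, case-mix shift, and subgroup-specific degradation all complicate the picture; the sketch shows only the core bookkeeping.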


30.3 International Regulatory Approaches

30.3.1 European Union: Medical Device Regulation and AI Act

EU Medical Device Regulation (MDR) and In Vitro Diagnostic Regulation (IVDR): Replaced earlier directives (2017, full enforcement 2021). Stricter requirements for safety, clinical evidence, post-market surveillance.

AI-specific considerations:

  • Software classified by intended use and risk level (Class I lowest risk → Class III highest)
  • High-risk AI (e.g., autonomous diagnostic systems) requires rigorous clinical evaluation, CE marking
  • Manufacturers must demonstrate safety, performance, and clinical benefit

EU AI Act (2024): First comprehensive AI regulation globally. Categorizes AI by risk:

  • Unacceptable risk: Banned (e.g., social scoring, subliminal manipulation)
  • High risk: Includes healthcare AI—strict requirements (transparency, human oversight, robustness, accuracy)
  • Limited risk: Transparency requirements (users informed when interacting with AI)
  • Minimal risk: No specific obligations

Impact on medical AI:

  • High compliance burden for developers
  • Emphasis on transparency, explainability, accountability
  • Post-market monitoring and reporting requirements
  • Patients have rights to understand and contest AI decisions

30.3.2 WHO: Global Guidance on AI in Health

WHO approach: Convene global stakeholders, develop ethical and regulatory guidance, support national capacity-building.

Key documents:

  • “Ethics and Governance of Artificial Intelligence for Health” (2021): 6 principles (protect autonomy, promote well-being, ensure transparency, foster responsibility, ensure equity, promote sustainable AI)
  • “Guidance on AI in Health” (ongoing): Practical frameworks for evaluation, regulation, implementation

Goal: Harmonize international approaches, prevent “regulatory arbitrage” (seeking least-restrictive jurisdictions).

30.3.3 Other Regions

Canada (Health Canada, Health Products and Food Branch):

  • Collaborating with the FDA and UK MHRA on Good Machine Learning Practice
  • Developing adaptive licensing frameworks for continuously learning AI

Australia (Therapeutic Goods Administration, TGA):

  • Software as a Medical Device framework similar to the FDA’s
  • Risk-based classification

Asia (Japan, South Korea, Singapore, China):

  • Varied approaches, generally following risk-based frameworks
  • China aggressively regulating AI (national standards, data localization requirements)


30.4 Reimbursement and Payment Models for AI

Regulatory approval is necessary but insufficient for AI adoption. Reimbursement drives clinical deployment—if payers don’t cover AI, providers won’t use it.

30.4.1 Current Reimbursement Landscape

Challenge: Most AI lacks dedicated reimbursement codes, and payers are reluctant to cover tools without clear evidence of clinical benefit and cost-effectiveness.

Payer perspectives:

  • Medicare/Medicaid (U.S.): Slow to establish new codes. Requires evidence of “reasonable and necessary” for coverage.
  • Private payers: Vary widely. Some cover AI as part of existing services (e.g., radiology reads), others deny as “experimental.”
  • International payers: Varies by country—single-payer systems may negotiate national pricing, others leave to regional/local decisions.

30.4.2 Examples of Reimbursement

1. IDx-DR (Diabetic Retinopathy Screening):

  • FDA-cleared: 2018 (first autonomous AI diagnostic)
  • CPT code established: 2020 (92229—imaging with AI interpretation without physician review)
  • Medicare coverage: Yes, though uptake varies
  • Significance: Rare example of AI with dedicated reimbursement

2. Radiology AI:

  • Most systems: No separate payment—incorporated into radiologist professional fee
  • Hospital perspective: Investment in AI to improve efficiency, reduce costs (faster reads, fewer misses)—not direct reimbursement
  • Outcome: Slow adoption where ROI unclear

3. Clinical Decision Support:

  • Most tools: No reimbursement—cost absorbed by hospitals/providers
  • Rationale: CDSS considered “support,” not separately billable service

30.4.3 Emerging Payment Models

1. Value-Based Care Alignment:

  • AI that reduces hospitalizations, improves outcomes, and lowers costs aligns with value-based contracts
  • Payers and providers share savings → incentivizes AI adoption (a worked example follows this list)

2. Bundled Payments:

  • AI costs included in bundled payment for episode of care (e.g., stroke treatment, diabetic care)
  • Incentivizes providers to use cost-effective AI

3. Pay-for-Performance:

  • Link AI reimbursement to demonstrated outcomes (e.g., reduced missed cancers, faster stroke treatment)

4. Technology Assessment and Comparative Effectiveness Research:

  • Organizations like ICER (Institute for Clinical and Economic Review) evaluate AI cost-effectiveness
  • Findings influence payer coverage decisions
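
As a worked illustration of the shared-savings logic in model 1 above (all dollar figures and the 50/50 split are invented for the example):

```python
def shared_savings(baseline_cost: float, actual_cost: float,
                   provider_share: float = 0.5) -> float:
    """Provider payout under a simple one-sided shared-savings contract.

    All parameters are hypothetical; real contracts add minimum savings
    rates, quality gates, and sometimes downside risk.
    """
    savings = max(baseline_cost - actual_cost, 0.0)
    return savings * provider_share

# Hypothetical: an AI-assisted pathway lowers episode cost from $12,000 to $10,500.
print(shared_savings(12_000, 10_500))  # 750.0 -> $750 each to provider and payer
```

The incentive structure is the point: AI that genuinely lowers episode cost generates a payout even without a dedicated billing code.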


30.5 Institutional Governance: Hospital and Health System Policies

Beyond FDA and payers, individual institutions must govern AI deployment—deciding what to purchase, how to integrate, and how to monitor.

30.5.1 Components of Institutional AI Governance

1. AI Oversight Committee:

  • Multidisciplinary: clinicians, IT, legal, ethics, administration, patients
  • Reviews proposed AI tools before procurement
  • Monitors deployed AI for safety, effectiveness, equity

2. Clinical Validation Requirements:

  • Independent validation on the institution’s patient population before deployment
  • Prospective monitoring post-deployment (performance metrics, error rates, disparate impact)

3. Transparency and Explainability Standards:

  • Clinicians understand AI logic, limitations, failure modes
  • Patients informed when AI is used in care
  • Documentation of AI recommendations in the medical record

4. Liability and Accountability Frameworks:

  • Clear policies on physician responsibility (must AI recommendations be followed? overridden? documented?)
  • Incident reporting for AI errors or near-misses
  • Liability insurance coverage for AI-related claims

5. Equity and Bias Monitoring:

  • Assess AI performance across demographic subgroups (race, ethnicity, sex, age, insurance status); a sketch of such an audit follows this list
  • Mitigate identified biases (algorithm adjustment, human review for high-risk groups, discontinuation if bias is unresolvable)

6. Vendor Contracts and Data Governance:

  • Specify data ownership and usage rights (can the vendor use institutional data for further AI development?)
  • Ensure compliance with privacy regulations (HIPAA, GDPR)
  • Require vendor transparency on training data, validation evidence, update frequency
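
The subgroup monitoring in component 5 is straightforward to prototype. The sketch below assumes a pandas DataFrame with hypothetical label, prediction, and demographic columns, and computes sensitivity per subgroup alongside its gap from the overall value. A real audit would add further metrics (specificity, PPV, calibration) and confidence intervals for small subgroups.

```python
import pandas as pd

def _sensitivity(y_true: pd.Series, y_pred: pd.Series) -> float:
    """Fraction of true positives detected; NaN if the group has no positives."""
    pos = y_true == 1
    return float((pos & (y_pred == 1)).sum() / pos.sum()) if pos.any() else float("nan")

def subgroup_audit(df: pd.DataFrame, group_col: str,
                   y_true: str = "label", y_pred: str = "prediction") -> pd.DataFrame:
    """Sensitivity by demographic subgroup, with each group's gap vs. overall."""
    overall = _sensitivity(df[y_true], df[y_pred])
    rows = []
    for group, sub in df.groupby(group_col):
        sens = _sensitivity(sub[y_true], sub[y_pred])
        rows.append({"group": group, "n": len(sub),
                     "sensitivity": sens, "gap_vs_overall": sens - overall})
    return pd.DataFrame(rows)

# Illustrative call on hypothetical audit data:
# report = subgroup_audit(audit_df, group_col="race_ethnicity")
```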

30.5.2 Challenges for Institutional Governance

  • Resource constraints: Small hospitals lack dedicated AI governance infrastructure
  • Technical complexity: Non-experts struggle to evaluate AI technical specifications
  • Vendor opacity: Proprietary algorithms, limited transparency
  • Pace of change: Governance structures struggle to keep up with rapid AI evolution

30.6 Policy Recommendations from Expert Bodies

Professional societies, government agencies, and expert panels have issued policy recommendations for AI governance.

30.6.1 AMA Principles for AI Development and Use

American Medical Association (2019):

  1. Augment, not replace: AI should support physicians, not substitute for clinical judgment
  2. Validation and transparency: Rigorous testing, clear documentation of limitations
  3. Equity: Ensure AI benefits diverse populations, mitigate bias
  4. Data privacy: Protect patient data, obtain informed consent
  5. Accountability: Physicians remain responsible for patient care
  6. Continuous monitoring: Detect and address performance degradation

30.6.2 NASEM Report on AI in Health Care

National Academies of Sciences, Engineering, and Medicine (2024): Comprehensive report on AI governance.

Key recommendations:

  • Establish FDA pathway for continuously learning AI with robust post-market surveillance
  • Create national registry of deployed AI tools with performance data
  • Fund research on AI effectiveness, bias, and safety in real-world settings
  • Develop reimbursement models incentivizing high-value AI
  • Support workforce training on AI literacy for healthcare professionals

30.6.3 WHO Ethics and Governance Framework

WHO (2021): 6 principles for ethical AI in health.

  1. Protect human autonomy: Patients and providers maintain decision-making authority
  2. Promote human well-being and safety: AI must benefit patients, minimize harm
  3. Ensure transparency, explainability, intelligibility: Understand AI logic and limitations
  4. Foster responsibility and accountability: Clear assignment of responsibility
  5. Ensure inclusiveness and equity: AI accessible to diverse populations, mitigate bias
  6. Promote AI that is responsive and sustainable: Long-term monitoring, adaptation

30.6.4 Recommendations from Specialty Societies

American College of Radiology (ACR):

  • Data Science Institute publishes AI use cases, validation standards
  • AI-LAB accreditation program for vendor transparency

American Heart Association:

  • Statement on predictive algorithms—emphasizes need for diverse training data, validation

Society to Improve Diagnosis in Medicine:

  • Calls for AI to address diagnostic errors, but cautions about over-reliance


30.7 Data Governance and Privacy

AI requires data—for training, validation, deployment, monitoring. Data governance policies determine who can access, use, and benefit from health data.

30.8.1 Privacy Regulations

HIPAA (U.S.):

  • Protects patient health information, limits use/disclosure
  • Permits de-identified data for research, machine learning (with safeguards)
  • Business Associate Agreements required when third-party vendors access PHI
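
The de-identified data pathway above is commonly operationalized via HIPAA’s Safe Harbor method, which removes 18 categories of identifiers. The sketch below is a deliberately crude illustration on a hypothetical record dict: the field list abbreviates the 18 categories, and key-dropping cannot catch identifiers buried in free text, which is where real de-identification pipelines do most of their work.

```python
# Abbreviated subset of the 18 HIPAA Safe Harbor identifier categories.
DIRECT_IDENTIFIERS = {
    "name", "address", "phone", "fax", "email", "ssn", "mrn",
    "health_plan_id", "account_number", "device_serial", "url",
    "ip_address", "biometric_id", "photo",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen dates to year, Safe Harbor style."""
    clean = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in clean:  # assumes ISO "YYYY-MM-DD"; keep only the year
        clean["birth_year"] = clean.pop("birth_date")[:4]
    return clean
```

Even a correct Safe Harbor pass does not eliminate re-identification risk (see the challenges in the next subsection), particularly for genomic and imaging data.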

GDPR (EU):

  • Stricter than HIPAA—requires explicit consent for data processing
  • “Right to explanation” (patients can request explanation of automated decisions affecting them)
  • “Right to be forgotten” (request data deletion)—complicates AI trained on patient data

Other jurisdictions: Vary widely—some permissive (facilitating AI development), others restrictive (protecting privacy, but potentially limiting AI progress)

30.7.2 Challenges for AI Data Governance

1. Data ownership: Who owns patient data—patients, providers, institutions, payers? Determines who can authorize AI training.

2. Consent for secondary use: Patients consent to clinical care; do they consent to data use for AI development?

3. Re-identification risk: De-identified data may be re-identified (especially with genetic, imaging data). How to prevent?

4. Data sharing across institutions: Federated learning and multi-site collaborations require moving data or model updates across institutions—how to do so securely and equitably? (A sketch follows this list.)

5. Algorithmic transparency vs. privacy: Explainable AI may reveal sensitive patterns in training data. Balance?
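
Federated learning, named in challenge 4, deserves a concrete sketch because it is the standard answer to cross-institutional training without centralizing records: each site trains locally and only model parameters travel. The example below shows one round of federated averaging with invented weight vectors and cohort sizes; a production system would add secure aggregation and, often, differential privacy.

```python
import numpy as np

def federated_average(site_weights: list[np.ndarray],
                      site_sizes: list[int]) -> np.ndarray:
    """One FedAvg round: average site model weights, weighted by sample count.

    Only parameters cross institutional boundaries; raw patient data stays local.
    """
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Illustrative: three hospitals with different cohort sizes contribute updates.
site_weights = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
site_sizes = [1200, 300, 500]
print(federated_average(site_weights, site_sizes))
```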


30.8 The Path Forward: Policy Priorities for Medical AI

What policies would best serve patients, clinicians, and society?

30.8.1 Evidence-Based Regulation

  • Require prospective validation before widespread deployment (not just retrospective studies)
  • Mandate post-market surveillance: Real-world performance monitoring, adverse event reporting
  • Establish standards for AI validation: What evidence suffices for regulatory approval?

30.8.2 Reimbursement Aligned with Value

  • Link payment to outcomes: AI demonstrating clinical benefit, cost-effectiveness should be reimbursed
  • Avoid paying for hype: Don’t cover AI lacking evidence
  • Incentivize equity: Reward AI reducing disparities, penalize AI exacerbating bias

30.8.3 Liability Framework Clarifying Responsibilities

  • Manufacturer accountability: Liable for algorithmic defects, inadequate warnings
  • Physician accountability: Liable for negligent use (blindly following AI, ignoring limitations)
  • Shared responsibility models: When AI and physician collaborate, how to apportion liability?

30.8.4 Data Governance Balancing Innovation and Privacy

  • Patient rights: Consent for AI training on data, ability to opt-out
  • Data sharing infrastructure: Secure, privacy-preserving methods for multi-institutional AI development
  • Benefit-sharing: When patient data used commercially, how should patients/institutions benefit?

30.8.5 Equity and Bias Mitigation

  • Mandate diversity in training data: AI must be validated on populations where deployed
  • Monitor for disparate impact: Require reporting on performance across demographic groups
  • Discontinue biased AI: If bias unresolvable, don’t deploy

30.8.6 Workforce Training and AI Literacy

  • Medical education: Integrate AI literacy into curricula (medical school, residency, CME)
  • Clinical guidelines: Professional societies develop standards for AI use in practice
  • Public education: Patients understand AI role in their care, can advocate for themselves

30.9 Conclusion

AI regulation and policy are evolving rapidly—adapting frameworks designed for static products to dynamic, learning systems. Challenges abound: unclear evidence standards, insufficient reimbursement, unsettled liability, complex data governance.

Yet policy must evolve. The alternative—unregulated AI proliferation—risks patient harm, widening disparities, and erosion of trust. Physicians have an essential role in shaping AI governance—as clinicians experiencing AI firsthand, as advocates for patients, and as leaders in health systems and professional societies.

Key principles for AI policy:

  • Patient-centered: Prioritize safety, benefit, equity
  • Evidence-based: Demand rigorous validation, continuous monitoring
  • Transparent: Understand AI logic, limitations, failure modes
  • Accountable: Clear responsibility when AI errs
  • Equitable: Ensure AI benefits all, not just data-rich populations

Medicine stands at an inflection point. The policies adopted in the next few years will determine whether AI becomes a force for good—improving diagnoses, personalizing treatments, reducing disparities—or a force for harm—entrenching biases, profiting vendors at patients’ expense, replacing judgment with algorithms. Physicians must engage in policy debates, advocate for frameworks that serve patients first, and ensure AI remains a tool, not a master.

