Appendix A — Quick Reference: All Chapter Summaries (TL;DRs)

How to Use This Appendix

This appendix compiles all chapter TL;DRs (Too Long; Didn’t Read summaries) in one place for rapid reference. Perfect for:

  • Quick review before implementing AI tools
  • Refreshing key concepts
  • Finding specific information across chapters
  • Sharing with colleagues who need executive summaries

Each TL;DR includes:

  • The clinical context
  • Key evidence and applications
  • What works vs. what doesn’t
  • Critical takeaways

For full details, citations, and implementation guidance, read the complete chapters.


Part I: Foundations

Chapter 1: AI in Medicine - A Brief History

Key Lesson: Technical excellence ≠ clinical adoption (MYCIN, IBM Watson failures)

Major Failures:

  • MYCIN (1970s): Perfect algorithm, zero clinical use (liability, integration, trust)
  • IBM Watson Oncology: Unsafe recommendations despite Jeopardy! success
  • Google Flu Trends: Overestimated flu by 140%, discontinued

What Works: Narrow, well-defined tasks (IDx-DR diabetic retinopathy screening)


Chapter 2: AI Fundamentals for Clinicians

Core Concept: AI learns patterns from data rather than following explicit rules

Critical Metrics:

  • PPV depends on disease prevalence (MOST IMPORTANT for clinicians; see the sketch below)
  • AUC useful for comparison but doesn’t tell you clinical utility
  • Sensitivity vs. specificity trade-offs
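
Why prevalence dominates PPV: with sensitivity and specificity fixed, the share of positive calls that are true positives collapses as the disease gets rarer. A minimal Python sketch, using a hypothetical 90%-sensitive, 90%-specific test (illustrative numbers, not any specific device):

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same test at two prevalences:
print(f"10% prevalence: PPV = {ppv(0.90, 0.90, 0.10):.1%}")  # 50.0%
print(f" 1% prevalence: PPV = {ppv(0.90, 0.90, 0.01):.1%}")  # 8.3%
```

Even at 10% prevalence, half the alerts are wrong; at 1%, more than nine in ten are. Vendor accuracy figures mean little until recomputed at your prevalence.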

Key Limitations:

  • Black-box problem (can’t explain reasoning)
  • Distribution shift (works at Hospital A, fails at Hospital B)
  • Bias amplification (training data biases → algorithmic biases)


Chapter 3: The Clinical Data Challenge

Reality: Clinical data is messy (missingness, heterogeneity, temporal complexity, bias)

Critical Issues:

  • Missing data is NOT random (sicker patients have more data)
  • EHR data quality is variable (copy-paste errors, billing optimization)
  • External validation is essential (internal validation overestimates performance)

Demand: Multi-site external validation, plus validation on YOUR population


Part II: Clinical Specialties

Chapter 4: Radiology

Maturity: Most advanced medical AI specialty (500+ FDA-cleared devices)

Strong Evidence:

  • Diabetic retinopathy screening (IDx-DR): FDA-cleared, prospective RCT
  • ICH detection (Aidoc, Viz.ai): Reduces notification time
  • LVO stroke (Viz.ai): Proven to improve outcomes
  • Mammography AI (iCAD, Lunit): Improving cancer detection

“Will AI replace radiologists?” NO - AI augments, doesn’t replace


Chapter 9: Emergency & Critical Care

Proven Applications:

  • LVO stroke detection (Viz.ai, RapidAI): 30-50 min time savings
  • ICH detection: High sensitivity, reduces notification time
  • PE detection: Workflow benefits

Controversial:

  • Epic Sepsis Model: Missed 67% of sepsis cases at external validation (33% sensitivity), high false positives
  • Deterioration prediction: Variable results, implementation-dependent

Challenge: Alert fatigue in high-volume EDs


Chapter 13: Primary Care

Best Applications:

  • Diabetic retinopathy screening (IDx-DR): Strongest evidence
  • Ambient documentation (Nuance DAX): High physician satisfaction
  • Chronic disease monitoring (BP, diabetes)

Weak Evidence:

  • General diagnostic AI (too complex for current systems)
  • Symptom checkers (30-60% accuracy)

Critical: Workflow integration is essential (15-minute visits leave no time for separate systems)


Part III: Implementation

Chapter 16: Evaluating AI Systems

Evaluation Hierarchy:

  1. Vendor whitepaper (weakest)
  2. Single-site retrospective study
  3. Multi-site external validation
  4. Prospective cohort studies
  5. Randomized controlled trials (strongest)

20 Essential Questions: See the full chapter for the complete vendor evaluation checklist

Red Flags:

  • No peer-reviewed publications
  • No external validation
  • Vendor refuses to share performance data
  • Claims of 99%+ accuracy


Part IV: Practical Tools

Chapter 22: Physician AI Toolkit

Top Tier (Highest Evidence):

  • IDx-DR (diabetic retinopathy): FDA-cleared, prospective RCT
  • Viz.ai (stroke, PE): Proven clinical benefit
  • Nuance DAX (documentation): High physician satisfaction

Strong Evidence:

  • Aidoc (multiple radiology applications)
  • Paige Prostate (pathology AI)
  • Arterys/Circle CVI (cardiac MRI)

Avoid:

  • Unvalidated symptom checkers
  • Tools without peer-reviewed publications
  • “AI diagnoses everything” systems


Chapter 23: LLMs in Clinical Practice

What LLMs Can Do:

  • Literature synthesis
  • Documentation drafts (WITH REVIEW)
  • Patient education materials (WITH VERIFICATION)
  • Differential diagnosis brainstorming

Critical Limitation: Hallucinations (confident but false information)

NEVER:

  • Enter patient identifiers into public ChatGPT (NOT HIPAA-compliant)
  • Trust LLM output without verification
  • Use for urgent/emergent decisions
  • Replace specialist consultation

Rule: Physician oversight always required


Quick Decision Trees

“Should I Deploy This AI Tool?”

START: Does it have FDA clearance (for diagnostic apps)?

  • NO → Proceed with extreme caution
  • YES → Continue

Has it been externally validated?

  • NO → Do NOT deploy
  • YES → Continue

Has it been prospectively validated?

  • NO → Pilot only, close monitoring
  • YES → Continue

Does it match MY patient population?

  • NO → Local validation required
  • YES → Continue

Can I afford false positives?

  • Calculate expected false alerts (see the sketch after this tree)
  • Assess alert fatigue risk
  • Pilot before full deployment

Decision: Deploy with continuous monitoring
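
To make the gates concrete, here is a minimal Python sketch; the function names and the worked numbers are hypothetical illustrations, and the logic simply mirrors the questions above:

```python
def deployment_decision(fda_cleared: bool, externally_validated: bool,
                        prospectively_validated: bool,
                        matches_my_population: bool) -> list:
    """Walk the gates of the deployment tree above; returns the plan."""
    if not externally_validated:
        return ["Do NOT deploy"]  # hard stop
    plan = []
    if not fda_cleared:
        plan.append("proceed with extreme caution (no FDA clearance)")
    if not prospectively_validated:
        plan.append("pilot only, with close monitoring")
    if not matches_my_population:
        plan.append("local validation required first")
    plan.append("deploy with continuous monitoring")
    return plan


def expected_false_alerts(daily_volume: float, specificity: float,
                          prevalence: float) -> float:
    """Expected false-positive alerts per day among disease-negative patients."""
    return daily_volume * (1 - prevalence) * (1 - specificity)


# Illustrative numbers: 200 ED patients/day, 90% specificity, 2% prevalence
print(expected_false_alerts(200, 0.90, 0.02))  # 19.6 false alerts/day
```

Roughly twenty false alerts a day from a single accurate-sounding tool is how alert fatigue starts; run this arithmetic before the pilot, not after.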


“Should I Trust This LLM Output?”

Is it a medical fact (drug dose, diagnosis, treatment)?

  • YES → VERIFY against an authoritative source
  • NO → Continue

Are lives at stake?

  • YES → Multiple verifications required
  • NO → Continue

Can I cite the source?

  • LLM provides citation → CHECK IT (often fabricated)
  • No citation → Treat as unverified

Decision: Use as draft/idea, never final answer for medical decisions
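
The same pattern works for this tree. A minimal sketch (hypothetical function name, logic taken directly from the questions above) that returns the required verification steps rather than a verdict:

```python
def llm_output_checks(is_medical_fact: bool, lives_at_stake: bool,
                      has_citation: bool) -> list:
    """Collect the verification steps demanded by the tree above."""
    steps = []
    if is_medical_fact:
        steps.append("verify against an authoritative source")
    if lives_at_stake:
        steps.append("require multiple independent verifications")
    if has_citation:
        steps.append("check the citation itself (often fabricated)")
    else:
        steps.append("treat the claim as unverified")
    steps.append("use as draft/idea only, never as the final answer")
    return steps

# Example: a drug-dose claim, lives at stake, citation provided
for step in llm_output_checks(True, True, True):
    print("-", step)
```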


The Ultimate Clinical Bottom Lines

For All Physicians:

  1. Demand evidence: External + prospective validation minimum
  2. PPV at YOUR prevalence: Most critical metric
  3. Local validation: Test on YOUR data before deployment
  4. Continuous monitoring: Performance drifts over time
  5. Physician oversight always: You remain medically and legally responsible
  6. Start small: Pilot, learn, expand cautiously
  7. Alert fatigue is real: Optimize thresholds carefully
  8. Privacy first: HIPAA-compliant systems only for patient data
  9. Transparency matters: Inform patients about AI-assisted care
  10. Stay informed: Field evolving rapidly

Red Lines (Do NOT Cross):

❌ Deploy AI without validation
❌ Trust vendor claims without verification
❌ Ignore high false positive rates
❌ Skip local pilot testing
❌ Use public LLMs for patient data
❌ Rely on AI for urgent life-threatening decisions without verification
❌ Assume AI works equally well for all patient populations


Next Steps

To implement AI safely:

  1. Read relevant specialty chapter (Part II)
  2. Review evaluation framework (Chapter 16)
  3. Check physician toolkit for specific tools (Chapter 22)
  4. Assess LLM use cases if applicable (Chapter 23)
  5. Plan local pilot with monitoring
  6. Document everything
  7. Iterate based on real-world performance

Remember: AI is a powerful tool, not a replacement for clinical judgment. Use it wisely, monitor it continuously, and prioritize patient safety.


For complete details, evidence, and implementation guidance, read the full chapters.