Preface

AI as a Clinical Tool, Not a Replacement

Medicine has always been about pattern recognition—correlating symptoms with diagnoses, identifying subtle findings that distinguish one condition from another, synthesizing vast amounts of information to make critical decisions under uncertainty.

Artificial intelligence is entering clinical medicine not as a replacement for physician judgment, but as a new category of tool that amplifies our pattern-recognition capabilities in ways previously impossible.

The challenge isn’t whether AI will transform medical practice. It already has. Algorithms interpret chest X-rays and detect pneumonia. Deep learning models analyze pathology slides and identify malignancies. Natural language processing extracts insights from electronic health records. Clinical decision support systems recommend evidence-based treatments.

The real challenge facing physicians is this: How do we critically evaluate these tools, integrate them responsibly into clinical workflow, understand their limitations and failure modes, navigate medico-legal implications, and ultimately decide which AI applications genuinely improve patient care—all while continuing to see patients, manage complex cases, and stay current with traditional medical literature?

This handbook attempts to address that challenge.

What This Handbook Is Not

This is not a computer science textbook. If you want to build neural networks from scratch or understand backpropagation algorithms, excellent technical resources exist elsewhere, and you don’t need that knowledge to use AI tools effectively in clinical practice.

This is not a manifesto claiming AI will solve medicine’s hardest problems. It won’t. Healthcare’s most challenging issues—health inequity, access barriers, burnout, administrative burden, the social determinants of health—are fundamentally systemic problems that no algorithm can fix alone.

This is not a catalog of futuristic possibilities 20 years away. I’ve focused on what exists now, what has peer-reviewed clinical evidence, what physicians are actually using, what works (sometimes), what fails (often), and what you can realistically evaluate for your practice today.

This is not vendor-neutral in the false-balance sense. Where evidence clearly supports specific tools, I name them. Where tools have failed spectacularly despite marketing hype, I document those failures. Physicians deserve honest assessments based on clinical evidence, not diplomatic neutrality between good and bad applications.

What This Handbook Attempts

I’ve tried to write the resource I needed five years ago when I began encountering AI tools in clinical practice and realized traditional medical education hadn’t prepared me to evaluate them critically.

This handbook attempts to:

  • Translate without oversimplifying — AI is neither magic nor “just statistics”; understanding the middle ground matters
  • Ground everything in clinical evidence — Citations from JAMA, NEJM, The Lancet, Nature Medicine, BMJ, and specialty journals throughout
  • Acknowledge failures as prominently as successes — IBM Watson for Oncology’s failure teaches more than another diabetic retinopathy screening success story
  • Address specialty-specific applications — Pediatrics faces different AI challenges than radiology; both deserve dedicated attention
  • Focus on implementation realities — Technical performance metrics matter less than whether a tool integrates into actual clinical workflow
  • Center patient safety and medical liability — Algorithms don’t face malpractice suits; physicians do

A Note on Evidence and Uncertainty

Medical AI research moves extraordinarily fast. A systematic review published six months ago may be outdated. Regulatory approvals happen quarterly. New foundation models emerge monthly. Clinical trials report results continuously.

I’ve attempted to synthesize current evidence while acknowledging what we don’t yet know. Where consensus exists, I state it clearly. Where evidence conflicts or gaps remain, I acknowledge uncertainty rather than manufacture false confidence.

This handbook prioritizes peer-reviewed evidence from major medical journals, FDA-cleared applications, and real-world clinical implementations over press releases, vendor whitepapers, and theoretical capabilities.

If you find evidence that contradicts what I’ve written—newer studies, clinical trials, systematic reviews—please contribute. Medicine advances through evidence, and this handbook should reflect our best current understanding.

On Reading This Book

You don’t need to read sequentially. Medical specialties differ substantially in AI maturity and applications. Jump directly to your specialty chapter. Use the search function. Read TL;DRs for quick orientation. Dive deep when evaluating tools for your practice.

Treat this as a clinical reference, not a textbook. Return to relevant sections when facing specific decisions. Share chapters with colleagues. Adapt frameworks to your institutional context.

Three reading approaches:

1. The Quick Scan (TL;DRs only)
   • Read chapter summaries for rapid orientation
   • Perfect for busy clinicians needing key takeaways
   • Use when evaluating whether a chapter addresses your immediate question
   • Best for: Attendings, department chairs, anyone triaging information

2. The Deep Dive (Full chapters)
   • Read complete chapters for comprehensive understanding
   • Includes clinical evidence, case studies, implementation details
   • Use when implementing AI tools or developing institutional policies
   • Best for: Informaticists, researchers, residency educators, early adopters

3. The Specialty Focus (Your chapters only)
   • Read foundations (Part I) + your specialty + implementation (Part III)
   • Most efficient path for specialty-specific guidance
   • Skip chapters outside your clinical domain
   • Best for: Practicing physicians in specific specialties

Most importantly: Stay skeptical. Question vendor claims. Demand prospective clinical trials, not just retrospective validation studies. Prioritize patient safety over efficiency gains. Remember that you remain legally and ethically responsible for clinical decisions, regardless of what any algorithm recommends.

How to Use the Chapter Summaries (TL;DRs)

Every chapter begins with an expandable 📋 Chapter Summary (TL;DR) section designed for rapid comprehension without sacrificing clinical accuracy.

What’s Inside Each TL;DR:

  • The Clinical Context: Why this matters for patient care
  • Key Evidence: Major studies and systematic reviews
  • What Actually Works: Evidence-based applications currently in clinical use
  • What Doesn’t Work: Documented failures and limitations
  • Clinical Bottom Line: The essential takeaway for practice
  • Medico-Legal Considerations: Liability and regulatory issues where relevant

Visual Indicators:

  • ⚠️ Safety concerns and important warnings
  • ✅ Evidence-supported applications
  • ❌ Failed implementations and what went wrong
  • 💡 Key clinical insights
  • 📊 Important data from trials and studies

The TL;DRs are not superficial summaries. They are a distillation of peer-reviewed literature into clinically actionable guidance. Many physicians have told me they use the TL;DRs for 80% of their needs and read full chapters only when implementing specific tools.

That’s intentional. Your clinical time is precious. Use it wisely.

A Personal Note

I began my career in clinical medicine in 1995, during an era when having a personal computer was common but laptops were prohibitively expensive for most young physicians. I watched technology gradually transform medical practice—from paper charts to EHRs, from film radiographs to PACS, from library journals to PubMed on smartphones.

When I first encountered mentions of artificial intelligence in medical journals around 2015, these ideas seemed futuristic—interesting research directions but not immediately relevant to clinical practice. Then, in 2023, I experimented with ChatGPT soon after its release and immediately realized: This is going to change medicine fundamentally. Clinical practice will not look the same in five years.

This wasn’t about answering clinical questions or synthesizing literature—though AI does both remarkably well. This was about reimagining how we approach differential diagnosis, how we stay current with exploding medical knowledge, how we document patient encounters, how we support clinical decision-making under uncertainty.

I wrote this handbook because I believe physicians deserve evidence-based, honest, practical guidance about AI tools that respects both clinical expertise and the complexity of real patient care.

We stand at an inflection point. The decisions we make now—about which AI tools to adopt, how to evaluate them critically, how to integrate them safely into clinical workflow—will shape medical practice for decades.

Let’s make those decisions based on evidence, not hype.


Bryan Tegomoh, MD, MPH
Berkeley, California
January 2025
