29  AI and Global Health Equity

Learning Objectives

AI could reduce global health disparities—or worsen them. This chapter examines AI applications in low-resource settings and equity implications globally. You will learn to:

  • Understand the dual potential of AI: reducing vs. exacerbating health disparities
  • Evaluate AI applications in low- and middle-income countries (LMICs)
  • Recognize infrastructure, data, and resource constraints in global settings
  • Assess telemedicine and AI-enabled remote diagnostics for underserved populations
  • Identify algorithmic bias and its impact on health equity
  • Navigate ethical considerations for AI deployment in resource-limited settings
  • Advocate for equitable AI development and deployment globally

Essential for global health practitioners, policymakers, and equity-minded physicians.

The Equity Challenge:

AI has transformative potential for global health—extending specialist expertise to remote areas, enabling low-cost diagnostics, and addressing physician shortages. Yet current AI development is concentrated in high-income countries, with models trained on Western populations and deployed where resources are already abundant.

Key Disparities:

  • Data inequality: LMICs generate far less clinical data; datasets underrepresent diverse populations
  • Infrastructure gaps: AI requires electricity, internet, devices—often absent in resource-limited settings
  • Workforce challenges: Shortage of AI expertise in LMICs; brain drain to high-income countries
  • Economic barriers: High costs of AI development and deployment favor wealthy institutions
  • Algorithmic bias: Models trained on non-representative data perform poorly on underrepresented populations

Promising Applications:

  • Diabetic retinopathy screening (Google’s Aravind partnership in India; IDx-DR in the U.S.)
  • Tuberculosis diagnosis from chest X-rays in sub-Saharan Africa
  • Malaria detection from blood smears (smartphone microscopy + AI)
  • Maternal-fetal ultrasound interpretation in low-resource settings
  • SMS-based chatbots for health information and triage

The Path to Equity: Achieving equitable AI in global health requires intentional design for low-resource settings, diverse training data, partnerships with LMIC institutions, and policies prioritizing global health equity over commercial interests.

29.1 Introduction

The global burden of disease falls disproportionately on low- and middle-income countries (LMICs). Over 80% of the world’s population lives in LMICs, yet these regions account for only 20% of global health spending. Physician-to-population ratios are starkly unequal: sub-Saharan Africa has 2 physicians per 10,000 people vs. 30+ in high-income countries.

Could AI help? Optimists envision AI extending specialist expertise to remote clinics, enabling low-cost diagnostics via smartphones, and compensating for workforce shortages. Skeptics warn AI may exacerbate disparities—developed and deployed by wealthy institutions for wealthy populations, with limited applicability to LMIC contexts.

Both perspectives hold truth. AI’s impact on global health equity depends on intentional choices: where development resources flow, whose data trains models, which problems receive attention, and how technologies are deployed. This chapter examines AI’s dual potential—to reduce or widen global health inequities—and charts paths toward equity-centered development.


29.2 The Promise: AI for Low-Resource Settings

29.2.1 Extending Specialist Expertise

Problem: Specialist shortages in LMICs are severe. Example: Sub-Saharan Africa has ~0.1 ophthalmologists per 100,000 people (vs. 7 per 100,000 in the U.S.). Most people with diabetic retinopathy, glaucoma, or cataracts never see a specialist.

AI solution: Autonomous diagnostic AI deployed in primary care clinics, community health centers, or mobile screening programs. Non-specialists capture images (retinal photos, skin lesions, ultrasounds), AI interprets, specialists review only flagged cases.

Examples:

1. Diabetic Retinopathy Screening (India)
   • Context: India has 77 million people with diabetes, limited ophthalmology workforce
   • Deployment: Google AI, Aravind Eye Care System partnership—retinal screening in primary care clinics across India
   • Impact: Thousands screened, referrals to specialists when AI detects referable retinopathy (Gulshan et al. 2016)
   • Challenge: Ensuring follow-up care for AI-flagged patients (diagnosis without treatment doesn’t improve outcomes)

2. Tuberculosis Detection (Africa)
   • Context: TB remains leading infectious disease killer; diagnosis requires expert radiology, often unavailable
   • AI solution: Chest X-ray AI (Qure.ai, Delft Imaging) detects TB-suggestive findings, prioritizes cases for sputum testing or treatment
   • Deployment: Pilot programs in Kenya, South Africa, India
   • Evidence: Sensitivity comparable to expert radiologists, faster turnaround than traditional workflows

3. Maternal-Fetal Ultrasound (Global)
   • Context: Maternal mortality concentrated in LMICs; lack of trained sonographers limits prenatal care
   • AI solution: AI-guided ultrasound for non-experts—software assists probe positioning, image acquisition, interpretation (fetal growth, placental location, anomalies)
   • Potential: Extend prenatal ultrasound to rural health posts, community midwives
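The capture-grade-refer loop common to these deployments can be sketched in a few lines. This is an illustrative skeleton, not any deployed system: the `ScreeningStation` class, the 0.5 referral threshold, and the `grade` callable standing in for the AI model are all assumptions for the example.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

REFERRAL_THRESHOLD = 0.5  # illustrative cutoff for "referable" disease


@dataclass
class ScreeningStation:
    """Screen-and-refer loop: a non-specialist captures an image, the AI
    grades it, and only flagged cases reach the scarce specialist."""
    grade: Callable[[object], float]  # stand-in for the AI model's output
    specialist_queue: List[Tuple[str, float]] = field(default_factory=list)

    def screen(self, patient_id: str, image) -> str:
        score = self.grade(image)
        if score >= REFERRAL_THRESHOLD:
            # Flagged: queue for specialist review
            self.specialist_queue.append((patient_id, score))
            return "refer"
        return "routine-recall"
```

In practice the threshold would be tuned on locally validated data to balance specialist workload against missed disease.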

29.2.2 Low-Cost Diagnostics via Smartphones

Smartphones are ubiquitous—even in low-resource settings. Smartphone-based diagnostics + AI could enable point-of-care testing without expensive lab equipment.

Examples:

1. Malaria Detection
   • Smartphone microscopy (phone camera + clip-on lens) images blood smears
   • AI detects malaria parasites
   • Replaces traditional microscopy requiring trained technicians

2. Anemia Screening
   • Smartphone photos of fingernails or conjunctiva
   • AI estimates hemoglobin levels from color
   • Non-invasive, no lab infrastructure needed

3. Cervical Cancer Screening
   • Smartphone colposcopy + AI → identify precancerous lesions
   • Addresses shortage of pathologists for Pap smear interpretation

4. Nutritional Assessment
   • Smartphone photos of children → AI estimates malnutrition (mid-upper arm circumference, weight-for-height)
   • Community health workers screen without anthropometric equipment

Limitations:

  • Requires reliable smartphones (not universally available)
  • Internet connectivity for cloud-based AI (not reliable in many LMIC settings)
  • Validation in diverse populations (most models trained on high-income populations)
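To make the anemia example concrete, here is a deliberately simplified sketch of estimating hemoglobin from conjunctiva color. The redness feature and the linear coefficients are invented for illustration; a real system would learn and validate this mapping on local data, as the limitations above stress.

```python
def mean_redness(pixels):
    """Average red-channel fraction over (r, g, b) pixels in [0, 255]."""
    total = sum(r / (r + g + b) for r, g, b in pixels)
    return total / len(pixels)


def estimate_hemoglobin(pixels, slope=20.0, intercept=2.0):
    """Hypothetical linear map from conjunctival redness to hemoglobin (g/dL).

    slope and intercept are made-up placeholders; in a real device they
    would be fit against lab-measured hemoglobin in the target population.
    """
    return slope * mean_redness(pixels) + intercept
```

The point of the sketch is the pipeline shape (image feature → calibrated estimate), not the specific feature, which would be far more robust in a trained model.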

29.2.3 Telemedicine and Remote Consultation

AI can triage patients for telemedicine consultations, prioritize urgent cases, and assist diagnosis when specialists are remote.

Applications:

  • Chatbots for symptom assessment: SMS or app-based AI triages patients—urgent vs. routine vs. self-care
  • AI-assisted radiology: Remote radiologist reviews imaging with AI pre-interpretation (flags critical findings, provides measurements)
  • Multilingual health information: AI translates and localizes health content for diverse populations

Example: WHO digital health interventions

  • AI chatbots provide health information, symptom checking, appointment scheduling in low-resource settings
  • Available in multiple languages, accessible via basic mobile phones
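A rule-based keyword matcher gives the flavor of SMS triage, though real symptom checkers use far richer clinical logic and local-language content. The keyword lists and response tiers below are illustrative assumptions, not WHO content.

```python
# Illustrative three-tier SMS triage: emergency, routine, self-care.
EMERGENCY_KEYWORDS = {"chest pain", "bleeding", "unconscious", "seizure"}
ROUTINE_KEYWORDS = {"fever", "cough", "rash", "headache"}


def triage_sms(message: str) -> str:
    """Map a free-text SMS to a triage tier by keyword matching."""
    text = message.lower()
    if any(k in text for k in EMERGENCY_KEYWORDS):
        return "EMERGENCY: go to the nearest health facility now"
    if any(k in text for k in ROUTINE_KEYWORDS):
        return "ROUTINE: visit your clinic within a few days"
    return "SELF-CARE: reply MORE for home-care advice"
```

Even this toy version illustrates why localization matters: the keyword lists would have to be rebuilt, not translated word-for-word, for each language and disease context.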


29.3 The Peril: How AI Could Worsen Inequities

29.3.1 Data Colonialism and Extraction

Problem: AI requires massive training datasets. LMIC clinical data is scarce, poorly digitized, or siloed within institutions. Meanwhile, tech companies from high-income countries seek LMIC data to train “global” models—but benefits accrue primarily to companies, not source populations.

Example scenarios:

  • Tech company partners with LMIC hospital, extracts patient data for AI training, commercializes model in high-income countries, no local benefit
  • Research institutions from wealthy countries conduct studies in LMICs, collect data, publish papers, provide no sustainable infrastructure or capacity-building

Ethical concerns:

  • Informed consent: Do patients understand data may be used for commercial AI development?
  • Benefit-sharing: How do source populations benefit from AI trained on their data?
  • Data sovereignty: Who controls LMIC health data?

Recommendations:

  • Require benefit-sharing agreements (e.g., AI models made available to source institutions at low/no cost)
  • Build local data infrastructure and governance capacity
  • Prioritize partnerships over extraction—co-develop AI with LMIC researchers, not just collect their data

29.3.2 Algorithmic Bias and Underrepresentation

Most medical AI is trained on data from high-income countries, predominantly white populations. Models may perform poorly on underrepresented groups.

Examples of bias impacting LMICs:

1. Imaging AI trained on Western populations
   • Skin cancer detection AI performs worse on darker skin tones (most training data from light-skinned patients)
   • Retinal imaging AI may fail in populations with different retinal anatomy (e.g., darker fundus pigmentation)

2. Clinical risk scores miscalibrated for diverse populations
   • Sepsis prediction models trained on U.S. ICUs may not generalize to African or Asian populations (different disease prevalence, comorbidities)

3. Language and cultural bias
   • NLP models trained on English medical text fail in languages with limited digital corpora (Swahili, Amharic, Urdu)
   • Symptom checkers assume Western disease presentation patterns, miss tropical diseases, endemic conditions

Consequences:

  • Widening health disparities: AI benefits populations already well-served, ignores underserved
  • Harm: Incorrect diagnoses, missed diseases in underrepresented populations
  • Wasted resources: Deploying ineffective AI in settings where it wasn’t validated

Solutions:

  • Diversify training datasets: Actively collect data from LMICs, diverse populations
  • Validate models in deployment populations before widespread use
  • Develop AI locally: Empower LMIC researchers to build models on local data
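Validating in the deployment population is concrete and cheap relative to the harm it prevents. A minimal sketch: compare the model's flags against locally confirmed diagnoses and report sensitivity and specificity with confidence intervals (Wilson score intervals here). Function and variable names are illustrative, not from any standard toolkit.

```python
import math


def wilson_ci(successes: int, n: int, z: float = 1.96):
    """Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 0.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (centre - half, centre + half)


def local_validation(preds, labels):
    """preds/labels: parallel 0/1 lists (model flag vs. confirmed diagnosis)."""
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return {
        "sensitivity": sens, "sensitivity_ci": wilson_ci(tp, tp + fn),
        "specificity": spec, "specificity_ci": wilson_ci(tn, tn + fp),
    }
```

Wide confidence intervals from a small local sample are themselves a finding: they signal that the evidence base is too thin to justify scaling.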

29.3.3 Infrastructure and Resource Constraints

AI deployment assumes infrastructure often absent in low-resource settings.

Common assumptions that fail in LMICs:

  • Reliable electricity: Required for computers, medical devices, data storage
  • Internet connectivity: Cloud-based AI requires bandwidth—spotty or unavailable in rural areas
  • Skilled IT workforce: Maintaining AI systems requires technical expertise
  • Device availability: Smartphones, computers, medical imaging equipment expensive
  • Regulatory frameworks: Many LMICs lack regulatory capacity for AI oversight

Example failures:

  • AI system deployed in rural clinic requires internet for cloud processing → unreliable connectivity = system unusable
  • Imaging AI requires high-resolution images → low-quality equipment in LMIC hospitals produces images incompatible with AI

Design principles for low-resource settings:

  • Offline functionality: Edge AI (on-device processing) doesn’t require internet
  • Low-power operation: Solar-powered devices, energy-efficient algorithms
  • Robustness: Perform reliably with lower-quality data, equipment variability
  • Simplicity: Minimal training required, intuitive interfaces
  • Affordability: Open-source software, low-cost hardware
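The offline-functionality principle often reduces to a simple pattern: score on-device, queue results locally, and upload opportunistically when connectivity returns. A minimal sketch, with all class and parameter names assumed for illustration:

```python
from collections import deque


class EdgeScreeningDevice:
    """Offline-first edge AI: inference never waits on the network."""

    def __init__(self, model, uplink):
        self.model = model    # on-device model; scores without internet
        self.uplink = uplink  # callable returning True iff upload succeeded
        self.pending = deque()  # results awaiting upload

    def screen(self, case_id, image):
        result = {"case": case_id, "score": self.model(image)}
        self.pending.append(result)  # stored locally; never blocks
        return result

    def sync(self):
        """Try to flush queued results; stop at the first failure (offline).

        Returns the number of results still pending."""
        while self.pending:
            if not self.uplink(self.pending[0]):
                break
            self.pending.popleft()
        return len(self.pending)
```

The clinic keeps screening during an outage, and `sync()` can be retried whenever a connection appears, which is exactly the robustness the bullet list above asks for.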

29.3.4 Brain Drain and Capacity Challenges

AI development requires a skilled workforce: computer scientists, data engineers, clinician-researchers. LMICs face brain drain—talented individuals migrate to high-income countries for training and jobs.

Cycle of inequality:

  1. LMICs lack AI expertise → depend on foreign experts
  2. LMIC students train abroad → few return (higher salaries, better research infrastructure elsewhere)
  3. Local capacity doesn’t develop → continued dependence

Breaking the cycle:

  • Invest in local AI education programs (universities, training centers)
  • Create opportunities and competitive salaries for LMIC AI researchers
  • Foster research collaborations that build local capacity (not just extract data)
  • Support open-source AI tools and shared resources (lower barriers to entry)


29.4 Case Studies: AI in Global Health Settings

29.4.1 Success: Diabetic Retinopathy Screening in India

Background: Diabetes epidemic in India, insufficient ophthalmologists for screening.

Intervention: AI-based retinal screening (Google AI, Aravind Eye Care partnership) deployed in primary care clinics. Non-ophthalmologists capture retinal images, AI detects referable retinopathy, ophthalmologists review only flagged cases.

Outcomes:

  • Thousands screened in first year
  • AI sensitivity/specificity comparable to ophthalmologists (Gulshan et al. 2016)
  • Increased access to screening in underserved areas

Lessons:

  • Partnership with local institution (Aravind) ensured cultural fit, sustainability
  • Addressed real clinical need (workforce shortage)
  • Integration into existing workflows (primary care clinics)

Ongoing challenges:

  • Ensuring patients with positive screens receive follow-up care (diagnosis without treatment insufficient)
  • Sustaining program beyond pilot funding
  • Expanding to rural, remote areas with limited connectivity

29.4.2 Mixed Results: AI for Malaria Detection

Background: Malaria diagnosis requires microscopy, but trained technicians are scarce in malaria-endemic regions.

Intervention: Smartphone microscopy + AI to detect malaria parasites in blood smears.

Promise:

  • Lower cost than traditional microscopy infrastructure
  • Community health workers could perform diagnosis

Challenges:

  • Image quality varies significantly with smartphone camera, lighting, sample preparation
  • AI trained on high-quality lab images often fails on field-collected images
  • Limited prospective validation in real-world settings
  • Sustainability: Who maintains smartphones, provides technical support?

Current status: Proof-of-concept demonstrated, but limited deployment. More work needed to achieve field-ready, robust performance.

29.4.3 Cautionary Tale: Unvalidated AI Deployment

Scenario (composite of real incidents): International NGO deploys AI diagnostic tool in LMIC clinic without local validation.

What went wrong:

  • AI trained on Western population performed poorly in local population (different disease prevalence, patient characteristics)
  • Clinic staff insufficiently trained, misinterpreted AI outputs
  • No infrastructure for software updates, technical support
  • Patients harmed by incorrect diagnoses
  • Community trust in healthcare damaged

Lessons:

  • Validation in deployment population is non-negotiable
  • Training and support infrastructure essential
  • Engage local stakeholders from design through deployment
  • Pilot carefully before scaling


29.5 Policy and Governance for Equitable AI

29.5.1 Principles for Equitable Global Health AI

1. Community Engagement and Co-Design
   • Involve end-users (patients, clinicians, community health workers) from project inception
   • Address locally prioritized health problems, not just problems interesting to external developers

2. Benefit-Sharing
   • AI trained on LMIC data should benefit source populations
   • Models, tools made available to LMIC institutions at low/no cost
   • Revenue-sharing for commercial applications

3. Capacity-Building
   • Invest in local AI research and education infrastructure
   • Collaborative partnerships, not extractive relationships
   • Train local workforce to develop, deploy, maintain AI systems

4. Open Science and Data Sharing
   • Open-source algorithms, shared datasets (with appropriate privacy protections)
   • Lower barriers to entry for LMIC researchers and institutions

5. Regulatory Harmonization
   • Support LMIC regulatory capacity for AI oversight
   • Avoid duplicative, burdensome processes (recognize WHO, regional body approvals)

6. Equity-Focused Funding
   • Prioritize funding for AI addressing LMIC health challenges (tropical diseases, maternal health, malnutrition)
   • Currently, funding flows toward high-income country priorities

29.5.2 Role of International Organizations

WHO (World Health Organization):

  • Developing guidelines for AI in health (ethics, governance, evaluation)
  • Convening global stakeholders to address equity
  • Supporting LMIC capacity-building

Multilateral Development Banks (World Bank, regional banks):

  • Financing digital health infrastructure in LMICs
  • Supporting national AI strategies

Academic and Research Networks:

  • Facilitating collaborative research
  • Training next generation of LMIC AI researchers


29.6 Telemedicine and AI: Expanding Access

Telemedicine has grown dramatically, accelerated by the COVID-19 pandemic. AI can enhance telemedicine, particularly benefiting underserved populations.

29.6.1 AI Applications in Telemedicine

1. Triage and Symptom Assessment
   • AI chatbots assess symptoms, determine urgency (emergency vs. routine vs. self-care)
   • Available 24/7, multiple languages
   • Directs patients to appropriate level of care

2. Remote Diagnostics
   • AI interprets images, ECGs, lab results—extends specialist expertise remotely
   • Example: Rural clinic captures chest X-ray, AI flags pneumonia, remote physician reviews and prescribes treatment

3. Virtual Health Assistants
   • AI assists with medication reminders, chronic disease management, health education
   • Particularly valuable for patients with limited health literacy

4. Translation and Localization
   • AI-powered translation enables cross-language telemedicine consultations
   • Localizes health content to cultural context

29.6.2 Challenges for Telemedicine in Low-Resource Settings

  • Connectivity: Requires reliable internet, video bandwidth (often unavailable in rural LMICs)
  • Digital literacy: Patients and providers may lack familiarity with technology
  • Trust: Patients may prefer in-person care, distrust remote diagnosis
  • Integration: Telemedicine must integrate with in-person care, referral systems
  • Reimbursement: Payment models for telemedicine often unclear in LMICs

29.7 The Digital Divide: Access and Equity

AI benefits require access to technology. Yet digital divides persist—between and within countries.

29.7.1 Dimensions of Digital Divide

1. Infrastructure
   • Electricity, internet connectivity concentrated in urban, high-income areas
   • Rural, remote, low-income populations underserved

2. Devices
   • Smartphones, computers expensive relative to income in LMICs
   • Older, less capable devices may not run modern AI applications

3. Digital Literacy
   • Operating smartphones, navigating apps requires skills not universally held
   • Education disparities correlate with digital literacy

4. Language
   • Most AI developed in English, other major languages
   • Hundreds of languages underrepresented or absent

29.7.2 Bridging the Divide

Infrastructure investments:

  • Expand electricity grids, renewable energy (solar) for remote areas
  • Subsidize internet access, mobile data for health applications

Device affordability:

  • Low-cost smartphones, tablets designed for developing markets
  • Public-private partnerships to subsidize devices for health workers

Education and training:

  • Digital literacy programs for patients, community health workers
  • AI interfaces designed for low-literacy users (voice, images, symbols)

Language inclusion:

  • Develop NLP tools for underrepresented languages
  • Community-based translation and localization


29.8 Conclusion: Toward Equitable AI in Global Health

AI’s impact on global health equity is not predetermined—it’s a choice. Current trajectories favor high-income countries and populations, but intentional efforts can redirect toward equity.

What’s needed:

  1. Prioritize LMIC health challenges: Fund AI research addressing diseases, conditions disproportionately affecting LMICs (malaria, TB, maternal mortality, malnutrition)

  2. Diversify datasets: Actively collect, share data representing global diversity (geography, demographics, disease patterns)

  3. Design for low-resource settings: Build AI assuming limited infrastructure, prioritize offline functionality, robustness, affordability

  4. Capacity-building: Invest in LMIC AI education, research infrastructure, workforce development

  5. Equitable partnerships: Collaborate with LMIC institutions, share benefits, respect data sovereignty

  6. Regulatory support: Help LMICs develop AI oversight capacity, harmonize international standards

  7. Open science: Share algorithms, datasets, tools openly to lower barriers for LMIC researchers

The alternative: Without intentional equity focus, AI will widen global health disparities—benefiting the already well-served, ignoring the underserved. Physicians, researchers, policymakers, and technologists must advocate for equity-centered AI development. The goal is not AI for AI’s sake, but better health for all—especially those most in need.

Medicine’s ethical foundation demands addressing suffering wherever it occurs. AI is a tool—powerful, but morally neutral. Its impact depends on who builds it, for whom, and toward what ends. Let’s build AI that serves global health equity, not just global health markets.


29.9 References