Independent Research Initiative
Is AI Treating
All Patients Fairly?
Healthcare AI is used by millions. We test whether these systems treat everyone equally—regardless of name, race, or gender. Our methods are open. Our findings are verifiable.
The Problem
AI systems are increasingly used to guide healthcare decisions. But what if they've learned our biases?
What We Already Know
- Obermeyer et al. (2019) in Science: Healthcare algorithms showed racial bias, underestimating illness severity for Black patients
- Hoffman et al. (2016) in PNAS: Medical professionals exhibited racial bias in pain assessment
- Schulman et al. (1999) in NEJM: Identical cardiac cases received different referral rates based on race and gender
If humans show bias, and AI learns from human data, does AI perpetuate that bias?
Our Approach
Rigorous methodology. Open methods. Verifiable results.
Matched-Pair Testing
Submit identical symptoms to AI systems, varying only the patient name. If outputs differ, the name is the cause.
Statistical Rigor
Effect sizes (Cohen's d), significance testing with Bonferroni correction, pre-registered analysis plan.
Open Science
All methods, code, and data publicly available. Anyone can replicate our findings.
Responsible Disclosure
Share findings with AI developers before publication. Goal is improvement, not attack.
Research Status
Protocol Design
Pre-registered methodology based on peer-reviewed frameworks
Test Materials
50+ name pairs, 20+ symptom profiles developed
Data Collection
Testing consumer AI systems and LLMs
Analysis & Disclosure
Statistical analysis, responsible disclosure to developers
Publication
Public findings release
Don't Trust. Test. Verify.
Our methodology is designed so anyone can understand it, replicate it, or extend it. Healthcare AI fairness isn't our problem to solve alone—it's everyone's.