Why AI in Healthcare Shouldn’t Come at the Expense of Low-Income Patients


Imagine walking into a doctor’s office, only to be greeted by a medical assistant who uses artificial intelligence to diagnose your ailments. Sounds like a futuristic dream, right? But in southern California, this is a harsh reality for many low-income patients. A private company, Akido Labs, is running clinics where patients are seen by medical assistants who use an AI system that listens to the conversation and spits out potential diagnoses and treatment plans, which are then reviewed by a doctor. The goal? To ‘pull the doctor out of the visit.’ But this is a recipe for disaster.

## The Risks of AI in Healthcare

The trend of AI in healthcare is gaining momentum, with two out of three physicians using AI to assist with their daily work, including diagnosing patients. But this trend has a deeper impact on people with low incomes, who already face substantial barriers to care and higher rates of mistreatment in healthcare settings. People who are unhoused and have low incomes should not be testing grounds for AI in healthcare. Instead, their voices and priorities should drive whether, how, and when AI is implemented in their care.

## The Dark Side of AI Bias

Studies show that AI-enabled tools generate inaccurate diagnoses. A 2021 study in Nature Medicine found that AI algorithms trained on large chest X-ray datasets systematically under-diagnosed Black and Latinx patients, patients recorded as female, and patients with Medicaid insurance. Another study, published in 2024, found that AI misdiagnosed breast cancer screenings among Black patients. This systematic bias risks deepening health inequities for patients already facing barriers to care.

## The Uninformed Patient

Some patients aren’t even informed that their health provider or healthcare system is using AI. A medical assistant told the MIT Technology Review that his patients know an AI system is listening, but he does not tell them that it makes diagnostic recommendations. This harkens back to an era of exploitative medical racism where Black people were experimented on without informed consent and often against their will.

## The Larger Impact of AI in Healthcare

TechTonic Justice, an advocacy group working to protect economically marginalized communities from the harms of AI, published a report estimating that 92 million Americans with low incomes have some basic aspect of their lives decided by AI. A real-life example is playing out in federal courts right now, where Medicare Advantage customers are suing UnitedHealthcare and Humana for denying coverage due to AI system errors. If you have financial resources, you can get quality healthcare. But if you are unhoused or have a low income, AI may bar you from accessing healthcare entirely. That’s medical classism. We should not experiment on patients who are unhoused or have low incomes for an AI rollout. The documented harms are greater than the potential, unproven benefits promised by start-ups and other tech ventures.

Given the barriers that people who are unhoused and have low incomes face, it is crucial that they receive patient-centered care from a human healthcare provider who listens to their health-related needs and priorities. We cannot create a standard in which health practitioners take a backseat while AI – run by private companies – takes the lead. An AI system that ‘listens’ in, developed without rigorous evaluation by the affected communities themselves, disempowers patients by stripping them of the authority to decide which technologies, including AI, are used in their healthcare.
