I recently read a press release from a major language service provider announcing the rollout of their AI interpreter tool to thousands of healthcare sites across the country. While the company frames this development as a way to reduce costs and improve access to language services, the message raises more concern than optimism. The announcement celebrates technology as a solution to a pressing healthcare need, but it fails to seriously address the deeper risks of substituting artificial intelligence for human interpreters in clinical care. As a medical interpreter, I find this framing both short-sighted and potentially harmful, not only for the interpreting profession but for the very patients healthcare systems aim to serve.
Yes, healthcare must seek cost-effective innovations. But we must ask: are we solving the right problem? And more importantly, are we introducing greater risks under the guise of efficiency?
The fundamental issue lies in recognizing that not all language interactions in healthcare carry the same weight. AI may perform reasonably well in low-risk, structured conversations such as appointment reminders, check-in procedures, or simple billing inquiries. These tasks follow predictable patterns and typically do not carry high emotional or clinical stakes. But once the interaction shifts to discussing symptoms, diagnoses, treatment options, or consent forms, the need for human interpretation becomes non-negotiable. Medical conversations are rarely straightforward. They are emotionally charged, culturally complex, and often require clarification, empathy, and adaptability. These are traits that AI, at this point, simply cannot replicate.
The language provider’s press release boasts that AI interpretation could help save as much as $360 billion annually, based on an estimated cost of $279 per LEP (Limited English Proficient) patient per year. While those figures may impress budget analysts, they obscure a more critical truth: interpreter services represent only a tiny fraction of healthcare spending, often estimated at about 0.5 percent of the cost of a patient visit. In contrast, nearly 30 percent of U.S. healthcare costs stem from administrative overhead. Drug prices are two to three times higher than in other developed nations. Hospitals pay inflated rates for medical devices due to a lack of pricing transparency and competition. Billions are wasted on unnecessary procedures, often ordered out of legal defensiveness, and hospital mergers routinely drive up prices without improving care. Some health insurance CEOs make over $20 million a year, sums that far exceed the annual costs of all interpreter services nationwide. So, if we are serious about reducing healthcare costs, interpreters should not even be in the conversation.
Ironically, the language provider’s research acknowledges that their AI system performs poorly under real clinical conditions, when multiple people are speaking, when ambiguity is present, or when conversations involve sensitive emotional or cultural nuance. These are not edge cases. They are everyday realities in healthcare. When AI misinterprets a patient’s symptoms, mistranslates a consent form, or fails to detect distress, the consequences are not theoretical. They are potentially fatal.
Even the argument that AI helps solve the interpreter shortage deserves scrutiny. What is often described as a shortage is more accurately a systemwide failure to properly value the work of interpreters. Many hospitals underpay interpreters, sometimes offering wages comparable to entry-level administrative jobs, and then wonder why they struggle with recruitment and retention. Rather than treating AI as a substitute for human labor, healthcare systems should invest in competitive pay, professional development, and remote tele-interpretation infrastructure that makes human interpreters more accessible across geographic areas.
AI, as it currently exists, does not fix the shortage. It hides it. It creates the illusion that a critical gap has been filled when, in reality, it allows hospitals to sidestep the hard but necessary work of building sustainable professional interpreter programs. Worse, it reinforces the false narrative that interpreters are a costly barrier to efficiency, when in truth, we are a safety net that prevents costly mistakes. Our presence helps reduce medical errors, prevent avoidable readmissions, avoid legal violations, and build trust between patients and providers. Cutting interpreter services may save money on paper, but it opens the door to a range of expensive problems down the line: malpractice claims, lower patient satisfaction, and increased health disparities.
Yes, there is a place for AI in healthcare. A more thoughtful hybrid model could use AI to handle non-clinical tasks while reserving human interpreters for sensitive and high-stakes conversations. Picture a system where a patient is greeted and registered through AI, then seamlessly transitioned to a live interpreter for a medical consultation. This model could improve access and efficiency without compromising care quality. But for such a system to work, it requires clearly defined boundaries, ethical oversight, and transparency.
Patients also deserve to know when they are speaking to a machine versus a human. This is not a trivial distinction. It affects trust, understanding, and the ability to give truly informed consent, especially in communities already marginalized by language barriers. Healthcare systems must also consider the privacy implications of using AI, especially when these systems handle sensitive personal health information. Are they compliant with data protection laws? Do they inform patients about data usage? These are not footnotes. They are central to ethical care.
What is most disheartening in this current wave of AI enthusiasm is the reduction of interpreters to a cost center rather than recognizing us as trained professionals who ensure safe, equitable, and compassionate care. Our work involves far more than linguistic conversion. We interpret meaning. We mediate culture. We advocate for clarity. We help build understanding during some of the most vulnerable moments in people’s lives. These are not tasks that machines, no matter how advanced, can do reliably, let alone with empathy.
We are not afraid of AI. In fact, many of us welcome tools that enhance our work, such as glossaries, transcription support, and administrative automation. But we strongly reject the notion that we are replaceable in the moments that matter most. Technology should enhance human care, not erode it.
The conversation about language access in healthcare should not be about replacement. It should be about collaboration. It should not be driven by cost reduction at the expense of patient safety. If healthcare leaders want to improve both access and affordability, they must stop looking at interpreters as the problem and start addressing the real causes of healthcare inflation. That means reducing administrative waste, reining in drug and device costs, ending exploitative insurance practices, and investing in the professionals, interpreters included, who deliver the care.
In the end, patients do not just need words translated. They need to be understood. They need to feel safe, respected, and heard. That human connection is what defines quality healthcare. And that is something no algorithm can replace.
Disclaimer: The views and opinions expressed in these blog entries are solely those of the author and do not necessarily reflect the official policy or position of the company. Any content provided by the author is of their opinion and is not intended to malign any religion, ethnic group, club, organization, company, individual, or anyone or anything.