Artificial Intelligence in Healthcare

AI is part of our everyday lives. When you use search engines, navigation apps, online translators, or digital assistants like Siri, you’re using AI. And when you interact with social media platforms, you are not only using an AI algorithm, but you’re also feeding it data that will—in theory—train it to curate content recommendations that match your interests. 

But social media platforms subsist on ad revenue, and advertisers want to see engagement among the user base to justify running paid ads. And content that stokes conflict and division is highly engaging, which led some platforms to prioritize such content in their algorithmic recommendations. If you’re enraged, you’re engaged.
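
To make that incentive concrete, here's a minimal sketch in Python of an engagement-ranked feed. The posts, weights, and scoring formula are all invented for illustration; no platform publishes its actual ranking code. The point is simply that when the objective is predicted engagement, content that provokes comments and shares floats to the top.

```python
# Illustrative sketch only: a feed ranker that scores posts purely by
# predicted engagement. The posts, weights, and formula are invented
# for this example, not any platform's actual algorithm.

posts = [
    {"title": "Local library extends hours", "predicted_likes": 40,
     "predicted_comments": 3, "predicted_shares": 2},
    {"title": "Outrage: city council scandal!", "predicted_likes": 25,
     "predicted_comments": 90, "predicted_shares": 60},
]

def engagement_score(post):
    # Comments and shares signal stronger engagement than likes, so
    # they get heavier weights -- and divisive content wins.
    return (1.0 * post["predicted_likes"]
            + 3.0 * post["predicted_comments"]
            + 5.0 * post["predicted_shares"])

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(f"{engagement_score(post):7.1f}  {post['title']}")
```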

Potential for abuse

This illustrates one of the core concerns with AI: these powerful technologies have enormous potential for abuse when deployed by organizations that place profit over public safety. Now think about autonomous vehicles. Self-driving cars seem like a great idea because a computer can react far more quickly to a dangerous situation than even the best human driver. But how does the AI controlling the car make decisions? Will it prioritize the lives of its occupants above all else? How are those calculations made? Who decides?

Biased data leads to biased AI

Despite their powerful pattern recognition, AI systems don’t always get it right. In 2018, a pedestrian was struck and killed by a self-driving Uber vehicle in Tempe, Arizona, prompting the company to suspend its self-driving test program. The vehicle’s autonomous systems did not recognize the object they detected as a person until it was too late; the AI may have had difficulty classifying a pedestrian pushing a bicycle because the combined shape was unfamiliar.

This illustrates another key limitation of AI. The algorithm is only as good as the data you feed it. Autonomous vehicle AI systems must be trained to recognize a wide variety of road hazards, not just the most common ones. 

Data sets can also reflect the biases of their creators. Commercial facial recognition systems have been notoriously bad at identifying people with darker skin, largely because the datasets used to train them included relatively few women and people of color. Use of facial recognition in law enforcement is even more problematic because of the potential for error, and even outright abuse, in algorithmic surveillance.
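
Here’s a toy demonstration of that failure mode, built entirely on synthetic data: a classifier trained on a pool where one group makes up 95% of the examples picks the decision threshold that fits the majority group, so overall accuracy looks respectable while accuracy for the under-represented group lags badly.

```python
# Illustrative sketch only: a toy classifier trained on data that
# under-represents one group, showing how overall accuracy can look
# fine while minority-group accuracy suffers. All numbers are synthetic.
import random

random.seed(0)

def make_samples(n, shift):
    # Each sample is (feature, true_label). The feature that separates
    # the classes sits at a different point for each group ("shift"),
    # standing in for how image features vary across populations.
    data = []
    for _ in range(n):
        label = random.random() < 0.5
        center = (1.0 if label else -1.0) + shift
        data.append((random.gauss(center, 0.8), label))
    return data

group_a = make_samples(950, shift=0.0)   # well represented
group_b = make_samples(50, shift=1.5)    # barely represented
train = group_a + group_b

# "Training": pick the single threshold that minimizes error on the
# pooled data. The majority group dominates, so the threshold fits it.
candidates = [x / 100 for x in range(-300, 300)]
best = max(candidates,
           key=lambda t: sum((x > t) == y for x, y in train))

def accuracy(data, t):
    return sum((x > t) == y for x, y in data) / len(data)

print(f"threshold        {best:+.2f}")
print(f"group A accuracy {accuracy(group_a, best):.0%}")
print(f"group B accuracy {accuracy(group_b, best):.0%}")
```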

What about medicine?

Medicine, like law enforcement, is a high-stakes context for AI, and the limitations described above apply to healthcare as well. But there are some key clinical areas where AI has made inroads, notably radiology, where algorithms use their remarkable pattern recognition capabilities to flag features of interest that the human eye might miss. Every such finding is reviewed by a radiologist, ensuring that the human factor is always part of the equation.
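
A hypothetical sketch of what that human-in-the-loop pattern might look like in software (the Finding fields, scores, and threshold are all invented): the model only triages, queueing suspicious regions for a radiologist, and never issues a diagnosis on its own.

```python
# Illustrative sketch only: a human-in-the-loop triage pattern for
# imaging AI. The model, scores, and threshold are invented; the point
# is that the algorithm flags regions, and a radiologist makes the call.
from dataclasses import dataclass

@dataclass
class Finding:
    study_id: str
    region: str
    suspicion: float  # model's 0-1 score that the region is abnormal

REVIEW_THRESHOLD = 0.30  # flag liberally: missing a lesion costs more
                         # than asking a radiologist to look twice

def triage(findings):
    """Split model output into a review queue and a low-priority pile.
    Nothing here is a diagnosis; every flagged finding goes to a human."""
    queue = [f for f in findings if f.suspicion >= REVIEW_THRESHOLD]
    rest = [f for f in findings if f.suspicion < REVIEW_THRESHOLD]
    # Highest-suspicion findings are read first.
    queue.sort(key=lambda f: f.suspicion, reverse=True)
    return queue, rest

findings = [
    Finding("CT-1042", "left lower lobe", 0.87),
    Finding("CT-1042", "right apex", 0.12),
    Finding("CT-1057", "mediastinum", 0.41),
]

queue, rest = triage(findings)
for f in queue:
    print(f"REVIEW  {f.study_id}  {f.region}  score={f.suspicion:.2f}")
```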

Beyond this sort of use, AI has seen only limited adoption in clinical practice. Medicine is an inherently conservative space, in the sense that any new technology must be heavily vetted and tested before healthcare professionals (HCPs) will adopt it. Most people don’t trust AI to make any sort of clinical decision. But healthcare leaders do see a role for AI in many non-clinical applications, like streamlining workflows and reducing administrative burdens on HCPs.

Optum’s 4th annual report on AI in healthcare provides a good snapshot of the current state: most healthcare leaders surveyed are bullish on the potential of AI but want knowledgeable partners to help them implement AI solutions. Respondents were also nearly unanimous that ensuring responsible use of AI in their hospitals and clinics is their own responsibility.

What do you think? Is there a larger place for AI in healthcare, especially in clinical applications? Would you ever trust a fully automated system to deliver care? Do you want R2-D2 operating on your loved ones? Let us know!

Altay Akgun

Altay Akgun is our Associate Creative Director and resident bike nerd. He lives in the Philly suburbs with a bunch of kids and pets and bikes and things.
aakgun@m-health.com
