Separating AI Hype from AI Reality: 3 Considerations for Healthcare Leaders

When ChatGPT shot to 100 million monthly active users in just two months, leaders across industries took notice—and so did digital health vendors. Generative artificial intelligence (GenAI) went from comic books and movies to real-world experimentation in record time.

Seemingly overnight, Epic piloted GenAI at large health systems via its EHR, while Boston Children’s Hospital planned to hire a “prompt engineer” to work with large language models. The White House hosted leaders from across the healthcare landscape to draft principles for safe use of AI in health.

But as AI innovation advances at a blistering pace, all healthcare leaders must ask themselves: “How should we put AI into play—and where do we begin?”

It’s an important question, not only because AI in healthcare is still relatively new, but also because it’s challenging to distinguish the AI tools that are flashy from the ones that are most effective: the proverbial “sizzle” from the “steak.” A garden spade, a shovel, and a hydraulic excavator all perform the same function of digging dirt, yet only two of those tools belong in your garden. Which is to say, AI, and GenAI in particular, is not yet an out-of-the-box solution; rather, it is a set of purpose-built tools that can be leveraged to improve specific activities and workflows.

To make the right AI investment decisions, healthcare leaders must prioritize outcomes over methods to separate reality from hype.

Exploring AI Capabilities in Healthcare

Throughout my career, I’ve seen engineers leverage new technologies in meaningful ways countless times. Relational databases and cloud computing have helped manage the cost curve of operations at scale; natural language processing, a type of AI, now reviews complex medical records for analysis in clinical trials. These technologies are changing healthcare for the better. When applied to use cases such as accelerating clinical research, innovations like these undergo significant scientific rigor to ensure they deliver their intended value, which is not an easy process.

Foundationally, AI holds tremendous potential in healthcare, but it’s important to ensure GenAI is used in the right ways to achieve the right outcomes. Recently, I launched five instances of the same publicly available GPT at the same time and asked each the same question: provide a list of codes that can be used to report a physical exam in the United States. I received five materially different answers, ranging from “I can’t help with that” to an exhaustive list of CPT, Level II HCPCS, and ICD-10 codes. While this was a non-scientific test of a free-trial product, it is a powerful reminder that the inherent design of this tool is adaptive rather than prescriptive, which adds a new level of complexity to productizing and operationalizing the technology.
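A variability check like the one described above can be scripted. The sketch below is purely illustrative: `ask_model` is a hypothetical stand-in for whatever GPT API an organization actually calls (a live model may return different text on every call), and the canned responses simulate the kind of divergence observed in the experiment.

```python
from collections import Counter

def ask_model(prompt: str, trial: int) -> str:
    """Hypothetical stand-in for a live GenAI API call.
    The canned answers simulate the nondeterminism a real model exhibits."""
    canned = [
        "I can't help with that.",
        "Use CPT 99381-99397 for preventive exams.",
        "Use CPT 99381-99397 for preventive exams.",
        "Report with CPT, Level II HCPCS, or ICD-10 codes as appropriate.",
        "Use CPT 99381-99397 for preventive exams.",
    ]
    return canned[trial % len(canned)]

def answer_variability(prompt: str, trials: int = 5) -> dict:
    """Ask the same prompt several times and tally the distinct answers."""
    answers = [ask_model(prompt, t) for t in range(trials)]
    counts = Counter(answers)
    return {"distinct_answers": len(counts), "counts": dict(counts)}

result = answer_variability(
    "List the codes used to report a physical exam in the United States."
)
print(result["distinct_answers"])  # 3 distinct answers across 5 trials
```

A check like this, run against a live endpoint, gives a quick read on how consistent a tool’s answers are before it goes anywhere near a production workflow.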

Perhaps it’s no surprise, then, that one out of every four healthcare leaders believes the pace of AI adoption is occurring too rapidly.

That’s why it is critical that healthcare leaders approach AI investment with an eye toward right-now value. Here are three key considerations in determining which AI solutions to add to your organization’s toolbox.

1. Look for low-risk areas to experiment with first.

While advancements in machine learning have been shown to enhance population health management, this type of AI innovation might not be the first tool a physician group chooses to adopt, particularly if a practice doesn’t have the staff capacity to support implementation. Instead, consider innovations that save time for patients or staff and help deliver better experiences without a heavy lift. For example, leveraging advanced analytics to segment populations based on demographic or health data can position an organization to target messages to specific patient groups, such as those with chronic conditions. From there, organizations can experiment with GenAI tools to craft effective calls to action in the format each patient prefers (e.g., text, email, phone). This personalized approach has been shown to reduce no-shows, speed patient throughput, and increase patient engagement, satisfaction, and out-of-pocket collections, while automating these processes removes the burden from practice staff.
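The segment-then-personalize workflow described above can be sketched in a few lines. This is a minimal illustration, not a real implementation: the patient records, field names (`conditions`, `preferred_channel`), and message templates are all hypothetical, and the template function stands in for where a GenAI tool would generate the call to action.

```python
# Hypothetical patient records; field names are illustrative only.
patients = [
    {"name": "A", "age": 67, "conditions": ["diabetes"], "preferred_channel": "phone"},
    {"name": "B", "age": 34, "conditions": [], "preferred_channel": "text"},
    {"name": "C", "age": 52, "conditions": ["hypertension"], "preferred_channel": "email"},
]

def segment_chronic(records):
    """Analytics step: select patients with at least one chronic condition."""
    return [p for p in records if p["conditions"]]

def draft_outreach(patient):
    """Personalization step: in practice a GenAI tool might draft this
    message; a simple template per preferred channel stands in here."""
    templates = {
        "text": "Hi {name}, reply YES to book your follow-up visit.",
        "email": "Dear {name}, please schedule your follow-up at your convenience.",
        "phone": "Call script: remind {name} to book a follow-up visit.",
    }
    return templates[patient["preferred_channel"]].format(name=patient["name"])

for p in segment_chronic(patients):
    print(draft_outreach(p))
```

The key design point is the separation of concerns: deterministic analytics decides *who* to contact, while the generative step only shapes *how* the message reads, keeping the riskier component out of the targeting decision.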

2. Know what the proposed AI tool can do—and what it can’t.

People ask all the time, “How come scheduling an appointment with your physician isn’t as easy as making a restaurant reservation?” The answer is, “There’s a lot more at stake.” A physician’s office needs to know whether a patient has been seen previously by the clinical team; whether the type of visit is the same as last time or involves a new concern; whether the patient’s health has changed since their last visit; and whether the primary method of payment (insurance) remains the same. It would be akin to a restaurant asking whether a guest intends to eat the same meal as last time, which chef cooked their meal, and whether they’ve been diagnosed with any food allergies since their last visit. For these reasons and more, unrestrained GenAI is not currently equipped for complex patient scheduling scenarios. However, there are ways to materially improve the ease and efficiency of scheduling, enabling patients to self-schedule from their phone as easily as making a reservation at their favorite eatery.
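The checks listed above lend themselves to deterministic rules rather than generative output. The sketch below is a hypothetical illustration, not a real scheduling API: it gates self-service booking on the four questions the paragraph names and routes everything else to staff review.

```python
from dataclasses import dataclass

@dataclass
class SchedulingRequest:
    """Hypothetical scheduling request; fields mirror the four questions
    a physician's office needs answered before booking."""
    established_patient: bool   # seen previously by the clinical team?
    same_visit_type: bool       # same visit type as last time, or a new concern?
    health_changed: bool        # health status changed since the last visit?
    same_insurance: bool        # primary method of payment unchanged?

def can_self_schedule(req: SchedulingRequest) -> bool:
    """Allow direct self-scheduling only for the simple, low-risk case;
    anything else is routed to staff for review."""
    return (req.established_patient and req.same_visit_type
            and not req.health_changed and req.same_insurance)

routine = SchedulingRequest(True, True, False, True)
new_concern = SchedulingRequest(True, False, True, True)
print(can_self_schedule(routine))      # True
print(can_self_schedule(new_concern))  # False
```

Because rules like these are explicit and auditable, they can deliver the restaurant-reservation ease the paragraph describes without handing the clinical triage decision to an unrestrained generative model.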

3. Make sure any AI innovation your organization adopts has been tested at scale for use in healthcare.

One of the biggest areas of concern around AI innovation in healthcare is: “Where does liability live?” If ChatGPT offers an answer to a question and gets it wrong, who’s at fault? This is the focus of an ongoing industry debate. As a product engineer, it’s one thing to take responsibility for algorithms my team writes. It’s another to do so for AI algorithms that reside in a black box owned by another vendor that we’re unable to effectively examine. Even in instances where liability has been established, it’s important to verify that the application has been rigorously tested for use in healthcare and demonstrates quantifiable results with minimal false positives and false negatives.

By emphasizing substance over shine, leaders can make informed AI investment decisions for long-term value. Understanding organizational risk tolerance, clinical and operational priorities, change management demands, and the technical capacity to innovate is essential on the road to AI.