By Sarah Elkins for For the Record
Following a study that found front-end speech recognition has failed to increase physicians' job satisfaction, experts contemplate how to rectify the situation.
In May, KLAS Research published the findings of a 12-month evaluation of organizations with high adoption rates of front-end speech recognition tools. The study focused on three leading vendors: Dolbey, M*Modal, and Nuance.
The key finding from the report was that high adoption does not lead to increased physician fulfillment. According to the report authors, “Nearly all of these organizations say that speech tools have had an impact on overall physician satisfaction, yet when pressed for details, most point to satisfaction with the speech tools themselves and not specifically to impact on physician job fulfillment or burnout.”
While speech recognition is not the physician burnout panacea early proponents had hoped it would be, the report does point to a best practice for successful adoption. In short, effective implementation of front-end speech recognition is achieved through training, training, and more training. Across the board, customers cited robust and ongoing training programs as the secret to their high adoption rates.
“We train the product very well,” says one Dolbey customer. An M*Modal customer described their organization’s process for helping users move beyond basic use to mastery. A Nuance customer shared the different training environments they employ, from small group to one-on-one and even customized one-on-one training options.
Among the organizations surveyed for the KLAS report, there was a wide array of adoption methods. Some organizations took a more passive approach, making speech recognition tools available to physicians who wanted them while also allowing physicians to keep transcription services. Other organizations mandated adoption of front-end speech recognition and attributed their success to that hardline leadership.
KLAS evaluated four products for its 2018 findings: two cloud-based and two server-based solutions. M*Modal’s Fluency Direct and Nuance’s Dragon Medical One represented the cloud-based market, while Dolbey’s Fusion SpeechEMR and Nuance’s Dragon Medical NE represented the server-based market.
Additionally, the report sought to follow a line of inquiry begun in a 2014 KLAS report that raised the question of whether the development of cloud-based solutions would lead to wider adoption and more consistent outcomes for providers.
The 2018 report did not include mention of Dolbey's cloud-based solution Fusion Narrate, which was introduced to the market at the 2018 HIMSS Annual Conference in March, two months prior to the KLAS publication. The question remains how Fusion Narrate will stack up against Fluency Direct and Dragon Medical One. Perhaps a future report will show Dolbey's performance in key indicators, including "Would You Buy Again" and "Part of Long-Term Plans," to be on par with its cloud-based competitors. At the time of the report's publication, Dolbey was trailing those competitors in these two areas.
Similarly, Nuance’s cloud-based Dragon Medical One outperformed its server-based sister product Dragon Medical NE, according to customer feedback. Had the timing of Dolbey’s product launch and KLAS’s research aligned, it seems likely the same would be true when comparing Fusion SpeechEMR and Fusion Narrate. That, however, will have to wait for the next round of research.
Physician burnout and work dissatisfaction are growing industry concerns. Countless surveys have depicted a profession lamenting the state of the health care industry, citing excessive regulations and other noncare-related issues. Unfortunately, according to corporate leaders and industry professionals, it may be too tall an order to expect technology to make a dent in, let alone solve, such a broad and subjective problem.
According to Tim Ruff, vice president of solutions management at M*Modal, “It’s a heavy burden to task technology with being solely responsible for improving physician satisfaction, but it can—and does—play a huge role.”
Ruff points to several promising technologies, including speech recognition, computer-assisted physician documentation, mobile applications, and virtual assistants, as potential game-changers.
For Punit Soni, CEO of Suki, a digital assistant for physicians that employs artificial intelligence (AI) and speech recognition, physician burnout will never be addressed with a narrow scope. “The solution isn’t technology,” he says. “The question is: What is the product you’re going to build?”
Like Soni, Ruff suggests the real impact on physician fulfillment will not be made by any one technology. “Cumulatively, these benefits are bound to significantly improve both physician and patient satisfaction,” Ruff says.
The irony in addressing physician burnout with improved technology is that the root cause of the problem is technology. Many physicians long for the simple days of patient care, before they were forced onto the EHR, before meaningful use, even before outsourced transcription. Slow adoption rates for new technologies may, in part, be grounded in physician mistrust.
According to an Annals of Internal Medicine publication, “Allocation of Physician Time in Ambulatory Practice: A Time and Motion Study in 4 Specialties,” for every hour physicians provide direct clinical face time to patients, nearly two additional hours are spent on EHR and desk work within the clinic day. Outside office hours, physicians spend another one to two hours of personal time each night doing additional computer and other clerical work.
Despite the prevailing sentiment toward technology, the hope is as tools such as speech recognition improve, satisfaction will follow suit.
“There’s this whole tide changing with physicians embracing technology. They can’t go back to a scribbled piece of paper. Even though it was easy and good for them, it wasn’t good for the patient,” says Pamela Gratzer, president of Mindware Connections. “Doctors were very spoiled for a long time. All they did was pick up a phone or pick up a recorder and talk. It could be garbled, they could be eating lunch, chewing, they could be doing other stuff and they would never proof the note. A lot of times it was just poor documentation.”
Tammy Seithel, director of business development at Dolbey & Company, says, "I always relate it to if I had the choice of cleaning my own house or paying someone to clean it for me, I'd much rather have someone else clean it, as far as having transcription support."
The hope is that technology that can run in the background can solve the satisfaction problems it created.
The refrain repeated throughout the KLAS Research report was that upfront and ongoing training was instrumental in the successful adoption of speech recognition solutions.
“We not only provide training for physicians, at the elbow if we need to, but we also train [provider staff] in how to support their physicians,” says Bob Leslie, senior vice president/general manager at Dolbey & Company. “If [providers] put together a good support group, they’ll be successful with speech recognition.”
Like Dolbey, M*Modal emphasizes training. “We take a high-touch approach that is customized to the needs of each individual physician,” says Paula Pasquinelli, vice president of adoption and implementation services at M*Modal. “Our adoption experts develop strong collaborative relationships with assigned doctors by supporting the go-live period, proactively following user progress, and monitoring for opportunities to optimize usage as well as documentation quality.”
Gratzer, whose company has been a reseller of Nuance since 2002, offers training and customer support for Dragon customers in the Northeast and beyond. She takes a broader look at the importance of training. With a corporate background, Gratzer trains her clients not just on the technology but also on basic time management.
“Putting some time into technology up front, in training or setting up the macros, doing your homework, and getting a good foundation saves you a ton in the long run,” she says. “It’s not even just speech recognition, it’s about how to be most efficient in the day.”
Organizational leadership also plays a role in the success of speech recognition adoption. The KLAS Research report reflected a wide spectrum of approaches, from “It’s here if you want to use it” to mandated adoption.
Gratzer says clients who have seen the greatest success with their speech recognition programs boast strong leadership with a clear directive. She explains that for the most successful clients, “It was a term of employment. You will be using this. You are giving up your digital recorder after you’ve been trained on Dragon.”
Gratzer says any organization that isn’t mandating speech recognition adoption hasn’t taken a good look at its bottom line. For her, the numbers are clear: A monthly transcription service for just one physician can cost thousands of dollars, while front-end speech recognition pays for itself almost immediately.
On the other hand, Seithel attributes adoption success to supportive leadership and the ability to see the whole picture. “The sites that we see good success with are the ones that have in place top-down support and they have a plan to transition,” she notes. “If you’re going to replace transcription, what other processes and functions do those transcriptionists do?”
Seithel says organizations can run into problems when an entire transcription department is eradicated without realizing the transcriptionists were doing much more than transcription. She warns that leadership must be aware of all the moving parts.
Leslie points to the importance of a holistic approach, beginning with leadership, noting that problems arise “when people are just looking at a product and not looking at a whole solution and how they implement the product. They think if they buy speech recognition, they give it to the docs, now they’re on to the next project. They don’t necessarily provide the support systems that they need for those physicians to handle that documentation.”
A Better Note
Proponents of speech recognition find that, beyond the comparative affordability and time savings of the technology, the most valuable byproduct is a better note. Front-end speech recognition captures the physician’s words at the point of care and enables the physician to edit errors immediately as opposed to days later when the record has been returned from transcription. Furthermore, proponents argue a spoken note is better than clicking a series of boxes within an EHR.
“You see that it’s a good note. It’s a patient note with a soul, not a point and click,” Gratzer says. “Every patient has a unique narrative; they have their own story. It has got to be captured.”
Leslie agrees that speech recognition can lead to a better note, but quality control is still a necessary ingredient. “When [Dolbey] started in this business, the competition was the shorthand secretary,” he says. “I can do it faster by dictating rather than having somebody dictate, then transcribe it, and then me approve it. In the long run, [speech recognition] lets the physicians be more complete in their dictations, but, again, you have to have some quality program in place to make sure that that happens.”
Seithel adds, “The notes are more robust over typing for sure, and you get away from all of those texting short cuts that folks use in the note that you don’t want.”
Ruff goes further to explain that the note is also more technically complete. “Deficiencies in documentation from a coding and clinical documentation improvement perspective are identified immediately so that the physician is no longer required to be interrupted at a later date to address inadequacies in the clinical note,” he says.
AI and What’s Next
The speech recognition industry is looking ahead. AI is already being deployed to improve outcomes, and each of the companies highlighted in the KLAS report is actively working on next-generation platforms.
For example, Dolbey anticipates a new product launch this year. “With the new release of our Fusion Narrate product, we’ve also done some work for computer-assisted physician documentation that we’ll be releasing later on this summer,” Leslie says. “It will tie into speech recognition. As a physician is dictating, it can prompt them for things they may have left out of the dictation or weren’t specific enough. We use some AI for that.”
Meanwhile, Nuance continues to add to its current product. “They’re invested in making this a platform. It’s constantly being improved; new words are being put in. You don’t have to touch anyone’s computer. It’s just done with an upload,” Gratzer says.
Disruptors like Soni at Suki are working to predict where the market will be in a decade. “[Speech recognition] technology is available to almost anyone. Anyone will be able to do that soon. The technology needs to understand what you meant and automatically generate all the things that need to happen downstream. The vision is to build a new health stack for doctors that is invisible. They won’t need to interact with it,” he says.
“It is a tremendously exciting time for speech recognition and related technologies in health care,” Ruff says. “At M*Modal, we are accelerating innovation in conversational AI to meet market need and drive our mission of creating time to care for doctors.”
Like Soni, Ruff envisions noninvasive technologies in health care’s future. “The goal is to make documentation a byproduct of the patient-physician encounter and not a separate, burdensome task for the doctor,” he says.
“The bar is really high,” Soni acknowledges. Indeed, billions of dollars and the brightest minds are barreling toward a near future driven by AI in the hope physicians can return to the human element of medicine.