Are You Moving the CDI Needle?

Three questions to consider when evaluating a clinical documentation improvement program.

By Lisa A. Eramo, MA, for For the Record

Review the record. Query the physician. Obtain the diagnosis. Repeat. Does this clinical documentation improvement (CDI) workflow sound familiar?

Productivity is the hallmark of a good program. Or is it? On the surface, CDI specialists take the proper steps, but do their actions ultimately translate into documentation that reflects the most accurate clinical picture?

Experts agree that establishing clear and consistent performance metrics is the only way to find out. Evaluating a CDI program not only helps justify return on investment but also pinpoints opportunities for education and process improvement and perhaps even justifies the need to hire additional CDI specialists.

Organizations should consider three questions when evaluating CDI effectiveness.

Are CDI Performance Metrics Being Monitored Consistently?

All organizations should measure the following seven program metrics from the outset of any CDI effort and throughout the life of the program:

Query rate/volume. Definition: Of the cases reviewed by CDI specialists, how many include a query?

Before measuring this metric, decide how the organization will calculate it, says Fran Jurcak, MSN, RN, CCDS, vice president of clinical innovation at Iodine Software. For example, will it count the total number of cases that include a query, or will it count the total number of queries per case? Counting individual queries more accurately indicates how much time CDI specialists spend reviewing each case, Jurcak says.
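As an illustration of that distinction, the sketch below computes the metric both ways. The case data and field names are hypothetical, not output from any CDI system.

    # Hypothetical illustration of the two query-volume calculations described
    # above; the case data and field names are invented for the example.
    reviewed_cases = [
        {"case_id": "A101", "queries": 0},
        {"case_id": "A102", "queries": 2},
        {"case_id": "A103", "queries": 1},
        {"case_id": "A104", "queries": 3},
    ]

    # Option 1: count cases that include at least one query.
    cases_with_query = sum(1 for c in reviewed_cases if c["queries"] > 0)
    query_rate = cases_with_query / len(reviewed_cases)  # 3 of 4 cases = 75%

    # Option 2: count total queries per case reviewed, which better reflects
    # how much time CDI specialists spend on each case.
    queries_per_case = sum(c["queries"] for c in reviewed_cases) / len(reviewed_cases)  # 6 / 4 = 1.5

    print(f"Query rate (cases queried): {query_rate:.0%}")
    print(f"Query volume (queries per case reviewed): {queries_per_case:.2f}")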

Remember that it’s not only about the number of queries, says Glenn Krauss, RHIA, BBA, CCS, CCS-P, a senior consultant at Federal Advisory Partners, who notes that organizations must also examine the clinical validity of those queries. Do the queries actually improve the quality of the documentation so that it most accurately reflects patient severity? Krauss defines quality documentation as having the following four attributes:

  • valid chief complaint;
  • physical exam and assessment, both of which are congruent with the history of present illness;
  • definitive or provisional diagnosis with appropriate specificity; and
  • plan of care that’s congruent with the assessment.

“To me, that’s CDI—progress notes that tell the progress of the patient,” Krauss says.

If organizations intend to use the query rate/volume as a barometer of performance, they must ensure that all CDI specialists know when a query is appropriate, Jurcak says. “People are very subjectively making decisions about what they’re going to query as opposed to saying, ‘I want documentation integrity across the board regardless of how many queries are needed on an individual case,’” she says.

When CDI specialists don’t pose queries consistently, it becomes difficult to rely on the query rate/volume as a metric for effectiveness, Jurcak says. Posing queries inconsistently also sends mixed messages to physicians. For example, a CDI specialist may decide not to query for heart failure because the case has already reached the maximum severity of illness and risk of mortality. “You send a message to the provider that you query for heart failure only when it matters [for reimbursement], and then you wonder why providers don’t comply,” Jurcak says.

Review rate. Definition: Of the total number of cases, how many did CDI specialists review?

Don’t be fooled into thinking that a high review rate indicates an effective CDI program, Jurcak says. “You have to know that your staff are reviewing records, but you don’t want them spinning their wheels on cases that don’t benefit from CDI,” she explains. “It’s about reviewing the right records every day.”

Leveraging artificial intelligence to prioritize cases can help, says Jurcak, who also suggests manually removing from the workflow cases that typically offer limited documentation opportunities (eg, elective joint replacement surgeries with a length of stay of fewer than three days, or first-day admissions for which a physician hasn’t yet documented a history and physical).

Jurcak, who was a nurse before moving into CDI, views performance metrics differently now that she works for a technology company. Organizations shouldn’t strive to review 100% of their cases because many won’t benefit from CDI, she explains. In fact, organizations leveraging technology to prioritize cases for CDI may ultimately witness a decrease in their review rate but an increase in their query volume, Jurcak says.

Response rate. Definition: Of the cases queried, how many elicited a physician response?

Physician responses to queries are critical; however, Jurcak says organizations must take a closer look at the type of responses they receive. A high response rate doesn’t mean an organization is necessarily improving documentation quality.

For example, many programs use query templates that automatically provide the option of “unable to determine” or “other.” When physicians check one of these nonspecific boxes, they’ve technically responded, but they may not have provided additional information to improve the quality of care or documentation specificity. Rather than default to these options on every template, consider including them only when necessary, Jurcak says.

For example, when a CDI specialist poses a compliant query with appropriate clinical evidence, it may not make sense to provide an answer of “unable to determine.” Jurcak says providing this option allows an easy out for physicians who don’t understand the query or who aren’t willing to take the time to accurately document the conditions being monitored and treated.

Krauss cautions organizations using this metric to consider the following question: Even when a query yields a codable diagnosis, does the documentation enable the organization to defend that diagnosis in the event of an audit? “If the clinical information, facts of the case, and context surrounding the diagnosis do not paint a picture of acuity in support of the diagnosis, the fact that the diagnosis is charted by the physician as a direct result of a query serves very little, if any, purpose,” he says. “The outside reviewers will simply refute the diagnosis and remove from the claim, thereby downcoding the diagnosis-related group (DRG).”

Agreement rate. Definition: Up for debate.

Experts say there is no consensus within the industry on how to define this metric. Some organizations say agreement occurs when a physician provides a codable diagnosis rather than stating the clinical indicators aren’t relevant. Others say agreement occurs when a physician provides the anticipated or assumed diagnosis. A third interpretation is that agreement occurs when the physician agrees with the query—even when appropriate documentation is absent from the medical record.

The absence of a uniform definition makes nationwide program comparisons nearly impossible, Jurcak says. This metric is meaningful only when organizations take the time to formally define it—and then train staff on how to report data consistently, she says.

Complication or comorbidity (CC) or major complication or comorbidity (MCC) capture rate. Definition: Of the cases queried, how many yielded a CC and/or MCC?

Theoretically, the CC/MCC capture rate should increase as CDI efforts are initiated, says Amber Sterling, RN, BSN, CCDS, director of CDI services at TrustHCS. However, organizations shouldn’t assume that a low CC/MCC capture rate equates to ineffective CDI. In some cases, CCs and MCCs may simply be absent in the population.

Case-mix index (CMI). Definition: What is the average relative weight of all DRGs reported during a defined period of time?

In theory, the CMI should increase as CDI specialists capture additional CCs and MCCs. However, there are other factors that can influence CMI, such as the volume of surgical patients, removal of a service line, and the seasonality of certain diagnoses—none of which CDI specialists can impact using queries, Sterling says.
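For reference, a minimal sketch of the calculation behind this definition follows; the relative weights are made-up numbers, not actual CMS DRG weights.

    # Case-mix index: the average relative weight of all DRGs reported during
    # a defined period. The weights here are invented for illustration only.
    drg_relative_weights = [0.9123, 1.7387, 2.3811, 0.8695, 1.2056]

    cmi = sum(drg_relative_weights) / len(drg_relative_weights)
    print(f"Case-mix index: {cmi:.4f}")  # 1.4214 for this sample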

Financial impact. Definition: How does the working DRG compare with the final-coded DRG?

The challenge with this metric is that staff assign impact inconsistently, Sterling says. “It seems relatively simple, but there are a lot of gray areas,” she says. “You see a lot of variance in CDI staff practice. It takes diligence by the program managers to continually educate and audit their team.”

For example, will the organization count the financial impact anytime a CDI review yields a CC or only when the review yields a CC that’s the only CC on the case (thus shifting the DRG)?

“If you’re saying there’s a dollar impact on this case, the case must meet your standards for how you’re reporting impact,” Sterling says. “If you report $20 million of impact, but then you find out later there was an error on how things were reconciled and it was actually $12 million, your C-suite is not going to appreciate that. I’ve seen it happen. It can be significant.”
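A minimal sketch of the kind of impact rule Sterling describes appears below, assuming the organization counts impact only when the queried CC/MCC actually shifts the DRG; the weights and base rate are hypothetical.

    # Hypothetical financial-impact calculation: compare the working DRG (before
    # the query) with the final-coded DRG and report impact only when the
    # captured CC/MCC actually shifted the DRG. All figures are invented.
    base_rate = 6000.00           # assumed blended base payment rate
    working_drg_weight = 1.0112   # relative weight before the queried CC was documented
    final_drg_weight = 1.4550     # relative weight after the CC was documented and coded

    if final_drg_weight != working_drg_weight:
        impact = (final_drg_weight - working_drg_weight) * base_rate
    else:
        impact = 0.0              # the CC did not shift the DRG, so no impact is reported

    print(f"Reported CDI impact: ${impact:,.2f}")  # $2,662.80 in this example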

Have Metrics Evolved as CDI Priorities Change?

In the past, CDI programs were focused on queries that directly increased reimbursement. Now, some programs have expanded that scope to include queries that impact quality and risk adjustment—two big factors in value-based purchasing. But have program performance metrics evolved to reflect these new goals?

Not quite, Jurcak says. “We talk about the increased scope of practice, but then we still hold our employees accountable to the same metrics we used 10 years ago,” she says.

Krauss agrees. “The problem with programs today is that they’re based on invalid and unreliable measures of CDI. The fact that you touched a record, left a query, and received a documented clinical condition in the chart doesn’t necessarily mean you have improved documentation,” he says. “Solidifying a diagnosis in and of itself does not constitute CDI. What really matters is the quality and completeness of documentation that best communicates the patient care.”

Why haven’t programs moved beyond the basic key performance indicators? Krauss says it all goes back to revenue. “Can you sell a program to a CFO based on the quality of the documentation? You can’t,” he says.

Hospital CFOs need to understand the long-term effects of documentation, says Tiffany McCarthy, RHIT, manager of HIM solutions at GeBBS Healthcare Solutions. Even if organizations gain revenue in the short term, what happens when documentation indicates that outcomes are consistently poor? Insurance base rates could decrease, causing a loss of millions of dollars the following year, McCarthy says.

It’s irresponsible for organizations to continue to rely on traditional key performance indicators during the shift to value-based reimbursement, Krauss says. Episodic reimbursement demands improved documentation quality across the board, not simply a shift in a single DRG, he says.

Organizations also must contend with publicly available quality ratings. When consumers see that an organization has poor outcomes, they may seek care elsewhere, McCarthy says. “We’re in the age of information, and more and more people are using this information when they seek care,” she says.

How should today’s CDI programs define success? Krauss provides the following metrics:

  • low rate of hospital-acquired conditions;
  • low rate of patient safety indicators; and
  • decreased medical necessity denials (ie, denials due to insufficient documentation).

Organizations with quality-driven CDI programs also can measure success through a lower conversion rate from observation to inpatient status, Krauss says. That’s because these programs drive quality documentation from the outset of the patient encounter, helping to establish a reasonable expectation that the patient will stay at least two midnights.

Do Analyses Take a Deeper Dive Into the Data?

Drilling down into the data provides organizations with the insights they need to drive process improvement. This includes burrowing into data by facility, specialty group, individual physician, and coder, says Sterling, who recommends examining CC/MCC capture rates by payer.

“You don’t want to have your staff spend their time on things that are not going to provide results,” Sterling says. Identify where CDI specialists have the most impact and then streamline CDI efforts accordingly, she adds.

FIVE EVALUATION MISTAKES TO AVOID

According to HIM industry experts, health care organizations must be on the lookout for the following pitfalls when evaluating their clinical documentation improvement (CDI) programs.

Failure to Look Beyond the Data

Data tell a story, but do they tell the entire story? Not necessarily, which is why organizations need to examine anecdotal data as well, says Tiffany McCarthy, RHIT, manager of HIM solutions at GeBBS Healthcare Solutions. For example, ask physicians, CDI specialists, and coders to evaluate whether everyone works collaboratively. If not, what are the challenges and hurdles?

Fran Jurcak, MSN, RN, CCDS, vice president of clinical innovation at Iodine Software, agrees that interaction is key. “There’s still that level of engagement at the physician level that needs to occur. There still needs to be communication between CDI specialists and coders,” she says. “Helping each other to understand the clinical and coding issues related to appropriate documentation in a medical record is key to success for the organization.”

Assuming Increased Revenue Equates to Success

Although increased revenue may be the goal of a CDI program, organizations should consider how they compare with other facilities, says Amber Sterling, RN, BSN, CCDS, director of CDI services at TrustHCS. What do PEPPER and MedPAR data reveal? How does the organization compare with state, regional, and national averages? Does the analysis indicate any data anomalies that could raise a red flag with auditors?

Failure to Formalize Metrics in a Policy

Jurcak says a program policy should include the following:

  • specific metrics;
  • the definition of each metric;
  • clarification of the relationship between these metrics and the CDI program’s mission; and
  • benchmarking sources (eg, professional associations, MedPAR data, and internal averages).

Creating a policy ensures all CDI staff report data consistently. It also helps organizations compare themselves with other facilities, Sterling says. For example, if the organization calculates query volume as the number of queries per case, it doesn’t make sense to compare this number with a facility that uses the number of cases queried.

Misaligning Metrics and Mission

The CDI mission refers to the program’s overarching goal. For example, is it to increase reimbursement? Improve quality? Both?

Once defined, the mission can help organizations choose appropriate metrics to measure performance. Financial impact, for example, becomes less important when the mission is to improve quality, Jurcak says. “If you truly understand the mission, you can better hold your staff accountable,” she notes.

Reviewing Metrics Too Frequently or Too Infrequently

Daily reviews don’t provide insight into larger trends, while waiting too long between reviews delays course corrections. Jurcak says analyzing metrics monthly allows for quick adjustments.

However, don’t expect immediate improvements after an intervention. “Instant change isn’t going to necessarily result in instant benefits or improvements in the metrics next month. You have to watch it over time,” Jurcak says.

— Lisa A. Eramo, MA, is a freelance writer and editor in Cranston, Rhode Island, who specializes in HIM, medical coding, and health care regulatory topics.
