Telehealth Costs: RAND's Questionable Rant

Introduction

Treatment for non-emergency illnesses via anytime access to telehealth services such as Teladoc, American Well, MDLive, Doctor on Demand and First Stop Health is growing rapidly. A core value proposition of these services is that lower cost virtual doctor visits redirect care away from higher cost options such as the emergency room (the E.R.), urgent care centers and doctor offices. This value proposition reflects the findings of studies by Harvard Medical School affiliated researchers, as well as surveys conducted by telehealth providers at the time patients access care.

Indeed, a series of studies undertaken for Teladoc by Veracity Analytics and Dr. Niteesh Choudhry, a professor and researcher at Harvard Medical School, concluded that, on average, each telehealth visit results in cost savings ranging from $518 to $717. These same analyses consistently find that the cost savings associated with patients initially treated by Teladoc increase substantially in the month following the initial treatment. These higher savings, captured through a 30-day “incident-of-care” analysis, reflect findings that patients with a given illness who use telehealth seek care sooner than patients who go elsewhere, and are therefore less sick and require less follow-up care.

Recently, Health Affairs released a study of telehealth usage and costs that challenged these conclusions. The study was undertaken by two policy researchers at the RAND Corporation, an associate professor at Harvard Medical School (who was formerly affiliated with RAND as well), and a research scientist at CalPERS. They concluded that 88% of telehealth usage reflected new (and seemingly unnecessary) utilization of healthcare services, as opposed to the redirection of care, and that telehealth users increased spending by $45 per person for the conditions involved. The researchers hypothesized that the convenience of telehealth may lead to higher costs and apparently unnecessary usage, concluding, “There may be a dose response with respect to convenience and utilization: the more convenient the location, the lower the threshold for seeking care and the greater the utilization may be.”

This RAND Corporation study, which questioned long-held assumptions about the cost savings associated with telehealth, was widely reported, with articles in publications such as Medscape, MedCityNews, Becker’s Hospital Review, MobiHealthNews, Wired (online), and The Boston Globe (online).

However, a careful examination of this RAND study raises serious questions about the legitimacy of its conclusions. Several aspects of the study design are inconsistent with an earlier RAND study, published in Health Affairs in early 2014 by two of these same researchers, and the differences appear inexplicable. In fact, as detailed below, there appear to be so many serious flaws in this study that two questions inevitably arise: Did the researchers bias the study design in order to produce a predetermined negative outcome? And do the findings of the study have any validity?

Background and Full Disclosure

Several months ago I became actively involved in the telehealth industry. I am the founder of TelehealthWorks.net, which focuses on the value of telehealth programs for employers of all sizes. In addition, I have previously written about the potential for telehealth services to cut healthcare costs for employers and employees, where I describe telehealth as a “win-win” solution for employee benefit managers: a benefit that’s good for employees (by leading to better health outcomes and lower employee expenses) and good for employers (by significantly cutting annual healthcare expenses and increasing employee health, happiness and productivity).

So, I am far from an unbiased analyst. At the same time, when I first became involved in telehealth, I carefully reviewed all of the publicly available literature, and I continue to do so. It’s this familiarity with other analyses that enables me to provide the perspective of this article. Finally, I have not discussed this critique of the RAND study with anyone at any of the large telehealth providers or with the researchers who undertook the study.

An Important Caveat

This article addresses the validity of the new RAND analysis and the value of this RAND study for today’s policy-makers and employers. However, the study implicitly raises a larger issue: It presumes the employee group studied was receiving the appropriate care prior to the introduction of telehealth, with employee health outcomes, costs for employers and employees, and employee satisfaction appropriately balanced. This issue is not addressed in this article. But, it is highly relevant to the larger discussion of how telehealth fits into our evolving national healthcare delivery system.

I.) Inconsistencies with RAND’s 2014 Telehealth Study

In 2014, two of the researchers who prepared this recent study published an earlier study in Health Affairs, examining the use of telehealth over much of the same time period (2012 and 2013) by the same employee group (CalPERS). The design of the new study differs from this earlier analysis in several troubling, and unexplained, ways:

1.) A small non-random sample taken from an already small sample: This new RAND study is based on a small, non-random group of Teladoc users. Earlier, these same researchers studied the larger universe of all Teladoc users over most of the same period. Why did the researchers ignore the larger, more representative group of users, whom they had previously studied?

The recent study relied on data associated with the experience of the California Public Employees’ Retirement System (CalPERS), which began offering Teladoc to members in 2012. Specifically, the study compared the behavior of employees suffering from respiratory illnesses, between April 2012 and November 2013, who consulted Teladoc versus employees with similar characteristics who did not access medical treatment via telehealth. The total number of Teladoc users who served as the basis for this study was 981. During the 2011 to 2013 period, CalPERS had approximately 300,000 enrollees.

While the small sample size may be cause for concern, the non-random nature of this group is more disturbing. The new study explains: “We focused on visits for acute respiratory infections because they are the most common conditions for which patients seek direct-to-consumer telehealth treatment.” In the earlier 2014 study, published in Health Affairs, two of these same RAND-affiliated analysts studied the behavior of 2,718 Teladoc users from this same CalPERS group who sought treatment between April 2012 and February 2013, stating that “We obtained the complete medical claims of 2,718 Teladoc users, as well as the medical claims of a random sample [of] 72,191 nonusers of Teladoc from the 306,027 eligible enrollees with Teladoc coverage.” In short, these same researchers disregarded a larger, more representative group of Teladoc users whom they had previously studied, and for whom they had already gathered the necessary data on comparable non-telehealth users.

2.) A New, Far Shorter Definition of an Incident of Care: The study appears to have been designed to minimize the cost advantages of telehealth services, with the researchers adopting (without explanation) different standards of measurement than those applied in their 2014 analysis.

As detailed above, a central aspect in assessing the value and cost of telehealth is the cost of care required after initial treatment. To review, multiple studies conducted for Teladoc have found that telehealth patients are far less costly to treat, over time, because of fewer subsequent doctor visits, lower hospital readmission rates, and fewer visits to the E.R. These findings indicate that the convenience of telehealth leads individuals to seek treatment sooner, when they are less sick. Both major and minor illnesses are caught sooner, and the necessary treatments are provided at lower cost. Indeed, how many of us have said to a friend or relative: “I am not sick enough to see a doctor yet.” In many cases, this wait may mean we are far sicker when we do seek care, and may also mean we incur significantly higher treatment costs.

The critical factor in measuring this cost of subsequent care is the time period associated with an “incident of care”: the number of days following the first day of treatment over which all related care is counted. The cost of care delivered throughout this window is attributed to the full cost of the incident of care. In their 2014 study, the RAND researchers behind the current study adopted a 21-day period for this “clinical resolution” analysis. Separately, studies performed by independent researchers for Teladoc have adopted a 30-day incident-of-care analysis.

In sharp contrast, the recently released RAND study relied on a far shorter, two-day “episode of care” analysis. Hence, if an individual sought treatment for the same illness three days after the initial treatment, that care was presumably counted as an entirely new episode. No explanation for this much shorter window is provided in the study.
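To make the practical effect of this choice concrete, here is a minimal sketch, using entirely hypothetical claim dates rather than any data from these studies, of how the length of the episode window determines whether follow-up care is counted as part of the original incident or as new utilization:

    from datetime import date

    def group_into_episodes(claim_dates, window_days):
        """Group claim dates into episodes: a claim within `window_days` of the
        date that opened the current episode belongs to that episode; otherwise
        it starts a new one. A simplified illustration, not the methodology of
        the RAND or Veracity studies."""
        episodes = []
        for d in sorted(claim_dates):
            if episodes and (d - episodes[-1][0]).days <= window_days:
                episodes[-1].append(d)   # follow-up care, same incident
            else:
                episodes.append([d])     # counted as a new episode
        return episodes

    # Hypothetical patient: an initial visit, follow-up care three days later,
    # and further care twelve days after the first visit.
    claims = [date(2013, 1, 2), date(2013, 1, 5), date(2013, 1, 14)]

    print(len(group_into_episodes(claims, window_days=2)))   # 3 separate episodes
    print(len(group_into_episodes(claims, window_days=21)))  # 1 incident of care

Under a two-day window, all of the follow-up care is scored as new, stand-alone utilization; under a 21- or 30-day window, it is attributed, and its cost assigned, to the original visit.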

The use of just two days for an incident-of-care cost calculation is highly significant. First, it means that, in all likelihood, the RAND study dramatically understated the cost savings associated with telehealth visits. Even using the two-day cut-off, the current study did, in fact, find that it was less expensive to treat Teladoc patients than to treat patients who received initial care at the E.R., urgent care centers, or doctor offices, stating, “Per episode, telehealth visits were about 50 percent of the cost of a physician office visit and less than 5 percent of the cost of an ED visit.”

However, as noted above, other research has shown that the full cost advantages of telehealth increase substantially throughout the incident-of-care period, as the costs of care for non-telehealth patients are significantly higher in the days and weeks following the first day of treatment. It seems likely that if the 21-day incident-of-care cut-off had been applied to this study, as two of these same RAND-affiliated analysts applied in their 2014 study, the findings would have been reversed. Although I do not have access to the data, it appears likely that if the study had used the longer (and seemingly generally accepted) incident-of-care analysis, it would have found that telehealth led to overall cost savings rather than cost increases.

Second, it is possible that a 21-day or 30-day incident-of-care analysis would have shifted the other primary finding of the study: the extent to which telehealth leads to new usage as opposed to redirecting care. With a longer incident-of-care analysis, a portion of the doctor visits counted as “new” for both Teladoc and non-Teladoc users would have been considered part of the same episode of illness, and the number of “new” visits appearing in the total analysis would have dropped. How this change would affect RAND’s findings on new usage versus substitution is unknown. But this seemingly arbitrary change in the defining standard of measurement raises a reasonable question as to the validity of the substitution findings.

II.) Other Significant Concerns

1.) The cost of a 77-minute waiting period is attributed to each telehealth visit? The current study attached a 77-minute waiting-period cost, valued at $30 in lost productivity, to patients receiving care via telehealth. This assumption defies logic and suggests a strong negative bias among the researchers.

The study attempted to measure the full relative costs of seeking treatment via telehealth as compared to the alternatives. The researchers concluded that, on average, an in-person visit for care involved 37 minutes of travel time (valued at $13 in lost productivity), 77 minutes of waiting time (valued at $30), and 20 minutes with the doctor (valued at $7).

However, in comparing telehealth with the alternatives, the researchers stated that “Since we did not have time estimates directly related to telehealth, we assumed that the time spent on a telehealth visit was the same as the clinic time for the average office visit, so that the only time saved was travel time.” In short, each telehealth visit was assessed the lost-productivity cost of the full clinic time of an in-person visit (waiting plus doctor time, or 97 minutes using the figures above), with only travel time treated as saved. Of course, a central benefit of telehealth is the absence of waiting time at a care facility, since employees can continue their work until the doctor is available. And it’s hard to imagine that the average time spent with a doctor on a telehealth visit is substantially longer than the 20-minute in-person estimate used here, much less the full 97 minutes.
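Using only the per-visit time and lost-productivity figures quoted above, a back-of-the-envelope comparison shows how much of the cost attributed to a telehealth visit comes from this assumption. The numbers below simply restate the study’s valuations as reported in this article; the “no waiting room” alternative is my own assumption, offered for illustration, not the study’s:

    # Components of an average in-person visit, per the figures above
    # (minutes of lost time, dollars of lost productivity).
    TRAVEL_MIN, TRAVEL_COST = 37, 13
    WAIT_MIN, WAIT_COST = 77, 30
    DOCTOR_MIN, DOCTOR_COST = 20, 7

    # Study assumption: a telehealth visit consumes the same clinic time as an
    # office visit (waiting plus doctor), so only travel time is saved.
    study_minutes = WAIT_MIN + DOCTOR_MIN        # 97 minutes
    study_cost = WAIT_COST + DOCTOR_COST         # $37 in lost productivity

    # Alternative assumption (mine, for illustration): no waiting room at all,
    # only the time actually spent with the doctor.
    alt_minutes = DOCTOR_MIN                     # 20 minutes
    alt_cost = DOCTOR_COST                       # $7 in lost productivity

    print(f"Study assumption: {study_minutes} min, ${study_cost}")
    print(f"No-waiting-room assumption: {alt_minutes} min, ${alt_cost}")

The $30 difference between the two assumptions is attached to every telehealth visit in the study’s cost comparison, and it is precisely the waiting-room cost questioned here.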

The use of this assumption has two implications. First, and of least significance, the extra $30 lost-productivity cost associated with the waiting time, and possibly the $7 lost-productivity cost of the doctor visit (since many telehealth visits take place outside of working hours), should not be included in the study’s analysis of the costs associated with telehealth care. Moreover, for an unexplained reason, the researchers do not provide the underlying analyses showing their financial calculations, so the full impact of shifting this assumption is unknown.

Second, and of greater concern, is the irrationality of this telehealth waiting-room cost assumption. It appears to be such an extreme example of attaching inappropriate costs to telehealth visits that one is forced to question the impartiality of the study’s authors. Would anyone who is free of bias make such an assumption? And one must wonder whether other, equally inappropriate, assumptions are hidden in the study’s analyses.

2.) Potential misinterpretation of results and individual choices for care: The study fails to distinguish between the extra costs voluntarily assumed by telehealth patients and what appear to be net savings (as compared to higher costs) realized by CalPERS.

In the 2014 study, the RAND analysts indicate that the charge for a Teladoc visit was $38. The current study makes no mention of this patient fee and (as best I can determine) measures the total cost of care for telehealth patients versus those receiving care elsewhere.

While not stated directly, the implication of the study as a whole is that the introduction of telehealth increased the cost of care to CalPERS. The study finds that telehealth users increased costs by an average of $45.

When the $38-per-visit fee is combined with the adoption of the longer incident-of-care period discussed earlier, the elimination of the 77-minute telehealth waiting-room cost, and even the minimal substitution identified by RAND, my best reconstruction of the analysts’ model indicates that telehealth users in fact create significantly lower expenses for CalPERS than other care recipients.

I am not suggesting it’s good for anyone if, as the study suggests, telehealth patients receive unnecessary treatment. However, it is certainly worth acknowledging, as the study does not, the distinction between employee costs of care and employer costs. If costs are higher, this increased expense accrues to patients who made the decision that their symptoms merited paying the fees associated with this care. Is such preventive care, funded by the patient, a bad thing?

3.) The telehealth industry, and access to healthcare in general, are very different today. The RAND study examined the behavior of just 981 people, from a pool of roughly 300,000 potential users, at a time when the telehealth industry was in its infancy and telehealth was, perhaps, a novelty. It’s difficult to imagine that this small group of early adopters, even if it represented a random sample, reflects the likely behavior of today’s telehealth patients. Since these early years, there have been millions of new telehealth visits with high user satisfaction, telehealth has attracted far greater consumer interest as a mainstream source of care, and the basic fabric of the U.S. health system has shifted dramatically. Do the RAND researchers really believe important policy and industry decisions should now be based on the behavior of this small group, at such an early point in the development of this new form of care, particularly when our overall healthcare system was also very different?

4.) Unexplained differences in the findings of similar studies: There is, of course, no requirement that the RAND analysts explain why the findings of their study vary so dramatically from other analyses. However, the value and legitimacy of their findings would be greatly enhanced if they had provided such an explanation.

In a March 3, 2017 blog post, Teladoc’s Chief Medical Officer noted that the company has worked with Veracity Analytics and Dr. Niteesh Choudhry, a professor and researcher at Harvard Medical School, on five analyses, stating, “Collectively, the analyses were based on claims data spanning January 2011 through May 2016. These studies examine four different national U.S. populations covering 1.8 million beneficiaries and 22,000 Teladoc visits.”

Unlike RAND, Veracity Analytics found that each telehealth visit generates several hundred dollars in net savings, as compared to the average cost and likely usage of treatment for comparable illnesses.

RAND also concluded that only 12% of telehealth visits replaced higher cost alternatives. The remaining 88% of visits were, according to RAND, additive (and seemingly unnecessary). Teladoc asks each patient, at the time they receive care, what they would have done if telehealth had not been available. As detailed in MobiHealthNews:

Their [Teladoc’s] data -- even for that particular cohort -- is almost exactly the reverse of the Health Affairs study: 87 percent said they would have gone somewhere else to get care, while only 13 percent was additional utilization. The RAND study dismisses that survey data as subject to a number of psychological biases, but Teladoc's approach of trusting the patients themselves over statistical modeling certainly has an appeal.

It’s difficult to imagine that, as RAND’s findings would imply, fully 75% of all users misrepresented their decision-making process. (The 75% is derived by taking 100% and subtracting both the 12% substitution found by RAND and the 13% additional utilization reported by Teladoc.)

Of greater significance, studies performed for Teladoc have not simply trusted patient self-reporting. In one study, comparing the behavior of Teladoc users and non-users in a large employee group, Veracity concluded that Teladoc users actually under-reported their likely use of the E.R., and that Teladoc redirected care for almost 20% of patients who would otherwise have visited high-cost emergency rooms. This Veracity study examined the behavior of almost 6,000 Teladoc patients for care delivered between May 2012 and December 2013.

If the RAND analysts intended their study to serve as a guide for decision-makers, they could, at a minimum, have addressed possible reasons for the differences between their findings and those of Veracity Analytics, which examined far larger populations. Several of the Veracity studies are publicly available on Teladoc’s website, enabling anyone to criticize (as appropriate) the methodology involved. Instead, the RAND analysts effectively pretended these studies did not exist. Why?

Conclusion: Is there any relevance to the RAND study?

The current RAND study ends with a discussion of its implications for policy makers. However, the many issues discussed here suggest that policy makers and decision-makers of all types are best served by treating its results with the highest possible skepticism.

As discussed here:

  • The RAND researchers limited their analysis to a small, non-random sample when a larger, more representative sample appeared to be available.
  • The study abandoned the longer incident-of-care analysis previously used by two of these same researchers, a choice that most likely understated the cost savings associated with telehealth and may have influenced the findings related to new usage versus redirected care.
  • The study inexplicably attributed a long “waiting-room” cost to telehealth visits, which strongly suggests a lack of objectivity on the part of the study’s authors.
  • The study failed to mention the presence of telehealth visit fees or to distinguish between the additional costs borne by individuals and those borne by third-party payors, potentially leading readers to draw incorrect conclusions.
  • The study makes no attempt to explain why its findings are so different from those of similar analyses of larger populations of telehealth users.
  • The study examined a small sample of telehealth use when the industry was in its infancy. Since then, millions of telehealth visits have occurred with high user satisfaction, and telehealth is increasingly recognized as a mainstream form of healthcare delivery.
  • The study examined the value of telehealth several years ago, and the entire system for delivering healthcare in the U.S. has shifted in the intervening years.

When you put this all together, there appears to be little reason to give credence to this RAND analysis.

I welcome comments and discussion from anyone, in academia, medicine, or industry, who can explain why the above analysis lacks merit or otherwise add value to this article. In addition, I have written to the authors of this recent RAND study asking for their comments on the above analysis.
