Independence at Home Evaluation Findings Do Not Support Creating a Permanent Medicare Program

Letter to the Editor
Published: Nov 03, 2023
Publisher: Journal of the American Geriatrics Society (online ahead of print)
Associated Project

Evaluation of the Independence at Home Demonstration

Time frame: 2012-2023

Prepared for:

U.S. Department of Health and Human Services, Center for Medicare & Medicaid Innovation

Deligiannidis et al.'s commentary about the Centers for Medicare & Medicaid Services' (CMS) Independence at Home (IAH) demonstration uses findings from the evaluation to claim that IAH has convincingly reduced spending for its patients and that a new permanent Medicare program therefore ought to be made available to home-based primary care (HBPC) practices. Deligiannidis et al. either misunderstand or misconstrue the evaluation's results: the evaluation does not suggest that a new permanent Medicare program for HBPC practices similar to IAH would produce large Medicare savings. As the independent evaluator of the IAH demonstration, we offer responses to some of their misleading statements, drawing on Mathematica's reports available on CMS's website.

The evaluation found no compelling evidence that the IAH payment incentive measurably reduced spending in Years 1–6. We agree with Deligiannidis et al. that HBPC from IAH practices might have provided greater value in 2020, the first year of the COVID-19 pandemic, since our latest evaluation report found total spending reductions in Year 7. We were careful, however, not to generalize Year 7 results beyond 2020 because of the pandemic.

Deligiannidis et al. define “operational savings” as the difference each year between actual and target spending. Target spending from a projected trend is not a credible counterfactual for determining savings. Establishing a credible counterfactual requires a rigorous study design that minimizes likely sources of bias and a comparison group that approximates the spending trend IAH patients would otherwise be expected to incur. The difference-in-differences study design used throughout the evaluation is the optimal design in the absence of randomization. We used administrative data to identify patients for the IAH and comparison groups, who lived in the same areas and were similar on many measurable demographic, functional status, and health status characteristics.
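
To make the contrast concrete (a simplified sketch of the definitions at issue, not the evaluation's exact specification), the quantity Deligiannidis et al. call operational savings in year t is

\[
\text{Operational savings}_t \;=\; \text{Target}_t \;-\; \text{Actual spending}^{\text{IAH}}_t ,
\]

which depends entirely on how the target trend is projected rather than on what a comparable group of patients actually spent in the same year.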

Furthermore, Deligiannidis et al. limit their assessment of “evaluation savings” to the difference in spending between IAH and comparison patients in each performance year, which is incorrect and misleading because it does not account for the difference in spending between the two groups before the demonstration began in 2012. They claim that the difference in spending between the two groups in each performance year provides evidence that the demonstration “incentivized initially lower-performing practices to improve their value,” but the pre-demonstration difference is not a measure of the value of HBPC. Because IAH practices had been providing HBPC before 2012, a pre-demonstration difference suggests that preexisting, unmeasurable factors, correlated both with receiving HBPC from an IAH practice and with spending, confound the spending differences observed during the demonstration. The evaluation accounted for these differences by subtracting the adjusted pre-demonstration difference from the adjusted difference in spending between the IAH and comparison groups in each performance year. This component of the evaluation does not answer the question of whether HBPC from IAH practices reduced Medicare spending relative to usual care before the payment incentive began in 2012. However, we separately studied the effects of HBPC from IAH practices before 2012 and did not find evidence that HBPC from IAH practices reduced spending.
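
In simplified notation (a sketch of the general difference-in-differences logic, not the evaluation's exact regression specification), the evaluation's estimate for performance year t nets out the pre-demonstration gap:

\[
\widehat{\Delta}_t \;=\;
\underbrace{\left(\bar{Y}^{\text{IAH}}_t - \bar{Y}^{\text{comparison}}_t\right)}_{\text{performance-year difference}}
\;-\;
\underbrace{\left(\bar{Y}^{\text{IAH}}_{\text{pre}} - \bar{Y}^{\text{comparison}}_{\text{pre}}\right)}_{\text{pre-demonstration difference}} ,
\]

where \(\bar{Y}\) denotes adjusted average Medicare spending and a negative \(\widehat{\Delta}_t\) indicates savings. The “evaluation savings” that Deligiannidis et al. cite correspond to the first term alone; dropping the second term attributes any preexisting gap between the two groups to the demonstration.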

Aside from Deligiannidis et al.'s misinterpretation of the evaluation's results, those results, even when correctly interpreted, do not generalize to other HBPC practices because of the small number of IAH participants (14, including 5 from the same corporation). Among this small group, results have varied considerably (by more than two-thirds in Year 5, for example), which underscores the lack of generalizability.

There are many reasons to believe that HBPC delivered by IAH practices has enhanced patient-centered care for homebound patients, particularly during the first year of the COVID-19 pandemic. After 7 years, however, the evaluation has not produced compelling evidence of Medicare savings from HBPC delivered by IAH practices or from the IAH payment incentive, especially after accounting for incentive payments made to practices, which Deligiannidis et al. ignore. Additionally, the start of IAH predated several care management and related services now reimbursed under fee-for-service Medicare, suggesting that the IAH payment incentive may be duplicative. Furthermore, IAH, as a one-sided shared savings model for small practices, has not been successful based on participation trends alone (participation fell from 18 practices in Year 1 to 12 in Year 6 and just 1 in Year 10), so it is unlikely to be viable in a different form. In sum, 7 years of evidence from IAH does not suggest that making a new permanent Medicare program available to other HBPC practices after the COVID-19 pandemic would considerably reduce Medicare spending.
