Testing the Replicability of a Successful Care Management Program: Results from a Randomized Trial and Likely Explanations for Why Impacts Did Not Replicate
Objective: To test whether a care management program could replicate its success from an earlier trial and to identify likely explanations for why it did not.

Data Sources: Medicare claims and nurse contact data for Medicare fee-for-service beneficiaries with chronic illnesses enrolled in the trial in eastern Pennsylvania (N = 483).

Study Design: A randomized trial in which half of enrollees received intensive care management services and half received usual care. We developed and tested hypotheses for why impacts declined.

Data Collection/Extraction Methods: All outcomes and covariates were derived from claims and the nurse contact data.

Principal Findings: From 2010 to 2014, the program did not reduce hospitalizations or generate Medicare savings to offset program fees, which averaged $260 per beneficiary per month. These estimates are statistically different (p < .05) from the large reductions in hospitalizations and spending in the first trial (2002–2010). The treatment–control differences in the second trial disappeared because the control group's risk-adjusted hospitalization rate improved, not because the treatment group's outcomes worsened.

Conclusions: Successful results from one test, even a randomized trial, may not replicate in other settings or time periods. Assessing whether other settings exhibit the gaps in care that the original program filled can help identify where earlier success is likely to replicate.