Internal and External Validity of the Comparative Interrupted Time-Series Design: A Meta-Analysis
This paper meta-analyzes 12 heterogeneous studies that examine bias in the comparative interrupted time-series (CITS) design, which is often used to evaluate the effects of social policy interventions. To measure bias, each CITS impact estimate was differenced from the estimate derived from a theoretically unbiased causal benchmark study that tested the same hypothesis with the same treatment group, outcome data, and estimand. In 10 studies, the benchmark was a randomized experiment; in the other two, it was a regression-discontinuity study. Analyses revealed the average standardized CITS bias to be between −0.01 and 0.042 standard deviations, and all but one bias estimate from individual studies fell within 0.10 standard deviations of its benchmark, indicating that the near-zero mean bias did not result from averaging many large single-study differences. The low mean and generally tight distribution of individual bias estimates suggest that CITS studies are worth recommending for future causal hypothesis tests because: (1) over the studies examined, they generally resulted in high internal validity; and (2) they also promise high external validity, because the empirical tests we synthesized occurred across a wide variety of settings, times, interventions, and outcomes.