RD or Not RD: Using Experimental Studies to Assess the Performance of the Regression Discontinuity Approach

Published: Feb 28, 2018
Publisher: Evaluation Review, vol. 42, no.
Authors

Alexandra Resch

Background

This article explores the performance of regression discontinuity (RD) designs for measuring program impacts using a synthetic within-study comparison design. We generate synthetic RD data sets from the experimental data of two recent evaluations of educational interventions, the Educational Technology Study and the Teach for America Study, and compare the RD impact estimates to the experimental estimates of the same interventions.

Objectives

This article examines the performance of the RD estimator when the design is well implemented and assesses the extent of bias introduced by manipulation of the assignment variable in an RD design.

Research Design

We simulate RD analysis files by selectively dropping observations from the original experimental data files. We then compare impact estimates based on this RD design with those from the original experimental study. Finally, we simulate a situation in which some students manipulate the value of the assignment variable to receive treatment and compare RD estimates with and without manipulation.
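As a rough illustration of this kind of procedure, the sketch below builds a synthetic RD file from simulated experimental data by keeping treatment-group members at or above a cutoff on a baseline score and control-group members below it, then compares a local linear RD estimate with the experimental difference in means. The column names (pretest, treat, posttest), the cutoff, the bandwidth, and the data-generating process are all illustrative assumptions, not taken from the studies analyzed in the article.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf


def make_rd_sample(df, cutoff):
    """Drop observations so treatment is a deterministic function of the
    assignment variable: treated units stay only at/above the cutoff,
    control units only below it."""
    keep = ((df["treat"] == 1) & (df["pretest"] >= cutoff)) | (
        (df["treat"] == 0) & (df["pretest"] < cutoff)
    )
    return df[keep].copy()


def rd_estimate(df, cutoff, bandwidth):
    """Local linear RD: within a window around the cutoff, regress the
    outcome on treatment and the centered assignment variable, letting
    the slope differ on each side of the cutoff."""
    window = df[np.abs(df["pretest"] - cutoff) <= bandwidth].copy()
    window["centered"] = window["pretest"] - cutoff
    fit = smf.ols("posttest ~ treat + centered + treat:centered", data=window).fit()
    return fit.params["treat"]


# Illustrative experimental file: random assignment (`treat`), a baseline
# score (`pretest`), and an outcome (`posttest`) with a true impact of 5.
rng = np.random.default_rng(0)
n = 5000
pretest = rng.normal(50, 10, n)
treat = rng.integers(0, 2, n)
posttest = pretest + 5 * treat + rng.normal(0, 8, n)
exp = pd.DataFrame({"pretest": pretest, "treat": treat, "posttest": posttest})

experimental = (
    exp.loc[exp["treat"] == 1, "posttest"].mean()
    - exp.loc[exp["treat"] == 0, "posttest"].mean()
)
rd = make_rd_sample(exp, cutoff=50.0)
print(f"experimental impact: {experimental:.2f}")
print(f"RD impact:           {rd_estimate(rd, cutoff=50.0, bandwidth=10.0):.2f}")
```

Because the synthetic RD sample is carved out of a randomized experiment, any gap between the two printed estimates reflects the RD design itself rather than differences in the population or intervention.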

Results and Conclusion

RD and experimental estimators produce impact estimates that are not significantly different from one another and have a similar magnitude, on average. Manipulation of the assignment variable can substantially influence RD impact estimates, particularly if manipulation is related to the outcome and occurs close to the assignment variable’s cutoff value.
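To see how outcome-related manipulation near the cutoff can move the estimate, the following continuation of the sketch above (reusing its rd data frame and rd_estimate function) lets the higher-outcome control students just below the cutoff push their assignment variable over the line and receive treatment. The share of students manipulating, the width of the window in which manipulation occurs, and the variable names are again illustrative assumptions.

```python
def add_manipulation(rd_df, cutoff, window=2.0, share=0.5):
    """Worst case flagged in the article: manipulation is related to the
    outcome and occurs close to the cutoff. The highest-outcome control
    students within `window` points below the cutoff nudge their
    assignment variable just over the line and receive treatment."""
    out = rd_df.copy()
    near = out[(out["treat"] == 0) & (cutoff - out["pretest"]).between(0, window)]
    movers = near.nlargest(int(share * len(near)), "posttest").index
    out.loc[movers, "pretest"] = cutoff + 0.1  # just over the line
    out.loc[movers, "treat"] = 1
    out.loc[movers, "posttest"] += 5  # manipulators now receive the true impact
    return out


manipulated = add_manipulation(rd, cutoff=50.0)
print(f"RD, no manipulation:   {rd_estimate(rd, cutoff=50.0, bandwidth=10.0):.2f}")
print(f"RD, with manipulation: {rd_estimate(manipulated, cutoff=50.0, bandwidth=10.0):.2f}")
```

Because the manipulators are drawn from the high-outcome end of the control group just below the cutoff, they both inflate the treated side of the discontinuity and deflate the control side, so the manipulated RD estimate drifts away from the benchmark.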
