This guide walks researchers through the key steps of applying BASIE, including selecting prior evidence, reporting and interpreting impact estimates, and conducting sensitivity analyses.
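Although the mechanics are covered in the guide itself, the core BASIE calculation can be sketched as a conjugate normal-normal update: a prior on the effect size, drawn from past evidence, is combined with the study's impact estimate and standard error to produce a posterior distribution, which supports probability statements such as the chance that the true effect is positive. The sketch below is illustrative only; all numeric values (prior mean and SD, estimate, standard error) are hypothetical assumptions, not figures from the guide.

```python
import numpy as np
from scipy import stats

# All values below are hypothetical, chosen only to illustrate the update.
prior_mean, prior_sd = 0.03, 0.10  # prior evidence on the effect size (e.g., past studies)
estimate, se = 0.15, 0.08          # impact estimate and standard error from the evaluation

# Conjugate normal-normal update: precisions (inverse variances) add.
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + estimate / se**2)
post_sd = np.sqrt(post_var)

# Posterior probability that the true effect exceeds zero (or any other threshold).
p_positive = 1.0 - stats.norm.cdf(0.0, loc=post_mean, scale=post_sd)
print(f"posterior mean = {post_mean:.3f}, posterior SD = {post_sd:.3f}")
print(f"P(effect > 0) = {p_positive:.2%}")
```

In this framing, a sensitivity analysis amounts to re-running the update with alternative priors and checking whether the resulting probability statements change materially.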
Related Publications for John Deke
- The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations: A Practical Guide for Education Researchers (Apr 29, 2022)
- Impacts of a Home Visiting Program Enhanced with Content on Healthy Birth Spacing (Sep 24, 2020)
This study sought to determine the impact of Healthy Families Healthy Futures (HFHF) enhanced with Steps to Success (STS).
- Putting Rigorous Evidence Within Reach: Lessons Learned from the New Heights Evaluation (Sep 01, 2020)
This article uses an evaluation of New Heights, a school-based program for pregnant and parenting teens in the District of Columbia Public Schools, to illustrate how maternal and child health programs can obtain rigorous evaluations at reasonable cost using extant administrative data.
- The Effects of a Principal Professional Development Program Focused on Instructional Leadership (Study Highlights) (Oct 30, 2019)
Helping principals improve their leadership practices is a common use of federal funds and one way to improve instruction and student achievement.
- The Effects of a Principal Professional Development Program Focused on Instructional Leadership (Oct 30, 2019)
The Institute of Education Sciences conducted a random assignment study of an intensive professional development program for elementary school principals.
- Moving Beyond Statistical Significance: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations (Mar 15, 2019)
This brief describes an alternative framework for interpreting impact estimates, known as the BAyeSian Interpretation of Estimates (BASIE).
- Causal Validity Considerations for Including High Quality Non-Experimental Evidence in Systematic Reviews (Jun 30, 2018)
Federally funded systematic reviews of research evidence play a central role in efforts to base policy decisions on evidence.
- Asymdystopia: The Threat of Small Biases in Evaluations of Education Interventions that Need to be Powered to Detect Small Impacts (Oct 03, 2017)
The authors examine how small biases can increase the risk of false inferences as studies are powered to detect smaller impacts, and they recommend strategies researchers can use to avoid or mitigate these biases.
- The New Heights Evaluation: The Impact of New Heights on Closing the Achievement Gap (Jun 29, 2017)
This brief summarizes the impacts of New Heights, a program serving expectant and parenting teens in the District of Columbia Public Schools (DCPS). The program is being evaluated as part of the Positive Adolescent Futures study, funded by the Office of Adolescent Health (OAH).
- Matched Comparison Group Design Standards in Systematic Reviews of Early Childhood Interventions (Jun 01, 2017)
Systematic reviews that assess the quality of research on program effectiveness can help decision makers faced with many intervention options.
- Raising the Bar: Impacts and Implementation of the New Heights Program for Expectant and Parenting Teens in Washington, DC (Apr 24, 2017)
This report shares the findings from an impact and implementation study of New Heights, a DC Public Schools program that provides a multi-faceted approach for supporting parenting students’ educational attainment.
- The WWC Attrition Standard: Sensitivity to Assumptions and Opportunities for Refining and Adapting to New Contexts (Apr 01, 2017)
This article explains the WWC attrition model, describes how that model is used to establish attrition bounds, and assesses the sensitivity of those bounds to key parameter values.
- School Improvement Grants: Implementation and Effectiveness (In Focus Brief) (Jan 18, 2017)
This brief summarizes findings from a new report from Mathematica’s multiyear evaluation of School Improvement Grants (SIG) for the Department of Education’s Institute of Education Sciences. It describes the practices schools used and examines the impact of SIG on student achievement.
- School Improvement Grants: Implementation and Effectiveness (Executive Summary) (Jan 18, 2017)
This executive summary describes key findings from a report from Mathematica’s multiyear evaluation of School Improvement Grants (SIG) for the Department of Education’s Institute of Education Sciences. It describes the practices schools used and examines the impact of SIG on student achievement.
- School Improvement Grants: Implementation and Effectiveness (Final Report) (Jan 18, 2017)
This report summarizes findings from Mathematica’s multiyear evaluation of School Improvement Grants (SIG) for the Department of Education’s Institute of Education Sciences. It describes the practices schools used and examines the impact of SIG on student achievement.
- Race to the Top: Implementation and Relationship to Student Outcomes (Oct 26, 2016)
This report summarizes findings from Mathematica’s multiyear evaluation of Race to the Top (RTT) for the Department of Education’s Institute of Education Sciences.
- Race to the Top: Implementation and Relationship to Student Outcomes (In Focus) (Oct 26, 2016)
This brief summarizes findings from a new report from Mathematica’s multiyear evaluation of Race to the Top (RTT) for the Department of Education’s Institute of Education Sciences.
- Race to the Top: Implementation and Relationship to Student Outcomes (Executive Summary) (Oct 26, 2016)
This executive summary describes key findings from a report from Mathematica’s multiyear evaluation of Race to the Top (RTT) for the Department of Education’s Institute of Education Sciences.
- Design and Analysis Considerations for Cluster Randomized Controlled Trials That Have a Small Number of Clusters (Oct 01, 2016)
Cluster randomized controlled trials (CRCTs) often require a large number of clusters to detect small effects with high probability.
- Addressing Attrition Bias in Randomized Controlled Trials: Considerations for Systematic Evidence Reviews (Jul 30, 2015)
This report examines the Home Visiting Evidence of Effectiveness Review's attrition standard and how its boundary responds to changes in two fundamental assumptions: (1) the correlation between outcomes and attrition, and (2) the level of attrition bias deemed acceptable.
- Using the Linear Probability Model to Estimate Impacts on Binary Outcomes in Randomized Controlled Trials (Dec 30, 2014)
In this brief we examine methodological criticisms of the Linear Probability Model (LPM) in general and conclude that these criticisms are not relevant to experimental impact analysis.
- Understanding Variation in Treatment Effects in Education Impact Evaluations: An Overview of Quantitative Methods (May 22, 2014)
Variation in treatment effects has important implications for education practice and for the efficient use of limited resources, informing decisions about how best to target interventions and how to improve their design or implementation.
- Effectiveness of Supplemental Educational Services (Apr 30, 2014)
A provision of the No Child Left Behind Act (the 2001 reauthorization of the Elementary and Secondary Education Act) gave parents of low-income students in low-performing schools a choice of Supplemental Educational Services (SEdS).
- Frequently Asked Questions About the Implications of Clustering in Clustered Randomized Controlled Trials (RCTs) (Dec 30, 2013)
This update answers frequently asked questions about the implications of clustering in clustered randomized controlled trials (RCTs).
- Coping with Missing Data in Randomized Controlled Trials (May 30, 2013)
In this brief we describe strategies for coping with missing data in RCTs.
- Frequently Asked Questions: Evaluation Start-Up (Jul 30, 2011)
This update features answers to frequently asked questions about starting an effectiveness evaluation, such as gathering consent and collecting baseline data.
- Precision Gains from Publically Available School Proficiency Measures Compared to Study-Collected Test Scores in Education Cluster-Randomized Trials (Oct 30, 2010)
This paper compares the precision gains from adjusting impact estimates for student-level pretest scores (which can be costly to collect) with the gains associated with using publicly available school-level proficiency data (available at low cost), using data from five large-scale randomized controlled trials.
- The Effectiveness of Mandatory-Random Student Drug Testing (Jul 30, 2010)
This report presents findings from an evaluation of the mandatory-random student drug testing (MRSDT) programs in 36 high schools from seven districts that received grants from the U.S. Department of Education’s Office of Safe and Drug-Free Schools in 2006.
- Effectiveness of Selected Supplemental Reading Comprehension Interventions: Findings from Two Student Cohorts (May 30, 2010)
This study evaluated the effectiveness of four supplemental reading comprehension programs in helping disadvantaged fifth graders improve reading comprehension.