Mathematica is committed to building local capacity in all aspects of research—from design and data collection to analysis and report writing. We work collaboratively with local institutions, government agencies, and consultants around the world to train staff and build capacity for program monitoring and rigorous evaluation of social programs.
In several of our studies, we have conducted workshops on impact evaluation design. These presentations help government staff and other key stakeholders understand randomized experiments as a rigorous evaluation method and, more broadly, give stakeholders the knowledge they need to support and champion evaluation strategies. We have also provided technical assistance to local evaluators on evaluation design and other technical issues.
Our LAC Reads evaluation project will identify and work with local researchers as co-principal investigators on each of our evaluations of early-grade reading interventions in the countries identified by USAID. In addition to working closely with and mentoring local researchers, we will conduct formal training in impact evaluation at the beginning of each evaluation for all local stakeholders, including staff of the Ministry of Education, implementing partners, and local research and data collection partners. Similarly, in many of our impact evaluations for the Millennium Challenge Corporation (MCC), we provide training in impact evaluation for in-country stakeholders. For example, in our work with MCC in Armenia, we conducted presentations for government staff and other key stakeholders to build their acceptance of a randomized experiment as a rigorous evaluation method. We also trained staff of Armenia’s National Statistical Service on best practices for collecting high-quality household survey data.
In June 2012, Mathematica conducted a one-day workshop on impact evaluation in Niamey, Niger, with local education stakeholders. The workshop provided a broad overview of impact evaluation topics so that stakeholders could weigh the best options for the evaluation of the NECS program. Sessions covered an introduction to impact evaluation, causal inference and the counterfactual, experimental methods, non-experimental methods, and sampling and statistical power. Workshop participants included representatives of the Ministry of Education, MCC, MCA Niger, USAID, Plan International, Aide et Action, and Volontaires pour l'Intégration Educative (VIE KANDE NI BAYRA).
In our work with Ukraine’s Ministry of Labour and Social Policy (MLSP), we developed a two-day training on impact evaluation design and performance monitoring for staff, using examples from the MLSP’s social protection program evaluation efforts. In the Democratic Republic of the Congo (DRC), we prepared a training course on monitoring and evaluation for USAID and contractor staff. As part of our evaluation of Jamaica’s PATH safety net reform, we provided a two-day training on impact evaluation, process evaluation, and program monitoring to various government stakeholders, drawing on examples from the PATH program evaluation. For staff in Mexico’s Social Development Ministry, we conducted a one-day course on impact evaluation design. Our study for Mexico’s Human Development Ministry provided technical assistance to local evaluators on evaluation design and other technical issues.