Monitoring, Evaluation, and Learning
Monitoring, evaluation, and learning (MEL) frameworks give grant makers tools to assess and improve programs in which multiple grantees pursue similar objectives through different means or in different contexts.
MEL frameworks often include logic models, evaluation matrices, and learning products.
Logic models show how projects are meant to work, which activities must precede others, and how outcomes are expected to be achieved. They foster a common understanding among grant makers, grantees, and MEL partners, and they help generate monitoring and evaluation questions and learning objectives.
Evaluation matrices make explicit the relationships among monitoring and evaluation questions, the indicators and measures used to answer those questions, the data sources for those indicators and measures, and the analytic methods applied.
Learning products, such as written products and facilitated discussions, often used in combination, are geared toward timely, actionable learning among selected stakeholders.
Foundations, government agencies, and other grant makers can use MEL when they need a systematic way to monitor comprehensively, evaluate selectively, and learn continuously to support their programs or initiatives.
- Monitoring comprehensively means tracking project objectives, theories of change, implementation plans, and key performance indicators to illuminate the successes and challenges grantees face, both collectively and individually.
- Evaluating selectively means taking a deeper analytic look at a subset of projects, chosen for using innovative strategies, achieving desired outcomes, or operating in a diverse set of contexts.
- Learning continuously means having relevant, timely information as grant initiatives unfold and implementation improvements are still possible.