MONITORING AND EVALUATION


Monitoring and evaluation (often called M&E) combines ongoing data collection and analysis (monitoring) with an assessment of the extent to which a program or intervention has, or has not, met its objectives (evaluation). M&E is used to assess the performance of projects, programs, and social initiatives.

In practice, much M&E work has been time-consuming and limited to activity and output reporting that amounts to little more than "impact justification." While this article documents core M&E terms and practices, those looking to innovate and scale social impact should also refer to our "Impact Measurement" resources.


DEFINITION AND HISTORY

WHAT IS MONITORING?

Monitoring is periodic and ongoing, conducted from program initiation and throughout the duration of the program or intervention. The data acquired is primarily input- and output-focused and is generally used as an ongoing strategy to determine the efficiency of implementation. For example, an NGO delivering training for school teachers might track monthly the number of sites visited, trainings delivered, teachers trained, and so on.

Key questions to consider for monitoring strategy include:

  • What key metrics can give us an idea of the state of implementation?
  • Do we have lean data collection and analysis processes?
  • How efficiently are we implementing our program(s)?
  • Based on the data acquired, do we need to make any changes to our program(s)?

A monitoring plan usually focuses on the processes occurring during the implementation of a program. These can include tracking the following during defined periods of time:

  • When programs were implemented
  • The location or region in which programs were delivered
  • Which departments or teams delivered activities
  • How often certain activities occurred
  • Number of people reached through a program's activities
  • Number of products delivered (or number of hours of a service)
  • Costs of program implementation

 

WHAT IS EVALUATION?

A program evaluation focuses on the performance of the intervention and is principally used to determine whether beneficiaries have actually benefited from its activities. It generally looks at outcomes, assessing whether a change occurred between the outset and the end of an intervention (or at least between two specific points in time). Ideally, that change can be attributed to the activities undertaken.

Key questions that an evaluation considers:

  • Did our activities make a measurable difference in our target beneficiary group(s)?
  • How much can the changes observed be attributed to our activities?
  • What contributed to our success (or failure)?
  • Can we scale observed changes? Or replicate in other contexts?
  • Did we achieve impacts in a cost-effective way?
  • Have any unexpected results occurred?

At the outset of a program it is important to acquire baseline data, which will be used to compare progress at every evaluation interval and at the end of the program period. When thinking about how to measure outcomes (changes that have occurred), consider the following key elements (a minimal sketch of the baseline comparison follows this list):

  • Understand how your inputs, outputs, activities, etc. generate change (see Theory of Change)
  • Design your evaluation plan (i.e. research plan) before launching a program or intervention
  • Use outcomes that are relevant for your beneficiaries
  • Use data collection methods that fit the needs of beneficiaries and the skills of your employees
  • Incentivize beneficiaries to provide you with data at key intervals
  • Ensure you have adequate data management and analysis tools (and people who know how to use them)
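
To make the baseline comparison concrete, here is a minimal sketch in Python. All metric names and figures are hypothetical; the computation is simply the difference between endline and baseline values for each metric:

```python
# Minimal sketch: comparing baseline and endline data to estimate outcome
# change. Metric names and figures below are hypothetical.

baseline = {"avg_reading_score": 42.0, "teachers_trained": 0}
endline = {"avg_reading_score": 57.5, "teachers_trained": 180}

def outcome_change(baseline: dict, endline: dict) -> dict:
    """Return the absolute change for every metric present at both intervals."""
    return {
        metric: endline[metric] - baseline[metric]
        for metric in baseline
        if metric in endline
    }

print(outcome_change(baseline, endline))
# {'avg_reading_score': 15.5, 'teachers_trained': 180}
```

A real evaluation would also need a comparison group or attribution data before crediting the program with the observed change.
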
IMPORTANCE OF M&E

MONITORING AND EVALUATION BENEFITS

When done effectively, monitoring and evaluation reaps benefits for stakeholders up and down the spectrum of activities carried out by an organization. In general, it guides strategic decision-making both during and after program execution.

Benefits accrue to different stakeholders, including:

  • Beneficiaries: Follow-up processes (data collection) signal that the organization genuinely cares about results. The data can be used to improve both the efficiency of implementation and the program design itself, improving outcomes for beneficiaries.
  • Employees: M&E can generate more buy-in and trust in the organization's commitment to the mission if there is a clear effort to not only assess progress but use that assessment to get better at delivering impact. For employees in contact with beneficiaries (e.g., "on the ground"), conducting evaluation assessments can also generate more trust between those employees and the beneficiary community.

    New, often unforeseen insights can emerge, helping employees discover more effective ways to deliver programs and create impact.
  • Executive management: Determining changes to strategic direction becomes much more data-driven with the ongoing data and analyses from M&E processes. Adaptation ideally becomes more agile. With relevant and comprehensive data (both process-related and impact-related), executives can build much more persuasive arguments.
  • Funders: Money for impact flows to where the data is, and good M&E implementation can open up that flow because it builds impact credibility and a more transparent understanding of how much impact can be generated per dollar invested.

REAL-WORLD MONITORING AND EVALUATION FRAMEWORK

The Multidimensional Poverty Assessment Tool (MPAT)

  • Output-to-outcome driven approach
  • Multi-country output and outcome measurement
  • Streamlined offline data collection feeding a real-time donor dashboard

HOW TO BUILD

MONITORING AND EVALUATION PLAN



Most organizations start with an impact strategy. Depending on their background, they will design an outcome-oriented approach using one of the following frameworks:

  • Theory of Change
  • Logic Model
  • Logframe
  • Results-Based Accountability

The next step is to design quality metrics.

Putting quantitative (or qualitative) tools to work means defining the right indicators to measure. An indicator is a metric used to measure some aspect of a program. In the planning stages, the indicators that will be used throughout the monitoring and evaluation processes should be defined. This enables organizations to measure the extent to which what they intend to happen actually happens.

Indicators can be both quantitative and qualitative, depending on what needs to be measured and in what ways.

Quantitative Indicators

  • Primarily output-focused, they help organizations determine whether activities are taking place, when, and to what extent.
  • By definition, numbers are used to communicate quantitative measures (percentages, ratios, dollar sums, etc.).

Qualitative Indicators

  • Involve subjective terms.
  • Often outcome-focused, they can help organizations determine whether a change has occurred by gathering perceptions from beneficiaries.
  • Data accuracy can be difficult to assess, given the subjective nature of collecting judgments about change (see the example below).

 

Examples of M&E indicators

Using the example of a social enterprise that employs a 1-for-1 model (you buy a pair of shoes, we donate a pair of shoes to a person in need) we can examine some potential indicators for their donation program over a period of one year.

Quantitative

  • Number of shoes donated
  • Number of lives affected
  • Amount of money saved in the beneficiary group (not having to buy shoes)

Qualitative

  • Perception of change in the quality of life after receiving shoes (survey beneficiaries)
  • Types of opportunities generated by the reception of shoes (defined by beneficiaries)

Using a Combination of Indicators to Determine Attribution

As we can see, a pure count of shoes donated doesn't tell us what impact has been generated; it only implies it. By also collecting qualitative, outcomes-focused data, the organization gets a better idea of the impact of those shoes on people who did not have them. It could also measure income level before and after the shoes (for adults) or the number of school days attended (for children).

The best indicators help organizations establish a clear attribution between the intervention (shoes distributed) and the impact(s) generated. In this example, many other variables could contribute to increased income or school attendance. Gathering qualitative data, specifically asking to what extent the shoes had to do with any observed changes in those areas, would increase the level of attribution the organization might report.
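
One simple way to combine the two kinds of indicators, sketched below with hypothetical numbers, is to weight an observed quantitative change by the average attribution that beneficiaries themselves report in a qualitative survey:

```python
# Illustrative sketch (hypothetical data): combining a quantitative outcome
# measure with qualitative attribution ratings gathered from beneficiaries.
# Each rating (0.0 to 1.0) expresses how much of an observed income change
# the respondent attributes to the shoes they received.

observed_income_change = 120.0  # average change in monthly income (USD)
attribution_ratings = [0.8, 0.5, 0.9, 0.6, 0.7]  # from beneficiary surveys

mean_attribution = sum(attribution_ratings) / len(attribution_ratings)
attributed_change = observed_income_change * mean_attribution

print(f"Mean attribution: {mean_attribution:.0%}")                       # 70%
print(f"Change attributable to the program: ${attributed_change:.2f}")   # $84.00
```

This is only a heuristic; rigorous attribution requires an evaluation design with a comparison group, as discussed under Types of Evaluation below.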

Streamlining a Monitoring and Evaluation Framework

More than 700 million people, or 10% of the world population, still live in extreme poverty and struggle to fulfill the most basic needs, such as health, education, and access to water and sanitation. The Multidimensional Poverty Assessment Tool (MPAT) gives front-line managers a clearer understanding of rural poverty at the household and village level. MPAT can significantly strengthen the planning, design, monitoring, and evaluation of a project and contribute to rural poverty reduction. However, there are many practical challenges to implementing and streamlining the end-to-end flow from data collection to reporting.

  • How do we create monitoring and evaluation tools and frameworks that align with SDG impact?
  • How did Food For The Poor choose their monitoring and evaluation indicators? Food For The Poor works to uplift poor children, families, and communities in need by providing essentials and long-term development opportunities across 18 Latin American and Caribbean countries.
  • How are they managing their outcomes?
  • How do we streamline offline data collection into real-time dashboards?
  • How do we automate scoring that aligns with MPAT calculations?
  • How do we align with the SDGs and share results with donors?

IMPLEMENTATION

TYPES OF EVALUATION

Evaluation Methods and Evaluation Design

Determining an evaluation method requires first defining the unique objectives of the organization or specific program. In general, those objectives are determined by asking: What is the impact being sought? What is the purpose of the evaluation?

In addition to those objectives, other variables include the context of the target beneficiary group and the organizational resources available (money, skills, tools, etc.). Of course, focus should be given to achieving the overarching objectives.

While not an exhaustive list, the following are some of the main evaluation approaches, each serving distinct organizational objectives:
  • Formative Evaluation
  • Process Evaluation
  • Outcome Evaluation
  • Impact Evaluation

A formative evaluation is most often conducted before a program begins, to examine its feasibility and determine its relevance to the organization's overall strategic objectives. It can also occur during program implementation, especially if there is a need to modify the program; at that point, the formative evaluation can be used to assess the feasibility of a new design.

A formative evaluation can improve a program's probability of success because it encourages practitioners to confirm viability and detect potential problem areas at the outset, while also promoting accountability during implementation.

A process evaluation is carried out during the implementation phase of a program. As the name suggests, it focuses on the processes being carried out: inputs, activities, outputs, etc. It identifies any issues with the efficiency of implementation.

For example, it can establish whether targets were not met because of a lack of human resources (skills) or appropriate tools, or because of unforeseen contextual obstacles (e.g., beneficiaries lacked time to engage with the program), which ultimately affected program outcomes.

If there don’t seem to be process-related issues, the evaluation can help illuminate issues with the change model itself, encouraging a needed rethinking of how to affect change for the target group.

The use of periodic assessments during implementation is one of the most important components of a process evaluation. It allows organizations to re-design during execution if needed, to increase reach, re-allocate resources, and so on.

An outcome evaluation aims to determine whether overall program objectives have been met. In that process, practitioners also identify what might have spurred or limited those changes. Finally, it helps shed light on unexpected changes in the target beneficiary population at the end of the intervention or at the point of evaluation.

Given that scope, an outcome evaluation generally looks at a program’s results over a longer period of time (although this also depends on how quickly or slowly the change is expected to occur).

It can also help pinpoint which areas of a program were more or less effective than others. Most importantly, with an outcome evaluation, an organization determines whether there was a change in beneficiaries' lives.

For this reason, it can be important to use qualitative measures and participatory methods to extract from beneficiaries their perception of any observed changes.

 

An impact evaluation gets to the heart of a program's true effectiveness by determining attribution, that is, the extent to which the changes observed (outcomes) can be causally connected to the activities carried out during the program period.

Timing is an important element of any impact evaluation. Conducted too early, it will find that the intervention has not had time to create observable change; conducted too late, its insights may arrive too late to inform decision-making.

Ultimately, an impact evaluation helps organizations understand which activities generated impacts and, based on those insights, what tweaks could be made to maximize an intervention's effectiveness in generating the desired outcomes.

DESIGNING

MONITORING AND EVALUATION DATA COLLECTION AND DATA AGGREGATION

Is your monitoring and evaluation system working for you? Are you able to provide impact reports or grant reports to funders on time? Is your data sitting in so many different systems that impact reports take a long time to build?

While many nonprofits and social enterprises collect data, most of what they collect is left unused or is not actionable. A well-designed data strategy can significantly increase your information's integrity and usability, making it easier to make continuous improvements and tailor your direct-impact reporting to each specific funder.

This article will show you seven best practices that will help you simplify data aggregation from different sources and build funder-specific impact reports with ease.

Most monitoring and evaluation systems fail to provide scalable solutions to aggregate results regularly.  Those that do may require a significant amount of customization or manual data aggregation.

Start with your monitoring and evaluation data strategy

A complete data strategy will allow you to identify all your data sources and understand whether any streams need to be merged or processed.

  • Define your metrics and align each with an activity, output, or outcome.
  • Align metrics with your data (a minimal sketch follows this list).
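
As a concrete illustration, a simple metric catalog can record each metric's place in the results chain and its data source, so every collected data point maps cleanly to an activity, output, or outcome. The metric names and sources below are hypothetical:

```python
# Minimal sketch of a metric catalog aligning each metric with its level in
# the results chain. Metric names and data sources are hypothetical.
from dataclasses import dataclass

@dataclass
class Metric:
    name: str
    level: str        # "activity", "output", or "outcome"
    data_source: str  # where the raw data originates

METRICS = [
    Metric("training_sessions_held", "activity", "field_app"),
    Metric("teachers_trained", "output", "field_app"),
    Metric("avg_student_reading_score", "outcome", "annual_survey"),
]

# Group metric names by level so each report pulls the right slice.
by_level: dict = {}
for m in METRICS:
    by_level.setdefault(m.level, []).append(m.name)

print(by_level)
# {'activity': [...], 'output': [...], 'outcome': [...]}
```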

Align your results to funder-specific programs. Many organizations collect results from both online and offline systems. Online data may be sourced from proprietary databases, applications, or data management systems such as Salesforce. Many social sector organizations also rely on Google Sheets, Excel, Airtable, Smartsheet, and the like.

Align your donors, programs, and theory of change

To streamline donor-specific reporting, start by associating your theory of change and impact program with the donor who funded them. Then, revisit the activities, outputs, and outcomes from your theory-of-change model. We recently released a video that goes more in-depth into this model if you need a refresher. Your goal is to ensure that all of these facets align with your data collection strategy, regardless of where your data points originate.

With this approach, you will have defined the data inputs and reporting outputs for each funder and can optimize your strategies appropriately.

Build a simple data warehouse

The biggest challenge of donor-specific reporting is ensuring that all of your program-specific data can be aggregated in a central, accessible location.  

A data warehouse's real value is realized when the integrations with your sources are both well defined and clearly understood. And unless your data is updated infrequently or requires extensive cleanup, synchronization should happen in real time wherever feasible.

"Smart Mapping" between an external data source and a data warehouse simplifies your standard data integration process. Sopact's Impact Cloud, for example, provides intelligent mobile data collection services that streamline the flow of valuable data from the field straight into your impact management system.
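
To make the idea concrete, here is a minimal sketch of field mapping between an external source and a warehouse schema. The mapping, field names, and record are hypothetical; a product like Impact Cloud would manage this configuration for you:

```python
# Illustrative sketch of field mapping between an external data source and a
# simple warehouse schema. All field names are hypothetical.

FIELD_MAP = {
    # source field          -> warehouse column
    "Respondent Name":        "beneficiary_name",
    "Shoes Received (Y/N)":   "received_shoes",
    "Survey Date":            "collected_at",
}

def to_warehouse_row(source_record: dict) -> dict:
    """Rename mapped source fields to warehouse columns; drop the rest."""
    return {
        FIELD_MAP[field]: value
        for field, value in source_record.items()
        if field in FIELD_MAP
    }

raw = {"Respondent Name": "A. Diallo", "Shoes Received (Y/N)": "Y",
       "Survey Date": "2021-03-15", "Internal ID": "x91"}
print(to_warehouse_row(raw))
# {'beneficiary_name': 'A. Diallo', 'received_shoes': 'Y',
#  'collected_at': '2021-03-15'}
```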

Collect Offline Data Electronically

Many beneficiaries, especially in developing countries, live in areas where internet connections may not be reliable. Even within developed countries, nonprofits or social enterprises may be running trainings, health camps, or capacity-building programs where collecting data on paper adds an extra layer of potential error and complexity to their operations. In most cases, collecting data on mobile devices reduces your workload when it comes to integrating data back into your main database. Impact Cloud exemplifies this approach by pushing mobile data collection offline through KoboToolbox: the field or event manager collects data on Android devices, and as soon as an internet connection is established, offline data is automatically synchronized to Impact Cloud.
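
The underlying offline-first pattern is simple: queue submissions locally, then flush them to the central system once a connection is available. The sketch below illustrates that pattern with a placeholder endpoint; it is not the KoboToolbox or Impact Cloud implementation, which handle this for you:

```python
# Minimal sketch of the offline-first sync pattern. The endpoint and payloads
# are placeholders; tools like KoboToolbox implement this on Android devices.
import json
import urllib.error
import urllib.request

LOCAL_QUEUE = "pending_submissions.jsonl"          # one JSON record per line
SYNC_URL = "https://example.org/api/submissions"   # placeholder endpoint

def queue_submission(record: dict) -> None:
    """Store a submission locally while offline."""
    with open(LOCAL_QUEUE, "a") as f:
        f.write(json.dumps(record) + "\n")

def flush_queue() -> int:
    """Push queued submissions once connectivity returns; return count sent."""
    try:
        with open(LOCAL_QUEUE) as f:
            pending = [line for line in f if line.strip()]
    except FileNotFoundError:
        return 0  # nothing queued yet

    sent, remaining = 0, []
    for line in pending:
        request = urllib.request.Request(
            SYNC_URL, data=line.encode(),
            headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(request)
            sent += 1
        except urllib.error.URLError:
            remaining.append(line)  # still offline; retry next time

    with open(LOCAL_QUEUE, "w") as f:
        f.writelines(remaining)
    return sent
```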

Align Impact Metrics with Surveys

Impact metrics are key to your story and to the change you are interested in measuring. As an organization matures, it must move from activity- and output-based reporting to outcome-oriented reporting. Surveys can play an important role in measuring your programs' outcomes, but they should only be designed after your impact metrics have been well defined. In fact, survey design should mirror the principles defined in your theory of change model and impact metrics.

Build a program table that allows you to collect surveys or results directly through an online survey. You can build a program table in less than 30 minutes, and you'll be ready to collect any data necessary for your cause to grow and succeed. A hypothetical alignment check is sketched below.
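
Before a survey goes to the field, it is worth verifying that every defined impact metric is covered by at least one question. A minimal sketch, with hypothetical metrics and questions:

```python
# Hypothetical sketch: verify that every impact metric is covered by at
# least one survey question before fielding the survey.

IMPACT_METRICS = ["perceived_quality_of_life", "school_days_attended"]

SURVEY_QUESTIONS = {
    "Q1": {"text": "How has your quality of life changed since receiving shoes?",
           "metric": "perceived_quality_of_life"},
    "Q2": {"text": "How many days of school did your child attend last month?",
           "metric": "school_days_attended"},
}

covered = {q["metric"] for q in SURVEY_QUESTIONS.values()}
missing = [m for m in IMPACT_METRICS if m not in covered]
if missing:
    raise ValueError(f"No survey question covers: {missing}")
print("Every impact metric is covered by the survey.")
```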

Increase Your Spreadsheet’s IQ

As different programs may use more than one system, most organizations try to bring data into Excel or Google Sheets, carefully reformatting data on a regular basis so that they can calculate the specific results required by funders.

You can streamline data collected from different sources into a single data source through smart field-mapping and database-level formulas that handle complex calculations internally. In the long run, this approach saves a tremendous amount of time and improves the efficiency of data collection and aggregation compared to traditional spreadsheet management. Your solution should allow you to define formulas once in the backend, select the appropriate formulas through a configurator, and apply them to data processing at any time, from the field or in the office.
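
The sketch below, assuming pandas is available and using hypothetical column names and figures, shows the "define a formula once, apply it to merged data" idea:

```python
# Sketch of a backend formula defined once and applied to data merged from
# multiple sources. Column names and figures are hypothetical; requires pandas.
import pandas as pd

# Data from two sources, already field-mapped to the same columns.
field_app = pd.DataFrame({"site": ["A", "B"], "trained": [40, 25], "target": [50, 25]})
spreadsheet = pd.DataFrame({"site": ["C"], "trained": [30], "target": [40]})

# One formula, defined a single time in the backend...
def completion_rate(df: pd.DataFrame) -> pd.Series:
    return (df["trained"] / df["target"] * 100).round(1)

# ...applied to the merged data whenever a report is needed.
merged = pd.concat([field_app, spreadsheet], ignore_index=True)
merged["completion_rate_pct"] = completion_rate(merged)
print(merged)
```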

This allows your focus to shift to the quality of your data collection processes, while data aggregation and reporting become automated. Impact Cloud handles this entire process for you, so you can spend more time making the greatest possible impact.

Audit your Salesforce or pre-existing data collection systems

You may already be collecting data in systems such as Salesforce, case management tools, or program databases. Still, building optimized reports for your organization and funders through these tools may become more challenging than it's worth.

It may not be easy to combine data from multiple tables. Report building for your specific needs can become overly complex. Reporting for all agreed-upon metrics may come from multiple external sources that all need to be integrated.

Your monthly, quarterly, or annual reporting frequency may not line up with existing databases or tools, requiring manual adjustments.

While systems such as Salesforce can be robust, it may be difficult to summarize results or create comprehensive reports specific to your cause. Funders often require qualitative reporting that is not well supported by Salesforce-like systems, which can increase reporting time. If you've worked with these systems when reporting before, take some time to isolate the choke points for both your data and your workflow, and determine whether you're better off building or finding a dedicated solution to save you time and headaches when building reports.

IMPLEMENTATION

MONITORING AND EVALUATION REPORTING

When people in our field talk about creating data visualizations to discover trends and make decisions, they're typically thinking about collecting their data in a spreadsheet and feeding it into common visualization tools like Power BI or Tableau.

While this approach is definitely a great place to begin, you WILL run into trouble when your goal is to visualize and continuously learn from data collected periodically. Most of these common tools are not designed to work flexibly with the data demands of an impact-driven organization, where changes in data types, time periods, and more can require significant modifications to your original setup to derive useful insights. The result is much wasted time and/or a significant amount of technical expertise required to recreate and maintain your visualizations.

A properly set up continuous learning and improvement system should be flexible and robust enough to show you the change in results and emerging trends even as your data collection strategies evolve.

In the previous section, we covered some of the best approaches to streamlining your data collection and monitoring strategies. The next step is to learn how to get the most out of your data when working to improve your efforts across the board.

Without further ado, let's jump into four ways to maximize your approach to data visualization and reporting so that you can focus on your program's cycle of continuous learning and improvement.

Merge your data collection and visualization tools into one

Social enterprises and nonprofits that work directly towards making their stakeholders' lives better need tightly integrated data collection and data analysis tools to implement a continuous learning and monitoring system.

Often, setting up a system using off-the-shelf products requires technical integration expertise, which may not be readily available at a social enterprise or nonprofit, not to mention the time it takes to put it all together.

Your solution also needs to be preemptively built to handle changes in data collection strategies. Otherwise, you will have to retrace your steps to rebuild your visualizations and reports when incorporating new information. 

This is obviously complex, time-consuming, and can seriously hamper your organization's ability to learn from the data and change intervention methodologies. 

As a baseline, you can create a systematic workflow between your data collection and visualization tools that your entire organization follows and lives by. 

However, your best bet is to look into a purpose-built tool like Impact Cloud, which does all of the integration and legwork for you.

Streamline your Donor reports

Nonprofits often have many different funding sources, including individual and corporate donors, government agencies, and grantmakers. Each funder is likely to have different goals and requirements for impact reporting, which creates significant complexity for the organization's overall data strategy.

Unfortunately, most organizations do not have the technical capacity or time to tailor data collection and reporting tools to each funder's unique requirements.

Your organization's solution to this reporting overload is simple: alignment. While you may have many donors to report to, odds are there is substantial overlap between your desired impact and what each donor is looking for.

By working backward and aligning your Theory of change with your program and the donors who fund it, you will streamline your reports down to the data points and information that matters across all of the organizations involved. 

This approach allows you to ensure that the activities, outputs, and outcomes from your theory of change model align with your data collection, regardless of where your data originates or where it is going.

This will save you significant amounts of time and headache in the long run and result in higher quality reports to your funders.

Build an effective reporting framework

For organizations that are collecting data from multiple sources such as different programs, locations, or systems, the process of stitching everything together in a way that aligns with aggregated reporting requirements and ultimately with global goals can be a monumental task.

Your organization will need to systematically develop its own effective reporting framework that maps global or SDG goals to your organization's own metrics.
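
A minimal sketch of such a mapping, with hypothetical internal metrics (the SDG target numbers are real, but the pairings are illustrative):

```python
# Hypothetical sketch: map internal metrics to the SDG targets they support,
# so results can roll up into SDG-aligned reports.

METRIC_TO_SDG = {
    "teachers_trained":            "SDG 4.c (supply of qualified teachers)",
    "households_with_clean_water": "SDG 6.1 (safe and affordable drinking water)",
    "avg_household_income_change": "SDG 1.2 (reduce poverty)",
}

results = {"teachers_trained": 180, "households_with_clean_water": 950}

for metric, value in results.items():
    sdg = METRIC_TO_SDG.get(metric, "unmapped")
    print(f"{metric}: {value} -> {sdg}")
```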

Build specific reports beyond visualizations

While visualization is an important component in conveying results to a stakeholder, funders are typically looking for more detailed insight. Your organization's reporting system should be designed to provide detailed reports efficiently, such as:

  • Comparisons of results between similar grantees or investees
  • Geographical results comparisons
  • Overall portfolio alignment with the SDGs
  • Progress toward internal impact or SDG goals
  • Overall portfolio comparisons
  • Comparisons between target, forecast, and actual values
  • An impact scorecard
  • Impact Management Project-based reporting
  • Narrative-oriented reporting, such as annual reports

Whatever your stakeholder reporting requirements, the system must provide flexible, accurate, and comprehensive insights that inform effective learning and decisions.

 By ensuring that you can handle requests like these from the top of your data collection strategy down to your report builder, you will never be caught in a situation where you can’t access the specific information a funder needs to help keep your programs running and thriving.

We understand that organizations just like yours face challenges with impact management daily. So we've developed a platform that streamlines the process. Impact Cloud is a comprehensive solution that eliminates the need to integrate many separate tools across impact strategy, data collection, program data management, data analysis, dashboarding, reporting, and more.

With this approach, you will:

  • Save months' worth of reporting time
  • Focus on high-quality data collection over number-crunching
  • Align your reporting with each funder's metrics and reporting frequency

 

CONCLUSION

Monitoring and evaluation, implemented properly, can bring better outcomes for stakeholders and a better return on investment. An end-to-end monitoring and evaluation platform can help deliver the results outlined in this article.

For more details on how to select the right monitoring and evaluation tool, please read more here.