
Ultimate Guide to Monitoring and Evaluation Tools

Improve Social Impact With Monitoring & Evaluation (MEL)

What is Monitoring & Evaluation (M&E)?


Monitoring and evaluation is a combination of data collection and analysis (monitoring) and assessing to what extent a program or intervention has, or has not, met its objectives (evaluation). 

What is monitoring?

Definition of Monitoring



Monitoring is periodic and continuous, conducted after program initiation and throughout the life of the program or intervention. The data acquired is primarily input- and output-focused and is generally used on an ongoing basis to determine the efficiency of implementation.

For example, an NGO delivering training for school teachers might track monthly the number of sites visited, trainings delivered, the number of teachers trained, etc.


Key questions to consider for monitoring strategy include:

  • What key metrics can give us an idea of the state of implementation?
  • Do we have lean data collection and analysis processes?
  • How efficiently are we implementing our program(s)?
  • Based on the data acquired, do we need to make any changes to our program(s)?

A monitoring plan usually focuses on the processes occurring during the implementation of a program. These can include tracking the following during defined periods of time:

  • When programs were implemented
  • The location or region in which programs were delivered
  • Which departments or teams delivered activities
  • How often certain activities occurred
  • Number of people reached through a program's activities
  • Number of products delivered (or number of hours of a service)
  • Costs of program implementation
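As a rough sketch of what a lean monitoring tally might look like in practice, the snippet below aggregates monthly outputs for the hypothetical teacher-training program described above. All records and field names here are invented for illustration, not a prescribed schema:

```python
from collections import defaultdict

# Illustrative activity log for a teacher-training program.
# Each record is one training delivered at one site in a given month.
activity_log = [
    {"month": "2023-01", "site": "School A", "teachers_trained": 12, "cost": 400},
    {"month": "2023-01", "site": "School B", "teachers_trained": 9,  "cost": 350},
    {"month": "2023-02", "site": "School A", "teachers_trained": 15, "cost": 420},
]

def monthly_outputs(log):
    """Aggregate simple output indicators per month."""
    summary = defaultdict(lambda: {"sites": set(), "trainings": 0,
                                   "teachers_trained": 0, "cost": 0})
    for rec in log:
        m = summary[rec["month"]]
        m["sites"].add(rec["site"])          # distinct sites visited
        m["trainings"] += 1                  # trainings delivered
        m["teachers_trained"] += rec["teachers_trained"]
        m["cost"] += rec["cost"]             # cost of implementation
    # Convert the site set to a count for reporting.
    return {month: {**m, "sites": len(m["sites"])} for month, m in summary.items()}

print(monthly_outputs(activity_log))
```

Even a tally this simple answers the monitoring questions above: how often activities occurred, where, how many people were reached, and at what cost.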

Read More: How to do Effective Storytelling from Impact Learnings



What is evaluation?

Definition of Evaluation

A program evaluation focuses on the performance of the intervention and is principally used to determine whether beneficiaries really have benefited due to those activities.

It generally looks at outcomes, assessing whether a change occurred between the outset and termination of an intervention (or at least between two specific time periods). Ideally, that change should be able to be attributed to the activities undertaken.

Key questions that an evaluation considers:

  • Did our activities make a measurable difference in our target beneficiary group(s)?
  • How much can the changes observed be attributed to our activities?
  • What contributed to our success (or failure)?
  • Can we scale observed changes? Or replicate in other contexts?
  • Did we achieve impacts in a cost-effective way?
  • Have any unexpected results occurred?

At the outset of a program it is important to acquire baseline data, which will be used to compare progress at every evaluation interval and at the end of the program period. When thinking about how to measure for outcomes (changes that have occurred), consider the following key elements:

  • Understand how your inputs, outputs, activities, etc. generate change (see Theory of Change)
  • Design your evaluation plan (i.e. research plan) before launching a program or intervention
  • Use outcomes that are relevant for your beneficiaries
  • Use data collection methods that fit the needs of beneficiaries and the skills of your employees
  • Incentivize beneficiaries to provide you with data at key intervals
  • Ensure you have adequate data management and analysis tools (and people who know how to use them)
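The baseline-to-endline comparison described above can be sketched in a few lines. The indicator names and values here are hypothetical:

```python
# Baseline and endline values for hypothetical outcome indicators,
# e.g. average teacher assessment scores before and after training.
baseline = {"avg_score": 54.0, "attendance_rate": 0.72}
endline  = {"avg_score": 68.5, "attendance_rate": 0.81}

def outcome_change(before, after):
    """Absolute and relative change for each indicator present in both periods."""
    changes = {}
    for key in before.keys() & after.keys():
        delta = after[key] - before[key]
        changes[key] = {"absolute": round(delta, 2),
                        "relative_pct": round(100 * delta / before[key], 1)}
    return changes

print(outcome_change(baseline, endline))
```

Note that a change computed this way still says nothing about attribution; it only establishes that a change occurred between the two measurement points.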


Download Now: Actionable Impact Management Guides

What is Monitoring, Evaluation & Learning (MEL)?

An added step in how to monitor and evaluate a project

Monitoring, evaluation, and learning simply means taking the information and insights acquired in the first two steps (M&E) and using them to inform key strategic decisions. These informed decisions could be at a management level, a program design level, or anywhere in between.

The point is to put those data and the results of analyses to use in order to build accountability into an organization's activities, and into the M&E process itself.

Learning can take place during Monitoring, Evaluation, or both. It recognizes that programs need to be able to respond to the dynamic nature of impact-seeking activities. By applying data-driven insights at various intervals an organization can better achieve the outcomes it seeks for its beneficiaries.

The Importance of Monitoring and Evaluation

Benefits of Monitoring and Evaluation

Monitoring and evaluation, when implemented effectively, reaps benefits for stakeholders up and down the spectrum of activities carried out by an organization. In general, it provides guidance for strategic decision-making both during and after program execution.

Benefits for different stakeholders include:


Beneficiaries

Follow-up processes (data collection) can signal to them that the organization actually cares about results, and about making outcomes better

Data can be used to improve efficiency of implementation as well as implementation design (to improve outcomes for beneficiaries)


Employees

M&E can generate more buy-in and trust in the organization's commitment to its mission if there is a clear effort not only to assess progress, but to use that assessment to get better at delivering impact

For employees in contact with beneficiaries (e.g. "on the ground"), conducting evaluation assessments can also generate more trust between those employees and the beneficiary community.

New, often unforeseen, insights can emerge, helping employees discover new, more effective ways to deliver programs and create impact


Executive management

Determining changes to strategic direction becomes much more data-driven with the ongoing data and analyses from M&E processes. Adaption ideally becomes more agile.

With relevant and comprehensive data (both process-related and impact-related) executives can build much more persuasive arguments.



Funders

Money for impact flows to where the data is, and good M&E implementation can open up that flow. It breeds impact credibility and, of course, a more transparent understanding of how much impact can be generated per investment dollar.


Read More: The Catch 22 of Social Impact Measurement 

Creating a Monitoring and Evaluation Plan

Quantitative Tools of Evaluation vs. Qualitative



Putting quantitative (or qualitative) tools to work means defining the right indicators to measure. An indicator is a metric used to measure some aspect of a program. In the planning stages, the indicators that will be used throughout the monitoring and evaluation processes should be defined. This enables organizations to truly measure the extent to which what they think or want to happen actually happens.

Indicators can be both quantitative and qualitative, depending on what needs to be measured and in what ways.


Quantitative M&E Indicators

Primarily output-focused, they help organizations determine if activities are taking place, when, and to what extent.

By definition, numbers are used to communicate quantitative measures (percentages, ratios, $ sums, etc.)


Qualitative M&E Indicators

Involve subjective terms 

Often outcome-focused, they can help organizations determine if a change has occurred by gathering perceptions from beneficiaries. 

Data accuracy can often be difficult to assess given the subjective nature of collecting judgements about change (see example below)


Examples of M&E indicators

Using the example of a social enterprise that employs a 1-for-1 model (you buy a pair of shoes, we donate a pair of shoes to a person in need) we can examine some potential indicators for their donation program over a period of one year.



Quantitative indicators:

  • Number of shoes donated
  • Number of lives affected
  • Amount of money saved in the beneficiary group (not having to buy shoes)

Qualitative indicators:

  • Perception of change in quality of life after receiving shoes (survey beneficiaries)
  • Types of opportunities generated by receiving shoes (defined by beneficiaries)
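To make the distinction concrete, here is one possible way to compute the quantitative indicators above from donation records, with a simple tally of qualitative survey responses alongside. All data and field names are invented for illustration:

```python
from collections import Counter

# Hypothetical donation records and beneficiary survey responses.
donations = [
    {"pairs": 120, "recipients": 110, "avg_local_price": 8.0},
    {"pairs": 200, "recipients": 185, "avg_local_price": 7.5},
]
survey_responses = ["improved", "improved", "no change", "improved"]

# Quantitative, output-focused indicators.
shoes_donated = sum(d["pairs"] for d in donations)
lives_affected = sum(d["recipients"] for d in donations)
money_saved = sum(d["pairs"] * d["avg_local_price"] for d in donations)

# Qualitative, outcome-focused indicator: tallied perceptions of change
# in quality of life, as reported by beneficiaries.
perceived_change = Counter(survey_responses)

print(shoes_donated, lives_affected, money_saved)   # 320 295 2460.0
print(perceived_change.most_common())
```

The quantitative figures are easy to compute and compare over time; the survey tally is only a crude summary of the qualitative data, which is why the subjective responses behind it still need careful interpretation.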

Using a Combination of Indicators to Determine Attribution

As we can see, a pure count of shoes donated doesn't tell us what impact has been generated; it only implies it. By also collecting qualitative, outcomes-focused data the organization gets a better idea of the impacts of those shoes for people who previously did not have them. They could also measure income level before and after the shoes (for adults), or number of school days attended (for children).

The best indicators also help organizations draw a clear attribution between the intervention (shoes given) and the impact(s) generated. In this example, many other variables could contribute to an increase in income level or school days attended. Gathering qualitative data, specifically asking to what extent the shoes had to do with any observed changes in those areas, would help increase the level of attribution the organization might report.

Read More: Foundations of Social Impact Assessment 

Types of evaluation

List of Evaluation Methods and Evaluation Design

Evaluation Models Depend on Organizational Context

Determining an evaluation method requires first defining the unique objectives of the organization or specific program. In general, those objectives are determined by asking: What is the impact being sought? What is the purpose of the evaluation?

In addition to those objectives, other variables include the context of the target beneficiary group and the organizational resources available (money, skills, tools, etc.). Of course, focus should be given to achieving the overarching objectives.

The following, while not an exhaustive list, are some examples of the main evaluation approaches which accomplish unique organizational objectives. 

Formative Evaluation

Definition of Formative Evaluation

A formative evaluation is most often conducted before a program begins, to examine its feasibility and determine its relevance to the strategic objectives of the overall organization. It can also occur during program implementation, especially if there is a need to modify the program, at which point the formative evaluation can be used to assess the feasibility of a new design.

A formative evaluation can improve a program's probability of success because it encourages practitioners to confirm viability and detect potential problem areas at the outset, while also promoting accountability during implementation.

Process Evaluation

Definition of Process Evaluation

A process evaluation is carried out during the implementation phase of a program. As the name suggests, it focuses on processes being carried out -- inputs, activities, outputs, etc. It identifies any issues with the efficiency of implementation.

For example, it can establish whether targets were not met because of lack of human resources (skills) or appropriate tools, or unforeseen contextual obstacles (e.g. beneficiaries lacked time to engage with program) which ultimately affected program outcomes.

If there don’t seem to be process-related issues, the evaluation can help illuminate issues with the change model itself, encouraging a needed rethinking of how to affect change for the target group.

The use of periodic assessments during implementation is one of the most important process evaluation components. This allows organizations to re-design if needed during execution to increase reach, re-allocate resources, etc.

Outcome Evaluation

Definition of Outcome Evaluation

An outcome evaluation aims to determine whether overall program objectives have been met. In that process, practitioners also identify what might have spurred or limited those changes. Finally, it helps shed light on unexpected changes in the target beneficiary population at the end of the intervention or at the point of evaluation.

Given that scope, an outcome evaluation generally looks at a program's results over a longer period of time (although this also depends on how quickly or slowly the change is expected to occur).

It can also help pinpoint which areas of a program were more or less effective than others. Most importantly, with an outcome evaluation an organization determines whether there was a change in the lives of beneficiaries.

For this reason, it can be important to use qualitative measures and participatory methods to extract from beneficiaries their perception of any observed changes.

Impact Evaluation

Definition of Impact Evaluation

An impact evaluation gets to the heart of a program’s true effectiveness by determining attribution, or to what extent the changes observed (outcomes) can be causally connected to the activities carried out during the program period.

Timing is an important element of any impact evaluation. Conducted too early, it will find that the intervention has not had time to create observable change. Conducted too late, its insights may arrive too late to inform decision-making.

Ultimately, an impact evaluation guides organizations to not only understand how impacts were generated by activities but also (based on the insights) understand what tweaks could be made to maximize effectiveness of an intervention in generating the desired outcomes.

Monitoring & Evaluation Tools

List of Monitoring and Evaluation Tools

While every evaluation process is going to be unique to the organization, existing tools can help make that process more efficient and ultimately more successful. Below is a list of tools that help organizations manage their M&E processes.

Here is a detailed analysis of M&E software by different use cases.

Sopact Impact Cloud ®

An end-to-end management solution for M&E needs, Impact Cloud is a cloud-based platform that enables practitioners to house and manage data, conduct analyses, create reports, and collaborate across teams. It is designed to facilitate impact data management for organizations of all types and sizes.

  • Synergy Indicata
    While not as complete a solution, Indicata gives impact practitioners the tools necessary to execute data-driven processes in the M&E journey. Like Impact Cloud, Indicata can help with data collection, analytics, and visualization (through its dashboard features).
  • Delta
    Directed towards the development sector, Delta offers practitioners tools for the planning, tracking, and measuring stages of M&E. They specifically highlight NGOs, international development agencies, and universities as some of their key target segments.
  • LogAlto
    In addition to offering a multilingual platform with the basic necessities for M&E (defining indicators, etc.), they offer a mobile application integrated with the platform to facilitate data collection.
  • DevResults
    DevResults is perhaps the most robust M&E platform designed around the results framework. The challenge with this kind of platform is the long implementation period and cost.

What the heck is wrong with Monitoring and Evaluation Software?

Monitoring and Evaluation Framework

All of the legacy M&E software platforms above are based on the results framework. During the mid-1990s, USAID introduced a new approach to monitoring programs throughout the international development agencies, known as Performance Monitoring Plans (PMP). Central to the PMP is the results framework, a planning, communications, and management tool.

The results framework includes:

  • Strategic objectives
  • All intermediate results

Limitations of Results Framework

Key Limitations:

  • Cause-and-effect linkages: The framework describes the ways that program interventions contribute to results through cause-and-effect linkages, but it cannot robustly trace influence or attribution
  • Many interventions, such as those requiring policy changes, are not easy to quantify
  • The framework creates unnecessary fear of not reaching a specific outcome, causing program managers to limit themselves to a narrow selection of outputs


Limitations of Legacy M&E Software

Much M&E software suffers from:

  • A rigid framework that does not allow organizations to define the social impact context required for each unique initiative
  • Long and tedious implementation times
  • Significantly high cost (see example)
  • Requirements that change over the course of a long implementation, creating tremendous change-control risk
  • Competition from the many online SaaS products that now handle project management easily, at a fraction of the cost


Monitoring and Evaluation at International NGOs

Many international NGOs differ from traditional international development agencies. Unlike development agencies, an INGO is typically headquartered in a Western country such as the US, Canada, or a European country, and often acts as a funder to program organizations in other countries. These country offices operate as decentralized units, often with different types of processes. The challenge with that approach is that headquarters often needs much faster visibility into data:

  • Financial
  • Donor
  • Operational
  • Social Impact

Traditional M&E software does not provide anything like data warehouse support for country-level offices. These offices often collect data on paper, through offline mobile data collection, or in local IT systems, and aligning data requirements between different offices becomes challenging.
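To illustrate the alignment challenge, the sketch below maps records from two hypothetical country offices, each using its own field names, onto one shared schema. The offices, field names, and mappings are all invented for illustration:

```python
# Two country offices report the same indicator with different schemas.
office_a_rows = [{"num_beneficiaries": 40, "period": "2023-Q1"}]
office_b_rows = [{"people_reached": 55, "quarter": "2023-Q1"}]

# Per-office mapping from local field names to a shared schema.
FIELD_MAPS = {
    "office_a": {"num_beneficiaries": "beneficiaries", "period": "period"},
    "office_b": {"people_reached": "beneficiaries", "quarter": "period"},
}

def normalize(office, rows):
    """Rename an office's local fields to the shared schema."""
    mapping = FIELD_MAPS[office]
    return [{mapping[k]: v for k, v in row.items()} for row in rows]

# Once normalized, records from both offices can be aggregated directly.
combined = normalize("office_a", office_a_rows) + normalize("office_b", office_b_rows)
total = sum(row["beneficiaries"] for row in combined)
print(total)  # 95
```

The hard part in practice is not the renaming itself but agreeing on the shared schema, which is exactly the alignment work that headquarters and country offices struggle with.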


Monitoring and Evaluation at International Development Finance

Mid- and large-size development finance institutions often face the inherent limitations of results-framework-based design, which makes the process of impact data aggregation time consuming.


SoPact Impact Cloud® changes the game

Traditional M&E software and SoPact Impact Cloud® share similar goals. However, their approaches are completely different.

SoPact Impact Cloud is:

  • A flexible M&E platform that lets each organization build an impact framework unique to its needs
    • Traditional M&E, on the other hand, is tightly coupled with the results framework
  • A way to start your impact journey with a theory of change
    • A logframe-based approach focuses on activity, not outcome
  • A time-saver through global impact indicators
    • Traditional M&E software offers:
      • Only custom indicators
      • Limited reporting formats
      • Limited qualitative and quantitative alignment
  • Aligned with the sustainable development goals (SDGs) and global indicators
    • Traditional M&E software has no concept or scope of SDG alignment
  • A simplifier of top-level portfolio data aggregation. Impact Cloud focuses on easier alignment with downstream, decentralized organizations; its flexible approach allows the rapid alignment necessary for a decentralized organizational structure.
    • Traditional M&E software is based on a rigid reporting framework
  • An automatic integrator of program data. Its unique data warehouse based approach simplifies data aggregation from different sources such as paper, online surveys, offline surveys, and custom databases.
    • Traditional M&E focuses on results aggregation and provides no support for data aggregation at the downstream level. This is a major flaw, as downstream impact organizations often cannot learn about social impact through a lean data approach.
  • Integrated with powerful, state-of-the-art qualitative and quantitative analytics


Project Evaluation Tools & Techniques for Data Collection

Quantitative Impact Data Collection Approaches

Every M&E process should have a strong quantitative data foundation. Such data helps set baseline comparisons, gauge program relevance, and determine efficiency (or inefficiencies). It is usually less cost-intensive to acquire quantitative data than qualitative data.

Here are some examples of quantitative data tools, approaches, and resources:

  • Survey instruments/questionnaires

    • Using tools like SurveyMonkey or even simpler SMS-based applications can be a relatively quick, cost-effective approach to acquiring beneficiary data.
  • Existing articles, media, etc.

    • It is not always necessary to do the work yourself -- secondary sources such as research articles in magazines, academic papers, books, etc. may already have data points relevant to an organization’s context. Make sure the data is current and aligned with the impact objectives of the program.
  • Similar, partner or competitor organizations’ reports

    • Are there other organizations working on similar issues? Most likely, there are. And it is likely that they have done some impact reporting, which could include relevant data.
  • Government

    • State-run websites include statistics and other reporting that could provide important contextual data for a program’s impact theme or region. However, just because it is a governmental source doesn’t mean the data is automatically credible. Organizations must make their own judgements about the quality of such a source.

Qualitative Impact Data Approaches

To acquire data that is more nuanced, and perhaps with deeper insights, it can be necessary to take a qualitative approach. This tends to be more time- and resource-intensive, as the approaches are usually highly participatory, but it is important to make such an investment in most programs in order to complement the quantitative data that has been acquired.

Here are some examples of qualitative data tools, approaches, and resources:

  • Focus groups/interviews

    • Getting beneficiaries together and having face-to-face conversations (either 1-on-1 or in groups) can be a great way to understand their perception of the impacts generated by the organization’s activities.
  • Field immersion/observations

    • Qualitative observations of beneficiary context can not only help understand the extent of impacts generated, but can also help in uncovering ways to improve program/intervention design to be more context-relevant.
  • Use photo/video

    • Encourage beneficiaries to document their journey (or collaborate with them to do this) using photographic media as the medium.
  • Journaling

    • Encourage beneficiaries to reflect in written form periodically (daily, weekly, etc.) about their experience. Organizations could also send periodic long-form questions to capture such data.

Read More: One Must-Have Tool for Lean Data Social Impact Analysis

Limitations of Current Project Monitoring Information Systems

Monitoring & Evaluation Software Is Not Social Impact Measurement

Monitoring & evaluation systems were designed to manage projects for results management. However, results management should not be confused with social impact measurement.

Common challenges of most M&E Suites include:

  • Expensive to acquire and use
  • They take a long time to deploy
  • High execution risk (e.g. because of a lack of organizational skills)
  • Data silo-ing (importing, exporting, and sharing data across different programs can lead to data corruption, data loss, and/or reduction in data reliability).  

Future State of Monitoring & Evaluation Software

Making the Monitoring and Evaluation Successful

Most M&E systems were designed 10-20 years ago. Organizations that continue to use these outdated systems will face mounting challenges because of their cumbersome nature and their lack of nimble functionality and adaptability.

Key areas where legacy systems fall short include:

Impact Learning

It is one thing to have data to analyze; it is another to know how to analyze it and extract important impact insights from those data. Those insights can be program design-related, outcomes-related, etc. M&E tools need built-in functionality to streamline that insight generation.


Data Warehouse & Program Data Aggregation

In the information age we have the capacity to acquire so much more data, more quickly, more often, and from more sources. Managing those data across different software programs and between teams is one of the biggest headaches faced by organizations across sectors.

What is needed are end-to-end technological solutions, empowering practitioners to house and manage data all in one place (accessible to on-the-ground workers as well as to directors and funders).


Social Impact & Outcome Management

Managing impacts isn't just about determining what was and wasn't generated by a program. It is about providing the necessary tools (and ensuring proper adoption and implementation) so that impact strategies can be created, executed, tracked, measured, analyzed, improved, and reported on. Moving toward cloud-based applications that do all of those things in one place allows practitioners to manage impact end to end, improving both efficiency and, ultimately, outcomes.


8 Best Practices in Monitoring, Evaluation and Learning




Monitoring and Evaluation Tools - Data Flow

Key Challenges

Distributed, Decentralized and Hierarchical

Many international development organizations employ a distributed, decentralized and hierarchical approach.  This creates many challenges for stakeholders, both internal (teams), and external (partners, funders, etc.).


  • Inconsistent reporting to the top
    • Funders, for example, can find it difficult to get a clear, transparent picture of the impacts being generated, how they were generated, and how such insights were acquired. This is made even more difficult if different standards are used at each level of data collection, reporting, etc.
  • Inconsistent data collection processes at different locations
    • If team members are "plugging in" to beneficiaries in different ways (using different tools, or different data management software) this can create huge reliability issues within data sets. 
  • Long reporting cycle
    • Often because of resource limitations, organizations may not employ M&E processes often enough (e.g. not executing enough monitoring during implementation). This can reduce the visibility of impacts generated, while also making it nearly impossible to adapt in an agile manner to dynamic contextual factors.

Social Impact Evaluation Solution

Monitoring and Evaluation Resources

Impact Cloud® for Monitoring and Evaluation

SoPact Monitoring & Evaluation

Experts in Impact at every level of M&E

Sopact works with clients around the world, large and small, providing hands-on guidance and the software tools necessary to make M&E implementation efficient and effective, regardless of organizational skill level.

Here are just a few of the resources you can check out now:


External Monitoring & Evaluation Resources

Explore other resources on the web

Monitoring and evaluation theories, tools, methodologies and more have been well-dissected by practitioners across the globe. Here is a sampling of some of the best resources to get you started:
