
5 Worst Mistakes to Avoid During Monitoring and Evaluation Selection

Posted by Unmesh Sheth on Jun 2, 2019 9:18:08 PM

Monitoring, Evaluation, and Learning, commonly referred to as ME&L, has been around for many years. It is a process for improving performance and achieving results, and its main goal is to ensure that a program adheres to its theory of change and learns from evaluation.

Development organizations often use the ME&L approach to collect data, aggregate results, and assess the performance of projects, institutions, and programs set up by governments, international development organizations, and international NGOs.

Are you one of these organizations starting to assess Monitoring, Evaluation & Learning systems? This article will save you tremendous time and resources by pointing out what to avoid.

1. Many Features Increase Implementation Risk

Organizations needing M&E software often start by hiring a third-party M&E software evaluator who can help them define the RFP for M&E software selection. The first sin is starting this process without key goals or requirements. In the absence of such clarity, the M&E software evaluator (who may or may not have prior experience implementing M&E software) will start with a commonly known results framework and look for software that meets the defined criteria, resulting in a long list of features that may not be core requirements for the organization.

During the mid-1990s, USAID introduced a new approach to monitoring the development programs it funded in order to track results. While the intent was to count activities toward the achievement of results, some vendors have taken this intent to an extreme, creating giant, Enterprise Resource Planning (ERP)-like systems. These bulky systems increase implementation risk as they pack in one functionality after another, such as:

  • Project management
  • Task management
  • Complex workflow
  • Complex data management
  • Logframe-based reporting
  • Indicator management
  • Results aggregation
  • Results analytics and decision making
  • Reporting

While this looks very comprehensive, implementing such a system takes more than a year, requires significant investment, and considerably increases execution risk.

In reality, modern cloud-based, software-as-a-service (SaaS) applications have changed the paradigm for how advanced software should be designed and deployed. Most modern software architecture focuses on building best-of-breed tools for specific functions. This approach gives you best-in-class features, faster implementation, and a configuration-based setup that reduces implementation risk. Online project and task management applications, often available at a lower price and with a significantly better user experience, result in higher adoption within a team. The only caveat with this approach is to ensure that these cloud applications can be integrated easily and within a short time.

2. Results Framework = Tight Coupling

The results framework describes how program interventions contribute to ultimate results through cause-and-effect linkages, i.e., by associating the lower-level results that contribute to outputs and outcomes with the overall goal. However, these frameworks often cannot trace the factors that influence the results.

Often the desired results, such as a program's outcomes, are not easy to measure. Most Monitoring & Evaluation software has failed to provide mechanisms to easily aggregate survey data collected at baseline, midline, and endline, and to represent the results visually so that change is easy to understand. While this is not a limitation of the framework, it is a flaw in M&E software.
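
As a rough sketch of what such aggregation could look like, the Python snippet below computes how an outcome indicator changes across baseline, midline, and endline. It assumes each survey wave has been exported to a CSV file with a hypothetical indicator_value column; it is an illustration, not any vendor's implementation.

```python
# A minimal sketch: aggregate one outcome indicator across survey waves.
# File names and the "indicator_value" column are hypothetical.
import pandas as pd

waves = {
    "baseline": "baseline_survey.csv",
    "midline": "midline_survey.csv",
    "endline": "endline_survey.csv",
}

rows = []
for wave, path in waves.items():
    responses = pd.read_csv(path)
    rows.append({
        "wave": wave,
        "respondents": len(responses),
        "mean_indicator": responses["indicator_value"].mean(),
    })

summary = pd.DataFrame(rows)
# Show change relative to baseline so progress toward the outcome is visible.
baseline_value = summary.loc[summary["wave"] == "baseline", "mean_indicator"].iloc[0]
summary["change_vs_baseline"] = summary["mean_indicator"] - baseline_value
print(summary)
```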

One major limitation of the framework is that program managers who fear they will not reach the objective may feel they will be penalized for reporting negative outcomes.

Modern M&E software like Impact Cloud® has had the opportunity to learn from these hard lessons.

The new M&E software focuses on a theory-of-change approach with a flexible impact indicator selection process that first aligns the language of outcomes and outputs among key impact ecosystem players. At the project level, modern, easy-to-use packaging allows any project organization to start the process in a single day rather than in months or even years. A Lean Data approach lets projects collect actionable impact data and empowers the organizations closest to stakeholders to learn from data and results in a short period of time.


3. Data Collection is NOT Monitoring and Evaluation

There are hundreds of mobile offline data collection tool vendors offering their software as monitoring and evaluation systems. While there are some variations between them, overall they do one thing well: collect data from stakeholders, often in remote areas where they may not be easily reachable.

 

The Real Challenge in Data Aggregation

While data collection is an important function, in reality it is only one piece of the larger monitoring and evaluation puzzle. A complete monitoring and evaluation process requires the organization to:

  • Tie the funders' impact framework together with those of other supporting organizations
  • Aggregate results from many online and offline data collection sources, since program organizations are often decentralized and have different data aggregation requirements. For example, in the case of an international eye care initiative, the program managers might need to collect:
    • Data from eye camps
    • Individual door-to-door surveys
    • Hospital results from healthcare management systems

So, the real challenge is to recognize that most organizations collect data from many different sources, both offline (paper or mobile) and online (survey- or system-based data collection). The question is how to aggregate results from those different sources on a regular basis. These organizations need a flexible architecture that allows various offices to use a simplified, adaptable data warehouse that can aggregate results and deliver outcomes in near real time.
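
To make the aggregation challenge concrete, here is a minimal Python sketch that combines the hypothetical eye care sources described above into a single roll-up. The file paths and column names are invented for the example and are not any vendor's schema; a real deployment would map each source's schema into a common shape.

```python
# A minimal sketch, assuming each source exports "region" and
# "patients_screened" columns; real sources would need schema mapping.
import pandas as pd

sources = [
    {"name": "eye_camps", "path": "eye_camp_records.csv"},        # mobile/offline export
    {"name": "door_to_door", "path": "household_survey.csv"},     # digitized paper survey
    {"name": "hospital_system", "path": "hospital_extract.csv"},  # system-based extract
]

frames = []
for source in sources:
    data = pd.read_csv(source["path"])
    # Normalize each source into a common shape before aggregating.
    frames.append(pd.DataFrame({
        "source": source["name"],
        "region": data["region"],
        "patients_screened": data["patients_screened"],
    }))

combined = pd.concat(frames, ignore_index=True)

# Roll up by region and by source so program managers can compare coverage.
print(combined.groupby("region")["patients_screened"].sum())
print(combined.groupby("source")["patients_screened"].sum())
```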

 

 

Figure: Data sources

Unfortunately, both data collection tools and most Monitoring & Evaluation systems are designed around a limited or rigid data collection approach and do not solve the data aggregation challenge.

Modern M&E platforms instead empower downstream organizations with a better, easier-to-use data aggregation system so that they can reach outcome results faster with a Lean Data approach. These systems make impact data measurement simple with the following approach:

  • Theory of Change-driven
  • Lean Data Measurement
  • Data Aggregation and Data Warehouse
  • Outcome Management
  • Lean Data Analytics
  • Impact Scorecard 
  • Social Return on Investment

4. Monitoring & Evaluation is NOT about Activity Reporting

Traditional M&E systems focus excessively on activity reporting. This narrow focus has kept the development sector from putting monitoring & evaluation to its real uses:

  • Empower stakeholders to voice their experience so that the program, the service, and the provider have a short feedback loop.
  • Reduce the distance between funders, top-level organizations, and downstream organizations. Even the best M&E software is, at best, good at collecting project-level reports. In reality, most organizations are distributed and operationally decentralized, which makes it very challenging to aggregate and roll up results hierarchically.


Figure: Reduce distance between funder and stakeholder

The best architecture provides relatively autonomous, flexible data aggregation at the lower (asset) level and the flexibility to roll results up to the upper level. This modern approach gives the top level better visibility into the outcomes of its funding.
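
Here is a minimal sketch of that roll-up idea, with invented organization names and figures: each reporting unit keeps its own data, while the upper levels see the aggregated result on demand.

```python
# A minimal sketch of hierarchical roll-up: each reporting unit keeps its own
# results, and upper levels aggregate on demand. Names and numbers are
# illustrative only.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ReportingUnit:
    name: str
    local_result: float = 0.0                 # results this unit collected itself
    children: List["ReportingUnit"] = field(default_factory=list)

    def rolled_up_result(self) -> float:
        # A unit reports its own result plus everything its downstream units report.
        return self.local_result + sum(child.rolled_up_result() for child in self.children)


clinic_a = ReportingUnit("Clinic A", local_result=120)
clinic_b = ReportingUnit("Clinic B", local_result=85)
country_office = ReportingUnit("Country Office", children=[clinic_a, clinic_b])
funder_portfolio = ReportingUnit("Funder Portfolio", children=[country_office])

print(funder_portfolio.rolled_up_result())  # 205.0: outcomes visible at the top level
```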

5. Customization-based Systems Will Increase Cost & Risk

Many M&E software products started between 2000 and 2010. Many of them began with a J2EE or client/server architecture and were later rebranded as cloud-based software. These platforms often take far longer to implement than most realize, increasing failure risk, deployment time, and cost. Even Salesforce, the most popular cloud-based platform, can take very long to customize; we know of many organizations that went down this path and spent over a year. And even when they managed to go live, our review found that the platform gave them limited analytics and reporting.

 

Figure: Configuration vs. customization

Are you familiar with QuickBooks, the popular US-based accounting application? This is what most configuration-based applications look like: easy to use, flexible, and yet comprehensive.

Like QuickBooks, new M&E systems should allow organizations to log in, let the online application guide them on the best way to implement their impact strategy, metrics, and surveys, and deploy in a short time. This approach puts program management or evaluation staff in the driving seat, allowing them to reconfigure the system over time as metrics or reporting requirements change.
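
To illustrate the configuration-over-customization idea, here is a rough sketch of what a configuration-driven program setup could look like. The keys and values are invented for the example and do not represent Impact Cloud's actual schema.

```python
# A minimal sketch of configuration over customization: the impact strategy is
# declared as settings the application interprets, not written as custom code.
# Keys and values are hypothetical, not any product's actual schema.
program_config = {
    "theory_of_change": "Improved vision leads to higher household income",
    "indicators": [
        {"name": "patients_screened", "unit": "people", "frequency": "monthly"},
        {"name": "surgeries_completed", "unit": "people", "frequency": "monthly"},
        {"name": "income_change", "unit": "percent", "frequency": "annual"},
    ],
    "surveys": {
        "household": {"waves": ["baseline", "midline", "endline"]},
    },
}

# Changing metrics or reporting requirements later means editing this
# configuration, not commissioning new development work.
for indicator in program_config["indicators"]:
    print(f'{indicator["name"]}: reported {indicator["frequency"]}')
```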

 

I hope you enjoyed this thought-provoking article.

What do you think about the article?

  • The premise of the article
  • Facts
  • Content
  • Readability

We appreciate any feedback so that we can deliver higher-quality, thought-provoking articles. Our goal is to build a robust ecosystem of impact practitioners so that together we can solve some of the most pressing issues of our time. Please submit your comments below!


Topics: monitoring & evaluation

Written by Unmesh Sheth

Unmesh is the founder of SoPact. SoPact is a personal vision that grew from 30 years of experience in technology, management, and the social sector.