Impact evaluation provides a deeper understanding of long-term outcomes based on carefully designed scoring, outcome tracking, or stakeholder feedback.
As the world looks toward economic recovery, what are the best ways to use impact evaluation to improve underserved populations’ economic mobility? Businesses, investors, and new frontiers of philanthropy are looking to develop communities’ capacity to eliminate disparities. What role does impact evaluation play in this process? How can an impact evaluation approach to building civic infrastructure help deliver better results? This actionable impact evaluation design helps project planners build program capacity by reviewing different evaluative criteria. Learn how impact evaluations can drive better decisions, and where impact evaluation fails.
IMPACT EVALUATION GOALS
Impact evaluation looks different for each social impact player, reflecting the unique impact goals that follow from its role. It is easiest to start with the role, impact maturity, and impact goal to build that understanding.
- 01 Asset Manager
- 02 International Development Organizations
- 03 Capacity Building Organizations in Developed Countries
Improve Portfolio Level Results Aggregation and Individual Reporting
Often asset managers or funders are interested in improving efficiency in collecting results from their enterprise, assets, partners, or grantees. These are high-level, aggregate metrics collected from the portfolio companies, grantees, or partners on a regular interval based on agreed-upon metrics. These metrics are defined in the Theory of Change and collected from investments (investee) or grantees. These metrics, summaries, or results data aggregate each organization's results (i.e., from the individual stakeholder data). Some examples of this data include environmental metrics, financial performance, and more.
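As a minimal sketch of the portfolio-level aggregation described above, the agreed-upon metrics from each organization's report can simply be summed. The organization names and metric names below are made up for illustration:

```python
# Hypothetical portfolio reports with agreed-upon metrics (illustrative data).
portfolio_reports = [
    {"grantee": "Enterprise A", "jobs_created": 120, "co2_tonnes_avoided": 45.0},
    {"grantee": "Enterprise B", "jobs_created": 80,  "co2_tonnes_avoided": 12.5},
    {"grantee": "Enterprise C", "jobs_created": 200, "co2_tonnes_avoided": 0.0},
]

def aggregate_portfolio(reports, metrics):
    """Sum each agreed-upon metric across all portfolio organizations."""
    return {m: sum(r.get(m, 0) for r in reports) for m in metrics}

totals = aggregate_portfolio(portfolio_reports, ["jobs_created", "co2_tonnes_avoided"])
print(totals)  # {'jobs_created': 400, 'co2_tonnes_avoided': 57.5}
```

In practice the hard part is not the arithmetic but agreeing on metric definitions and reporting cycles, as the next paragraphs explain.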
While most funders use a survey-based approach, collecting results is difficult because aligning the impact framework is difficult. It is often hard to persuade portfolio organizations to provide useful and accurate reporting, as many enterprises do not have well-defined impact maturity (i.e., they lack a robust theory of change, data collection, outcome measurement, and reporting practices).
- Aligning the impact framework between funders and partners through a mutual theory of change and shared impact metrics can be challenging.
- Must agree on impact metrics, reporting cycle, method of reporting, and consistent meaning of metrics
- Reduce report collection burden from the portfolio of partners
- Manage historical results in a single database
- Improve aggregated reporting results
- Often funders cherry-pick metrics, reducing any long-term value of aggregate reporting.
- Even with the best aggregation, reporting should NOT be equated to impact as this doesn't provide any evidence of impact from the enterprise.
- Investor-centric reporting alone is not sufficient. Impact-intentioned investors should instead emphasize portfolio impact capacity, as described in the next section.
- Third-party impact verification is key, but funders must assist in building capacity.
So what is impact management capacity building?
Enterprises that focus on better impact management (a robust theory of change, impact metrics, data collection practices, and reporting) are likely to improve community outcomes in the long run. Funders can gain efficiency and effectiveness through better impact management at the portfolio level. Funders often limit the size of their portfolios and focus on improving capacity through a more hands-on approach, working with each enterprise to build a complete impact measurement lifecycle.
- Improve early impact management
- Working with more robust enterprises makes it easier to raise impact capital.
- Often most enterprises collect sales and services data but limited focus on impact data.
- Collecting impact data can be challenging without innovative approaches.
- Without the support of a mature impact practitioner, enterprises may struggle to identify outcomes.
- Asset managers must focus on building the enterprise's (asset's) impact maturity. Unless enterprises reach a higher level of impact maturity, it is difficult to improve a program's stakeholder impact or the impact of investments.
- They can build impact management capacity step by step by identifying the right challenges. As a bonus, once enterprises start reporting data, results are automatically aggregated at the portfolio level with Impact Cloud™.
IMPACT GOALS TO IMPACT EVALUATION STRATEGY
An organization must identify authentic impact goals, as they are the foundation of an appropriate impact evaluation strategy.
ENTERPRISE IMPACT EVALUATION
- Improve Donor Reporting
- Outcome Reporting
- Effective product or services
- Stakeholder Engagement
- Identify and mitigate the negative impact.
- Raise Impact Capital
Individual stakeholder data is collected directly from the end beneficiaries themselves. After an organization provides a service or product to these beneficiaries, stakeholder data measures how they benefited from these actions. Sources of this data range from employees speaking about the workplace, to volunteers, to supply chains with vendor policies.
- Activity or Output Data: While activity and output data can be collected in many external systems, including Impact Cloud, all data can be brought into Impact Cloud or an equivalent data warehouse for consistent reporting.
- While many organizations fall into this category, activity- or output-based reporting gives only a limited understanding of impact. Well-intentioned funders have a greater opportunity to do good by building impact management capacity in their investments.
Mission-driven funding must identify program goals and align them with impact goals. Depending on your program and impact goals, many outcome tracking approaches are possible. We have documented some in the subsequent sections.
The outcome-based approach requires a well-defined impact evaluation plan. However, emphasizing this approach can be the best way to:
- Develop stakeholder insight
- Improve Stakeholder engagement
- Monitoring program progress
- Improve long term outcomes
Every enterprise is a social enterprise. Let's say you are launching a fitness gym in a low-income community. What are the financial and social outcomes?
How can you -
- Assess market opportunities
- Design effective product or services
A well-designed stakeholder survey aligned to the five dimensions of impact from the Impact Management Project can give you insight before you assess market opportunities and allow you to design the right products or services.
Learning from stakeholders provides key insight into customer satisfaction, product satisfaction, and other key sentiments. Businesses have successfully used tools such as the "Net Promoter Score" for years to improve these. For mission-driven work, a simple Net Promoter Score is difficult to apply. However, faster feedback and shorter surveys can give a better understanding of stakeholder engagement.
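For reference, the standard Net Promoter Score calculation is simple: the percentage of promoters (ratings 9-10) minus the percentage of detractors (ratings 0-6). A minimal sketch with made-up ratings:

```python
# Standard Net Promoter Score: % promoters (9-10) minus % detractors (0-6).
def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

ratings = [10, 9, 8, 7, 6, 9, 10, 3, 8, 9]  # illustrative survey responses
print(net_promoter_score(ratings))  # 30
```

For stakeholder feedback in mission-driven work, the same idea (a short, repeatable score) can be applied to more context-specific questions than "would you recommend us?".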
Impact evaluation should not only be about demonstrating positive impact. A well-designed process can identify the holistic impact, including negative impact. More importantly, this can help an enterprise understand process improvements, set the targets for improvement, and measure progress over time.
Raising impact capital does not have to be all or nothing. Are you an entrepreneur looking to raise impact capital? You need
- Well designed impact strategy
- A plan of action to demonstrate impact management improvement on an ongoing basis
This is useful when you want to raise impact capital with expert impact advisory from an experienced team. Consider it when an organization does not have an impact management team and designers on board.
IMPACT DATA COLLECTION
Collecting impact data can be quite challenging. You could be collecting data in external systems or through stakeholder surveys.
External systems often include:
- Excel/Google Spreadsheets
- An external system such as Salesforce
- Case Management System
Often, these data are activity or output data. Understanding social impact requires a better impact evaluation strategy. Depending on program goals and stakeholder accessibility, you will develop an appropriate evaluation strategy for your impact program. The subsequent sections provide different types of impact evaluation strategies.
Data collection can be the single most challenging, complex, and costly step, which often creates resistance from management. An experienced impact management team can help design smart data collection around different goals, such as:
- Time to collect data
- Accuracy of data
- Frequency of data
- Social and environmental goals
Organizations must develop better and faster remote data collection processes. Impact management platforms like Impact Cloud provide smart data collection and management approaches.
SIMPLIFY REMOTE DATA COLLECTION
COVID-19 has made data collection more difficult than ever, and collecting household data has been especially challenging. Innovative impact measurement platforms like Impact Cloud™ can improve data collection efficiency by making it easier to collect and aggregate data from any source.
The following options are in the order of ease of data collection and cost to collect data.
- Does the stakeholder have an email address? This is the most preferred approach.
- Does the stakeholder have a phone number?
Text/SMS may be the preferred channel in a developed country, whereas sharing a mobile-friendly survey on WhatsApp or Facebook can reduce the cost of data collection elsewhere.
Link to a mobile-friendly survey
- Is the beneficiary not easily accessible? This is often the case in developing countries where it is difficult to reach stakeholders; a field manager can call a local village elder who has a phone and conduct a phone interview.
- Remote phone survey: Offline data tools like SurveyCTO allow phone responses to be transcribed to digital form automatically or through call centers.
- Door-to-door survey: No other choice? As a last resort, collect data door to door using offline mobile data collection, for example when selling products or services to stakeholders in person.
What is the future of the Remote Data Collection in Monitoring, Evaluation, and Learning? How will Social Impact Measurement focus on real-time impact learning? Learn how Post-COVID will shift with new techniques.
A Monitoring and Evaluation Plan combines data collection and analysis to assess to what extent a program or intervention has, or has not, met its objectives.
PROGRAM AND IMPACT DATA AGGREGATION
As an organization grows, its programmatic data end up sitting on islands of data sources. Data collected from survey tools can become spreadsheet hell! As a result, reporting and learning from precious data become worthless over time. A data warehouse is a Swiss Army knife of a solution: it can collect data from any source, roll up data whenever a field manager collects it or an event completes, and demonstrate results in a real-time dashboard.
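As a toy stand-in for that roll-up, records from scattered sources (a spreadsheet export, a survey tool export) can be merged into one keyed store and aggregated by program and metric. The program and metric names below are illustrative:

```python
# Sketch: merging records from different sources into one aggregate view.
from collections import defaultdict

spreadsheet_rows = [
    {"program": "Job Training", "metric": "enrolled", "value": 40},
    {"program": "Job Training", "metric": "enrolled", "value": 25},
]
survey_tool_rows = [
    {"program": "Job Training", "metric": "graduated", "value": 30},
    {"program": "Microloans", "metric": "loans_issued", "value": 55},
]

# Roll up every record, whatever its source, by (program, metric).
warehouse = defaultdict(int)
for row in spreadsheet_rows + survey_tool_rows:
    warehouse[(row["program"], row["metric"])] += row["value"]

print(dict(warehouse))
```

A real data warehouse adds scheduling, deduplication, and dashboards on top, but the underlying idea is this kind of source-agnostic roll-up.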
While we believe every organization is different, following the most common requirements can often help scale impact data management.
Data Management Requirements
- Financial / Loan data
- Beneficiary demographics
- Volunteer data
- Beneficiary surveys (Longitudinal Data Analysis)
- Beneficiary Feedbacks (short and specific)
- Operational data examples: Water, Waste, Renewable, Energy, etc
- Program data examples: Training, Enrollment, Job creation
- Other (please specify)
Offline Tools Examples
- Other (please specify)
- We don't use any offline tools yet, but we need one.
- Online Tools
- Survey Monkey
- Google forms or XLS
- Other (please specify)
- We don't use any online tools yet, but we need one.
What would be the best way to reach out to your beneficiaries for data collection?
- Email (Email addresses needed)
- SMS (Mobile phone numbers needed)
- Social media (no email or mobile phone number required)
- Link to a mobile-friendly survey
- Door to door
- Telephonic interview
- You may consider working with a vendor to design an appropriate integration.
- A smart import process into the data warehouse will work.
Data Scheduling, Tracking, and Reminders
- Program and stakeholder survey
- Survey template (introduction to survey to stakeholder)
- Date, Time Tracking
- Tracking completion
- Reminder to stakeholder
- Measure progress over the period of time
- Weekly, Monthly, Quarterly, Annually
- Compare Baseline, Midline, and Exit
- Compare results versus targets.
- Compare results by location: countries, states, regions, branches.
- Compare results by program, project, or investment.
- Aggregate results across multiple locations
- Aggregate results across multiple programs, projects, or investments
- Assign scores based on the beneficiary responses on the survey, for example, on a Likert scale.
- Perform additional calculations with the raw data; for example, survey date minus date of birth to get the beneficiary's age.
- Perform calculations between metrics; for example, total students enrolled minus total students graduated to get the number of students who dropped out or failed.
- Analyze results by cohort
- Analyze results by individual stakeholders
Data Governance Requirements
Many organizations with hierarchical or hub-and-spoke reporting must consider local and global governance of data, such as:
- Unique program-specific requirements to meet different funders
- Privacy of data between different teams
- Aggregation of reporting at a higher level through auto-roll up
- Permission for third party verification or auditing
Evaluating programs such as poverty alleviation, hunger reduction, and job creation can be complicated. Many programs funded by the World Bank or related agencies may require rigorous research design. These programs usually involve an ecosystem of players and multiple entrepreneurs or nonprofits working in their communities.
In previous videos, we highlighted the importance of defining the proper outcomes as part of your Theory of Change or your Five Dimensions of Impact. If you work directly with the target communities, you design a survey to collect data and track progress.
But how do you know if your survey is collecting data relevant to your outcomes? How can you tell if your intervention has positive, negative, or neutral effects?
- Better survey design
- Applying learnings to other geographies
- Involving observation to collect data.
- Scoring survey responses
- Correlating results to find causality
- Outcome stars
- Investor's contribution - Collective Impact
Designing the right survey
As mentioned before, your data will help you track positive and negative outcomes only if your survey is designed correctly.
Many impact consultants can help you design a good survey appropriate to your context and relevant to your outcomes. For example, if your organization has an intervention to reduce poverty through job creation, your survey should collect enough information about how the new job has improved the household’s conditions. It’s not enough to learn that household income has improved; you need to understand what it is being used for: education? Healthcare? Food?
In any case, we recommend starting by collecting data in one country only. This will help you analyze what results you are getting, which questions or answer options might not be relevant to your target population, and so on. Then, if you apply those learnings to the rest of your country or region, you are more likely to have relevant outcome data.
Applying observation to collect data. Observe more and ask less!
Scoring survey responses.
Once you start collecting relevant data, you will observe how the outcome results change from one collection period to the other. For example, you will see that last year, the household sent 2 kids to school, but this year they sent 4.
To link the results back to the outcome metrics, you might need to apply a scoring mechanism to the survey answers. For example, if the respondents are sending 2 kids to school, they get a score of 1 point, but if they send 4 kids to school, they get a score of 2 points.
This allows you to:
- Have an easier way to track whether trends are improving over time
- Aggregate the results across multiple communities or countries
- Compare similar communities
- Report your results in an easier way for stakeholders who are not familiar with your full process.
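The scoring mechanism described above can be sketched in a few lines. The point bands below are illustrative (following the article's example of 2 children in school scoring 1 point and 4 children scoring 2 points), and the household counts are made up:

```python
# Sketch of mapping a survey answer to a point score so results can be
# tracked and aggregated. Bands follow the article's example and are illustrative.
def score_children_in_school(count):
    if count >= 4:
        return 2
    if count >= 2:
        return 1
    return 0

# Aggregate household scores across a community to compare collection periods.
last_year = [score_children_in_school(n) for n in [2, 2, 1, 3]]
this_year = [score_children_in_school(n) for n in [4, 3, 2, 4]]
print(sum(last_year), sum(this_year))  # 3 6
```

Because each answer collapses to a number, scores from different communities or countries can be summed and compared directly.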
Even if you decide to aggregate your data for reporting purposes, make sure you are also analyzing the survey results using correlation. For example, if one of your hypotheses is that increasing income will also increase the consumption of more nutritious food, analyze those two variables together. Are they really correlated? If not, it’s time to go back to your communities and understand what’s preventing them from consuming nutritious food. Maybe there’s a lack of access to such food. How can your organization resolve this issue? One of the ways is to work with a partner to make nutritious food available in your communities.
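A correlation check like the one described can be sketched with a Pearson correlation on paired household observations. The income and meal figures below are made-up toy data:

```python
# Sketch: checking whether two outcome variables actually move together,
# using a Pearson correlation on paired observations (toy data).
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

monthly_income = [200, 250, 300, 400, 500]
nutritious_meals_per_week = [3, 4, 4, 6, 7]
r = pearson(monthly_income, nutritious_meals_per_week)
print(round(r, 2))  # close to 1.0: strongly correlated in this toy data
```

A value near zero would be the signal to go back to the community and investigate, as the paragraph above suggests. Note that correlation alone does not establish causality; that is what the counterfactual designs later in this article are for.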
Outcome Stars are evidence-based tools designed to support positive change and greater wellbeing, with scales presented in a star shape and measured on a clearly defined ‘Journey of Change.’ The Outcome Star is completed as part of conversations between individuals and support practitioners such as key workers. Many Outcome Stars are defined for areas such as Adult Care, Community, Criminal Justice, and Domestic Violence. Each star consists of a series of questions that can be ranked from one (low) to five (high). Based on the responses, individuals or cohorts can be tracked over time to see overall progress and to design appropriate interventions.
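Tracking progress on such star scales reduces to comparing scores over time. The scale names and 1-5 readings below are illustrative, not taken from any real Outcome Star:

```python
# Sketch of tracking star-scale readings over time (illustrative scales/scores).
baseline = {"Accommodation": 2, "Relationships": 3, "Wellbeing": 1}
midline  = {"Accommodation": 3, "Relationships": 3, "Wellbeing": 3}

# Positive values indicate progress on that scale since baseline.
progress = {scale: midline[scale] - baseline[scale] for scale in baseline}
print(progress)  # {'Accommodation': 1, 'Relationships': 0, 'Wellbeing': 2}
```

The same per-scale deltas can be averaged across a cohort to see where an intervention is and is not moving the needle.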
Traditionally, investors and accelerators have limited their involvement with the portfolio companies to simply requesting results data. But increasingly, we see a shift to a deeper level of involvement, from providing the monetary or technology resources to manage the companies’ impact to providing advisory in the definition of outcomes, metrics, and surveys.
Here at SoPact, we understand that organizations like yours face challenges with impact measurement and management daily. So we’ve developed a platform that allows you to collect and manage any stakeholder data and link it back to your outcome metrics to demonstrate progress over time.
Watch video: Outcome Tracking
Longitudinal Data Analysis
Longitudinal data analysis is an important impact evaluation tool designed to demonstrate outcome progress over a period of time. A longitudinal analysis refers to an investigation where stakeholder outcomes and possibly intervention or exposures are collected at multiple follow-up times. A longitudinal study generally yields multiple or “repeated” measurements on each subject.
When working on mental health, job creation, or youth empowerment, it is necessary to see the progress an individual or community is making over time. The Outcome Star is one such system, designed around well-accepted evaluation surveys. An organization can measure a stakeholder's progress over a period of time.
Managing and analyzing longitudinal surveys can quickly become complex as the number of programs and stakeholders grows. Smart data collection is required to reduce the data collection burden and analyze results in real time for better stakeholder listening. For example:
- Do not repeat demographics or data that do not change.
- Use the same survey with different phases to see continuous improvement.
- Focus on demographics data in the baseline
- Uniquely identify stakeholders
- Define control groups
- Reduce sample biases
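The core of longitudinal analysis, repeated measurements keyed to a uniquely identified stakeholder, can be sketched as follows. The IDs, phases, and scores are toy data:

```python
# Sketch: organizing repeated measurements per uniquely identified stakeholder
# so baseline, midline, and exit scores can be compared (toy data).
from collections import defaultdict

records = [
    {"stakeholder_id": "S01", "phase": "baseline", "confidence_score": 2},
    {"stakeholder_id": "S01", "phase": "midline",  "confidence_score": 3},
    {"stakeholder_id": "S01", "phase": "exit",     "confidence_score": 4},
    {"stakeholder_id": "S02", "phase": "baseline", "confidence_score": 1},
    {"stakeholder_id": "S02", "phase": "exit",     "confidence_score": 3},
]

timeline = defaultdict(dict)
for r in records:
    timeline[r["stakeholder_id"]][r["phase"]] = r["confidence_score"]

# Change from baseline to exit, for stakeholders with both measurements.
change = {sid: phases["exit"] - phases["baseline"]
          for sid, phases in timeline.items()
          if "baseline" in phases and "exit" in phases}
print(change)  # {'S01': 2, 'S02': 2}
```

This is why the unique stakeholder identifier in the list above matters: without it, repeated measurements cannot be linked back to the same person.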
Impact Evaluation Plan
Impact evaluation is the process where we study the results of social interventions. As a result, we can determine the overall initiative's value and use the findings as learnings for future endeavors. This is critical for investors to understand the rationale behind continuing the program and for impact makers to see if their efforts are bringing positive change. This article underlines the steps you must take to define an effective impact evaluation plan.
1. Establish Program Theory of the Social Impact Initiative
Program theory is the step where you document the assumptions and logical arguments that define your program's rationale. If you would like to take a more sophisticated approach, we recommend going a level higher and doing as below:
The theory of change is the program theory, where you describe the assumptions and logical arguments in favor of your initiative and lay down all the possible scenarios that can result from your impact actions. This gives a clearer picture and helps you understand the evaluation reports from a broader perspective.
Here is all you need to know to define your program's theory of change.
Counterfactual analysis is essentially a “with versus without” analysis. We study the program's impact by comparing results against a control group that did not receive the intervention. You need to set up this control group early so the evaluation can run in parallel and the results are comparable. In some cases, this can be avoided where no other factor could bring about the observed change in outcomes (e.g., reductions in time spent fetching water after installing water pumps).
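In its simplest form, the "with versus without" comparison is the mean outcome in the treatment group minus the mean outcome in the control group. The household income figures below are made up:

```python
# Sketch of a "with versus without" comparison (toy numbers).
def mean(xs):
    return sum(xs) / len(xs)

treated_incomes = [320, 350, 400, 380]   # households that received the program
control_incomes = [300, 310, 290, 320]   # comparable households that did not

estimated_effect = mean(treated_incomes) - mean(control_incomes)
print(estimated_effect)  # 57.5
```

This naive difference is only credible when the two groups are genuinely comparable, which is exactly the selection bias problem the next section addresses.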
2. Addressing Selection Bias in Impact Evaluation
The presence of selection bias can skew the results of the evaluation. Hence, we must take whatever steps possible to eliminate all bias. Below are the preconditions to identify the extent of selection bias and the remedial action that can be taken to eradicate it:
- If the evaluation metrics are determined before the event, we need to see whether randomization is possible. If the treatment group is chosen randomly, another set of random counterfactuals is a valid test. It is possible to target a subgroup of the random subjects and remain unbiased. For example, if an initiative was designed for below-minimum-wage workers, then the counterfactual group can also be drawn from the same subset to maintain relevance.
- If the above is false, we see whether the selection determinants are observed. Several regression techniques can help eliminate bias in this case.
- In the case of unobserved selection determinants, we need them to be time-invariant so that panel data can be used to remove bias. For this case, the baseline (or some means of substituting baseline) is critical.
- If panel data are not possible and the selection determinants remain unobserved, we need to identify ways of observing the determinants.
- If that fails, we can go for the pipeline approach, provided some untreated beneficiaries exist.
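The panel-data option above is often implemented as a difference-in-differences comparison: if the selection determinants are time-invariant, subtracting each group's own baseline removes them. A minimal sketch with made-up panel data:

```python
# Sketch: difference-in-differences on toy panel data. Time-invariant
# differences between groups drop out when each group's baseline is subtracted.
def mean(xs):
    return sum(xs) / len(xs)

treated_before, treated_after = [100, 110, 90], [130, 145, 120]
control_before, control_after = [105, 95, 100], [115, 105, 110]

treated_change = mean(treated_after) - mean(treated_before)  # ~31.7
control_change = mean(control_after) - mean(control_before)  # 10.0
did_estimate = treated_change - control_change
print(round(did_estimate, 1))  # 21.7
```

This is why the text stresses that a baseline (or a substitute for one) is critical: without the "before" measurements, the subtraction is impossible.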
If none of the above is possible, we cannot fully address the problem of selection bias and are left to rely on program theory and triangulation to build an argument. Setting up a Theory of Change (TOC) model makes such plausible associations easier to establish.
3. Designing the Stakeholder Survey
A baseline survey is performed when the project is initiated, i.e., at the beginning, before implementation. It helps prioritize between the different objectives of an initiative and serves as a benchmark for identifying success or failure.
- It should align with the program theory and data collected across the results chain, not just on outcomes.
- The counterfactual group should be surveyed with the same questionnaire. Intervention-specific questions should be replaced with similar questions of a more general nature that can help test for any influence of the initiative.
- Allocate enough time to double-check the instruments before initiating the survey. The data should lend itself to a relational database to ease analysis and data entry. This process can easily take 4-6 months.
- Include PII in the survey so you can refer to the same respondent for later rounds.
- Refrain from changes in survey design mid-way through the process, as this can result in inconsistent results.
If you are reading an article on the impact evaluation procedure, chances are you are at the end of the program phase. If the baseline survey was missed initially, you cannot go back to collect that data. However, here are a few things you can do:
- Find another dataset to serve as a baseline; this can be secondary data collected by a different agency on similar parameters.
- If no such study is available, you can use publicly available national survey data and create a counterfactual group using propensity score matching. If you are evaluating a federal or sector-wide intervention, this is an entirely reasonable approach.
- A recall survey can be performed by asking respondents of an interest group to recall the variables in focus. It is practical when we expect a significant life change resulting from the initiative. For example, farmers can be expected to remember what it was like in the absence of irrigation five years back. If you choose to do this, note the following:
- People often consider past events more recent than they were. You can use timelines and historical benchmarks to avoid this psychological conundrum.
- Don’t expect your respondents to remember exact figures like dates, times, prices, etc. Instead, give range values to keep them comfortable.
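The propensity score matching option mentioned above can be sketched as pairing each treated unit with the control unit whose score is closest. The IDs, scores, and outcomes below are made up; in practice the scores come from a model (commonly logistic regression) fitted on observed characteristics:

```python
# Sketch of nearest-neighbor propensity score matching (illustrative data;
# real propensity scores would come from a fitted model, not be hand-picked).
treated = [{"id": "T1", "score": 0.62, "outcome": 40},
           {"id": "T2", "score": 0.35, "outcome": 55}]
controls = [{"id": "C1", "score": 0.60, "outcome": 30},
            {"id": "C2", "score": 0.30, "outcome": 50},
            {"id": "C3", "score": 0.90, "outcome": 20}]

def match(treated, controls):
    """Pair each treated unit with the control nearest in propensity score."""
    pairs = []
    for t in treated:
        nearest = min(controls, key=lambda c: abs(c["score"] - t["score"]))
        pairs.append((t, nearest))
    return pairs

pairs = match(treated, controls)
effect = sum(t["outcome"] - c["outcome"] for t, c in pairs) / len(pairs)
print(effect)  # 7.5
```

Production matching adds refinements (calipers, matching without replacement, balance checks), but the pairing logic is the core idea.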
- You can go back to your Theory of Change model and analyze if there are events that could have resulted in the outcome apart from your initiative and if the cause-effect relations established in the beginning were, in fact, actual.
Read More: How to use Impact Reporting for Storytelling Impact Learnings.
4. Triangulation

Triangulation means using different types of samples and methods of data collection, so that the validity of results can be checked by comparison. This step becomes necessary when we cannot eliminate selection bias or establish an accurate baseline. Triangulation builds confidence in findings and fills gaps in statistical studies. Be sure to allocate part of your budget and time to this step.
- The best software to use for Monitoring and Evaluation and Impact Learning
- Quasi-experimental approach
- Control groups
5. Qualitative vs. Quantitative analysis
The purpose of the evaluation is not just to measure results but to evaluate them. In this step, we weigh the qualitative data alongside the numbers gathered to study the impact of actions. Feedback from field data collectors comes in handy here: they can give perspective on the situation, the state of people's lives, and the authenticity of the experiment. The second type of qualitative data is the set of inferences and cause-effect relationships established at the beginning of the investigation. Do they still align with the numbers? If not, where did the analysis go wrong?
These are some steps you can take to conduct an impact evaluation effectively. Check out our complete guide to Actionable Impact Management to further understand the process of social impact monitoring, evaluation, and assessment.