Why is impact evaluation important?

Impacts are usually understood to occur later than, and as a result of, intermediate outcomes. For example, achieving the intermediate outcomes of improved access to land and increased levels of participation in community decision-making might occur before, and contribute to, the intended final impact of improved health and well-being for women. The distinction between outcomes and impacts can be relative, and depends on the stated objectives of an intervention.

It should also be noted that some impacts may be emergent and thus cannot be predicted. Evaluation, by definition, answers evaluative questions, that is, questions about quality and value. This is what makes evaluation so much more useful and relevant than the mere measurement of indicators or summaries of observations and stories. One way of answering such questions is to use a specific rubric that defines different levels of performance (or standards) for each evaluative criterion, deciding what evidence will be gathered and how it will be synthesized to reach defensible conclusions about the worth of the intervention.
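As a rough sketch of what such a rubric can look like in practice, the Python fragment below defines performance levels for two illustrative criteria and turns a rating into an explicitly evaluative statement. The criteria, levels and wording are invented for illustration only, not taken from any particular evaluation.

```python
# Minimal sketch of an evaluative rubric: hypothetical criteria and
# performance-level descriptors, used to turn a rating into an explicit judgement.

RUBRIC = {
    "effectiveness": {
        "excellent": "Intended outcomes achieved for nearly all target groups",
        "good": "Most intended outcomes achieved; minor gaps remain",
        "adequate": "Some outcomes achieved but important gaps remain",
        "poor": "Little or no evidence that intended outcomes were achieved",
    },
    "equity": {
        "excellent": "Benefits reach the most disadvantaged groups",
        "good": "Benefits broadly shared, with some gaps for disadvantaged groups",
        "adequate": "Benefits concentrated among better-off participants",
        "poor": "Disadvantaged groups excluded or made worse off",
    },
}

def judge(criterion: str, level: str) -> str:
    """Return an explicitly evaluative statement for a rated criterion."""
    descriptor = RUBRIC[criterion][level]
    return f"{criterion.capitalize()} was rated '{level}': {descriptor}."

# The ratings themselves would come from synthesizing the evidence gathered
# for each criterion; the values here are placeholders.
print(judge("effectiveness", "good"))
print(judge("equity", "adequate"))
```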

At the very least, it should be clear what trade-offs would be appropriate in balancing multiple impacts or distributional effects. Since development interventions often have multiple impacts, which are distributed unevenly, this is an essential element of an impact evaluation. For example, should an economic development programme be considered a success if it increases household income but also produces hazardous environmental impacts?

Should it be considered a success if the average household income increases but the income of the poorest households is reduced? Quality refers to how good something is; value refers to how good it is in terms of the specific situation, in particular taking into account the resources used to produce it and the needs it was supposed to address.

Evaluative reasoning is required to synthesize these elements into defensible judgements. It is a requirement of all evaluations, irrespective of the methods or evaluation approach used. An evaluation should have a limited set of high-level key evaluation questions (KEQs) about performance overall. Each of these KEQs should be further unpacked by asking more detailed questions about performance on specific dimensions of merit, and sometimes even lower-level questions.

Evaluative reasoning is the process of synthesizing the answers to lower- and mid-level questions into defensible judgements that directly answer the high-level questions. Evaluations produce stronger and more useful findings if they not only investigate the links between activities and impacts but also investigate links along the causal chain between activities, outputs, intermediate outcomes and impacts.

A theory of change should be used in some form in every impact evaluation. It can be used with any research design that aims to infer causality, it can draw on a range of qualitative and quantitative data, and it can support triangulation of the data arising from a mixed methods impact evaluation. When planning an impact evaluation and developing the terms of reference, any existing theory of change for the programme or policy should be reviewed for appropriateness, comprehensiveness and accuracy, and revised as necessary.

It should continue to be revised over the course of the evaluation should either the intervention itself or the understanding of how it works — or is intended to work — change.

Some interventions cannot be fully planned in advance, however — for example, programmes in settings where implementation has to respond to emerging barriers and opportunities, such as support for the development of legislation in a volatile political environment. In such cases, different strategies will be needed to develop and use a theory of change for impact evaluation (Funnell and Rogers). For some interventions, it may be possible to document the emerging theory of change as different strategies are trialled and adapted or replaced.

In other cases, there may be only a high-level theory of how change will come about. Elsewhere, an intervention's fundamental basis may revolve around adaptive learning, in which case the theory of change should focus on articulating how the various actors gather and use information together to make ongoing improvements and adaptations.

The evaluation may confirm the theory of change or it may suggest refinements based on the analysis of evidence. An impact evaluation can check for success along the causal chain and, if necessary, examine alternative causal paths.

For example, failure to achieve intermediate results might indicate implementation failure; failure to achieve the final intended impacts might be due to theory failure rather than implementation failure. This has important implications for the recommendations that come out of an evaluation. In cases of implementation failure, it is reasonable to recommend actions to improve the quality of implementation; in cases of theory failure, it is necessary to rethink the whole strategy for achieving impacts.

The evaluation methodology sets out how the key evaluation questions (KEQs) will be answered. It specifies designs for causal attribution, including whether and how comparison groups will be constructed, and methods for data collection and analysis.

This definition does not require that changes are produced solely or wholly by the programme or policy under investigation (UNEG). Using a combination of these strategies can usually help to increase the strength of the conclusions that are drawn. Some individuals and organisations use a narrower definition of impact evaluation and only include evaluations containing a counterfactual of some kind.

These different definitions are important when deciding what methods or research designs will be considered credible by the intended user of the evaluation or by partners or funders. Well-chosen and well-implemented methods for data collection and analysis are essential for all types of evaluations.

Impact evaluations need to go beyond assessing the size of the effects (i.e., the average impact). The analytic framework should also set out how data analysis will address assumptions made in the programme theory of change about how the programme was thought to produce the intended results. In a true mixed methods evaluation, this includes using appropriate numerical and textual analysis methods and triangulating multiple data sources and perspectives in order to maximize the credibility of the evaluation findings.

Start the data collection planning by reviewing the extent to which existing data can be used. After reviewing currently available information, it is helpful to create an evaluation matrix (see below) showing which data collection and analysis methods will be used to answer each KEQ, and then to identify and prioritize the data gaps that need to be addressed by collecting new data. This will help to confirm that the planned data collection and collation of existing data will cover all of the KEQs, determine whether there is sufficient triangulation between different data sources, and help with the design of data collection tools (such as questionnaires, interview questions, data extraction tools for document review and observation tools) to ensure that they gather the necessary information.

Evaluation matrix: Matching data collection to key evaluation questions. The matrix lists examples of KEQs in rows against the planned data collection methods in columns, such as a programme participant survey, key informant interviews and observation of programme implementation; an illustrative sketch follows.
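A minimal sketch of such a matrix, using invented KEQs and the data collection methods named above; a simple check of this kind can flag KEQs that rely on a single data source and therefore offer little scope for triangulation.

```python
# Sketch of an evaluation matrix: each (hypothetical) KEQ is mapped to the
# data collection methods expected to contribute to answering it.
evaluation_matrix = {
    "To what extent did participants' incomes change?": [
        "programme participant survey",
    ],
    "How well was the programme implemented?": [
        "key informant interviews",
        "observation of programme implementation",
    ],
    "How satisfied were participants with the programme?": [
        "programme participant survey",
        "key informant interviews",
    ],
}

# Flag KEQs that rely on a single source, where triangulation is not possible.
for keq, methods in evaluation_matrix.items():
    if len(methods) < 2:
        print(f"Only one data source planned for: {keq!r} ({methods[0]})")
```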

There are many different methods for collecting data. A key reason for mixing methods is that it helps to overcome the weaknesses inherent in each method when used alone.

It also increases the credibility of evaluation findings when information from different data sources converges (i.e., points to the same conclusions). Good data management includes developing effective processes for consistently collecting and recording data, storing data securely, cleaning data and transferring data. The particular analytic framework and the choice of specific data analysis methods will depend on the purpose of the impact evaluation and the type of KEQs that are intrinsically linked to this.

For answering descriptive KEQs, a range of analysis options is available, which can largely be grouped into two key categories: options for quantitative data (numbers) and options for qualitative data (such as text).

For answering causal KEQs, there are essentially three broad approaches to causal attribution analysis: (1) counterfactual approaches; (2) consistency of evidence with a causal relationship; and (3) ruling out alternatives (see above). Ideally, a combination of these approaches is used to establish causality.

For answering evaluative KEQs, specific evaluative rubrics linked to the evaluative criteria employed (such as the OECD-DAC criteria) should be applied in order to synthesize the evidence and make judgements about the worth of the intervention (see above).

The evaluation report should be structured in a manner that reflects the purpose and KEQs of the evaluation. In the first instance, evidence to answer the detailed questions linked to the OECD-DAC criteria of relevance, effectiveness, efficiency, impact and sustainability, and considerations of equity, gender equality and human rights should be presented succinctly but with sufficient detail to substantiate the conclusions and recommendations.

Evidence on multiple dimensions should subsequently be synthesized to generate answers to the high-level evaluative questions. The structure of an evaluation report can do a great deal to encourage the succinct reporting of direct answers to evaluative questions, backed up by enough detail about the evaluative reasoning and methodology to allow the reader to follow the logic and clearly see the evidence base. The following recommendations will help to set clear expectations for evaluation reports that are strong on evaluative reasoning:

The executive summary must contain direct and explicitly evaluative answers to the KEQs used to guide the whole evaluation. Explicitly evaluative language must be used when presenting findings, rather than value-neutral language that merely describes findings; examples should be provided. The findings section should be structured using KEQs as subheadings, rather than by types and sources of evidence, as is frequently done.

There must be clarity and transparency about the evaluative reasoning used, with the explanations clearly understandable to both non-evaluators and readers without deep content expertise in the subject matter. These explanations should be broad and brief in the main body of the report, with more detail available in annexes.

If evaluative rubrics are relatively small in size, these should be included in the main body of the report. If they are large, a brief summary of at least one or two should be included in the main body of the report, with all rubrics included in full in an annex.

Overview briefs 1, 6 and 10 are available in English, French and Spanish and are supported by whiteboard animation videos in three languages; Brief 7 (RCTs) also includes a video. The webinars were based on the Impact Evaluation Series — a user-friendly package of 13 methodological briefs and four animated videos — and were presented by the briefs' authors.

Each page provides links not only to the eight webinars, but also to the practical questions and their answers which followed each webinar presentation.

Impact Evaluation for Development: Principles for Action - This paper discusses strategies to manage and undertake development evaluation.
Rogers, P., Introduction to Impact Evaluation.
World Bank, Impact Evaluation in Practice.

Kirsten Bording Collins is an experienced evaluation specialist providing consulting services in program evaluation, planning and project management.

She has over ten years of combined experience in the nonprofit, NGO and public sectors, working both in the U.S. and internationally. Kirsten's areas of expertise include program evaluation, planning, project management, evaluation training and capacity-building, mixed methods, qualitative analysis, and survey design.

What is Impact Evaluation?

Purpose of Impact Evaluation

Impact evaluations often serve an accountability purpose to determine if and how well a programme worked. Development programmes are fundamentally about improving outcomes: boosting incomes, increasing productivity, encouraging learning, improving health and protecting the environment, to name a few.

Understanding whether or not an intervention accomplishes its objectives — and why or why not — is crucial for accountability, informed decision-making and the efficient use of resources. Impact evaluations — rigorous studies that measure the effects of international development programmes — are at the heart of 3ie's work. Our holistic approach to designing and conducting impact evaluations provides a rich set of information for decision-makers.

In addition to establishing rigorous quantitative estimates of a programme's effect, we believe the most useful impact evaluations seek to understand why a program worked and at what cost. Our focus on high-quality impact evaluations also drives our efforts to strengthen evaluation capacity ; ensure research is transparent, ethical, and replicable ; use innovative data sources ; and promote evidence uptake and use.

What is an impact evaluation?

Impact evaluations are designed to answer the question: "What was the effect of an intervention on an outcome?" Consider a training programme for female entrepreneurs, like this one in Kenya we helped evaluate. Many factors affect business performance from month to month, from seasonal variations to technological change to global health pandemics.

So how can we separate the effect of the programme from all those other factors? Impact evaluations provide a toolkit of methods to measure the effects of that programme, and that programme alone. In the Kenya example, the research team used a randomized control trial, a method similar to the studies doctors use to test the effectiveness of new medicines. By randomly assigning some individuals to participate in a given programme while others are not, we can compare the outcomes across the two groups to see if the programme works.
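A bare-bones sketch of that comparison, using simulated data rather than anything from the Kenya study (and assuming numpy and scipy are available): the estimated impact is simply the difference in mean outcomes between the randomly assigned groups.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcome data (e.g., monthly business profit). The group sizes,
# means and the implied effect of 10 units are arbitrary assumptions.
n = 500
treated = rng.normal(loc=110, scale=30, size=n)   # randomly assigned to the programme
control = rng.normal(loc=100, scale=30, size=n)   # randomly assigned to no programme

# The impact estimate is the difference in mean outcomes between the two groups.
effect = treated.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treated, control)

print(f"Estimated effect: {effect:.1f} (p = {p_value:.3f})")
```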

Because random assignment produces two groups that are similar except for the presence of the programme, we rule out other factors that might otherwise account for the differences between the two groups, leading to a causal interpretation of impact. However, not everything can or should be randomized. Evaluations of other interventions like food aid to conflict-affected families in Mali or an environmental programme in Mexico rely on other tools: quasi-experimental methods.
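One widely used quasi-experimental method is difference-in-differences, which compares the change over time in the group receiving an intervention with the change in a comparison group. The sketch below uses invented numbers and is not drawn from the Mali or Mexico evaluations.

```python
# Difference-in-differences with illustrative (invented) group means.
# Each entry holds the mean outcome before and after the intervention period.
means = {
    "intervention": {"before": 40.0, "after": 55.0},
    "comparison":   {"before": 42.0, "after": 48.0},
}

change_intervention = means["intervention"]["after"] - means["intervention"]["before"]
change_comparison = means["comparison"]["after"] - means["comparison"]["before"]

# The impact estimate is the extra change observed in the intervention group.
did_estimate = change_intervention - change_comparison
print(f"Difference-in-differences estimate: {did_estimate:.1f}")  # 15.0 - 6.0 = 9.0
```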

With careful research designs, advanced statistical techniques, and innovative approaches to data collection, we can identify an appropriate comparison group and measure an intervention's effect.

Process evaluations and implementation research

Our experience has shown that impact evaluation findings are most informative when they stand alongside additional data developed through process evaluations, formative evaluations, and other implementation research.

Using primarily qualitative approaches, these evaluations provide the rich contextual data to colour in the backdrop around impact evaluation findings. Examples of our work combining impact and process evaluations include our research on India's National Rural Livelihoods Mission and our evidence programme on agricultural insurance.

Cost analysis

Including an analysis of a programme's costs in an impact evaluation provides another essential piece of information policymakers need. Unfortunately, too many impact evaluations omit this essential component.

Incorporating costs with rigorous estimates of impact allows for the comparison of two or more interventions with cost-effectiveness analysis, or for the comparison of the value of benefits generated by an intervention relative to its costs, using a cost-benefit analysis. Both cost-benefit and cost-effectiveness analyses provide important insights for deciding whether to invest in a programme and what approach is more cost-effective.
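As a rough illustration of the two calculations, with entirely hypothetical figures: cost-effectiveness analysis expresses cost per unit of impact achieved, while cost-benefit analysis compares the monetary value of benefits with costs.

```python
# Hypothetical figures for two alternative interventions (illustration only).
programmes = {
    "programme A": {"total_cost": 200_000.0, "children_reached": 4_000, "monetised_benefits": 260_000.0},
    "programme B": {"total_cost": 150_000.0, "children_reached": 2_500, "monetised_benefits": 210_000.0},
}

for name, p in programmes.items():
    # Cost-effectiveness: cost per unit of impact (here, per child reached).
    cost_per_child = p["total_cost"] / p["children_reached"]
    # Cost-benefit: value of monetised benefits relative to costs.
    benefit_cost_ratio = p["monetised_benefits"] / p["total_cost"]
    print(f"{name}: ${cost_per_child:.0f} per child reached, "
          f"benefit-cost ratio {benefit_cost_ratio:.2f}")
```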

Quality assurance

In addition to conducting impact evaluations, we also offer quality assurance services which ensure that impact evaluations conducted by other organizations meet the highest standards. Our approach to quality assurance verifies that an impact evaluation is designed appropriately to identify a programme's effect and data collection strategies are suitable for the research context.


