Overview
Welcome to the e-version of the USAID Evaluation Toolkit!
The Evaluation Toolkit:
- Curates the latest USAID guidance, tools, and templates for initiating, planning, managing, and learning from evaluations, primarily for USAID staff members involved in any phase of the evaluation process.
- Is a resource for USAID staff members and external contractors who participate in or conduct evaluations for USAID.
How to Use this E-Toolkit:
The Toolkit is organized according to the USAID Program Cycle and the phases of an evaluation.
Section 1: Evaluation at USAID (overview of evaluation and the policy context for evaluation at USAID)
Section 2: Evaluation Throughout the Program Cycle (when it is required or encouraged to plan, use, or report on evaluations)
Sections 3 through 5: Phases of an Individual Evaluation
- Section 3: Planning (from deciding to evaluate to procuring an evaluation)
- Section 4: Managing an Evaluation
- Section 5: Sharing, Reporting, Using, and Learning from an Evaluation
Section Organization
- A brief narrative introduces the general requirements and important considerations.
- Sub-thematic areas are listed on the left-hand side and go more in-depth into specific areas or processes.
- Core resources (1) provide further guidance on specific requirements and processes; (2) describe best practices; and (3) offer templates and other tools.
- Additional links provide easy access to USAID reference documents, reports, and webinars that cover specific evaluation issues in greater depth, as well as non-USAID resources that may be useful during the evaluation life cycle.
A few resources and additional links are available only to USAID staff. These are indicated by the designation USAID only.
Acknowledgements
This Toolkit was developed by the Bureau for Policy, Planning, and Learning Office of Learning, Evaluation, and Research (PPL/LER). Many USAID and USAID contractor staff—from the field and Washington—provided content, comments, feedback, and insights into this Evaluation Toolkit. Their contributions have been and continue to be essential to the ongoing development of this Toolkit.
1. Evaluation Policy at USAID
Evaluation at USAID is defined as the systematic collection and analysis of data and information about the characteristics and outcomes of one or more organizations, policies, programs, strategies, projects, and/or activities as a basis for judgments to understand and improve effectiveness and efficiency, timed to inform decisions about current and future programming. Evaluation is distinct from assessment, which is forward-looking, and from informal reviews of projects (ADS 201). It is also distinct from performance monitoring, which is the ongoing and systematic collection of performance indicator data and other quantitative or qualitative information to reveal whether implementation is on track and whether expected results are being achieved.
The purpose of evaluations is twofold: to ensure accountability to stakeholders and to learn to improve development outcomes. The subject of a USAID evaluation may include any level of USAID programming, from a policy to a strategy to a project, individual award, activity, intervention, or even cross-cutting programmatic priority.
USAID Automated Directives System (ADS) 201 and its associated references provide the foundation for all guidance on evaluation at USAID.
As noted in ADS 201.3.6.2, evaluations at USAID should be:
- Integrated into the Design of Strategies, Projects, and Activities
- Unbiased in Measurement and Reporting, Independent, and Objective
- Relevant and Useful
- Based on Best Methods of Appropriate Rigor
- Oriented toward Reinforcing Local Ownership and National Self-Reliance
- Transparent
- Conducted According to the Highest Ethical Standards
Additional Link
- Reference: Strengthening Evidence-Based Development: Five Years of Better Evaluation Practice at USAID
2. Evaluation Throughout the Program Cycle
The USAID Program Cycle is a common set of processes intended to achieve more effective development interventions and to maximize impacts.
Evaluations may be planned, conducted, or utilized at any stage in the Program Cycle. This section addresses the various, formal stages of the Program Cycle at which Missions or Washington OUs are required or encouraged to consider whether it would be appropriate to plan for, conduct, or learn from an evaluation.
Evaluation in Country Development Cooperation Strategies (CDCS)
A Country Development Cooperation Strategy (CDCS) articulates country-specific development hypotheses and sets forth the goal, objectives, results, indicators, and resource levels that guide Project Design and Implementation, Evaluation, and Performance Management, and inform annual planning and reporting processes.
Evaluations, along with research and other analyses, should be used to inform various sections in the CDCS, including the Development Context, Challenges and Opportunities, the Development Hypothesis, and the Results Framework.
According to ADS 201.3.2.11, the Monitoring, Evaluation, and Learning section of the CDCS includes a high-level description of the Mission's overall priorities and approach to evaluation.
In addition, Missions must incorporate USAID’s Gender Equality and Female Empowerment Policy (ADS 205) into the CDCS.
Evaluation in Performance Management Plans (PMPs)
A Performance Management Plan (PMP) is a Mission-wide tool for planning and managing the process of monitoring strategic progress, project performance, programmatic assumptions, and operational context; evaluating performance and impact; and learning and adapting from evidence. Each Mission must prepare a Mission-wide PMP. Missions that do not have a CDCS are still required to have a PMP that covers any projects they fund (ADS 201.3.2.15).
The Mission-wide PMP must be prepared within three months of CDCS approval. Missions must keep the PMP up to date to reflect changes in the CDCS or projects. Missions must update the PMP with new project indicators, evaluations, and learning efforts as each new Project Appraisal Document (PAD) is approved, and should update information in the evaluation plan from Project and Activity MEL plans upon their approval.
Per ADS 201.3.2.16(A.II), Missions must include an evaluation plan in their Mission PMP to identify, summarize, and track all evaluations as they are planned across the Mission and over the entire CDCS timeframe by Development Objective (DO). An evaluation plan must include the following information for each planned evaluation, as it becomes available:
- The strategy, project, or activity to be evaluated;
- Evaluation purpose and expected use;
- Evaluation type (performance or impact);
- Possible evaluation questions;
- Whether it is external or internal;
- Whether it fulfills an evaluation requirement or is a non-required evaluation;
- Estimated budget;
- Planned start date; and
- Estimated completion date.
Additional Link
- Reference: Model Mission Order on Performance Monitoring. USAID only.
Evaluation in Project Design
ADS 201 defines a project as “a group of activities designed and managed in a coordinated way to advance result(s) set forth in a CDCS (or other strategic framework) and ultimately foster lasting gains along the Journey to Self-Reliance in a country or region. Through a project approach, Missions and other OUs can create synergies among complementary activities that generate higher-level results than can be achieved through the sum of their individual performances” (ADS Chapter 201 Definitions).
Project Design is “a process undertaken by a designated Project Design Team to define a project’s boundaries, a high-level theory of change, and an adaptable plan for implementation, which results in a Project Development Document.” (ADS Chapter 201 Definitions)
A project approach is a voluntary organizational framework that—when used as intended—can help Missions design and manage complementary activities in a coordinated way to generate higher-level results than individual activities can achieve. (ADS 201.3.2.14.A)
Project designs should be derived from well-documented, rigorous analysis, including evaluations. During project design, Missions consider whether a project approach is the most appropriate means to advance a given result. USAID staff may use evaluations and post-evaluation action plans to identify whether linkages exist across activities in the prospective portfolio and whether a project approach could help facilitate those linkages and address a shared development problem.
Evaluation in Activity MEL Plans
Per ADS 201.3.4, an activity carries out an intervention, or set of interventions, typically through a contract, grant, or agreement with another U.S. Government agency or with the partner country government. An activity also may be an intervention undertaken directly by Mission staff that contributes to a project, such as a policy dialogue.
In the case of an awarded activity, implementers are expected to submit an Activity Monitoring, Evaluation and Learning (MEL) Plan to their Agreement Officer’s Representative/Contracting Officer’s Representative (AOR/COR) within the first 90 days of an award (ADS 201.3.4.10).
The Activity MEL Plan should include any plans for internal evaluations, to include the type of evaluation (performance or impact), possible evaluation questions, estimated budget, planned start date, and estimated completion date. The Activity MEL Plan should also include information for ensuring that any planned external or USAID-led evaluations will have access to appropriate data collected by the implementer, such as performance monitoring data.
In the case of a Government-to-Government (G2G) activity, an Activity MEL Plan should be created by the Government Agreement Technical Representative (GATR) or G2G point of contact (POC) in collaboration with the partner government. Generally, at the activity level, the GATR should emphasize the use of partner government M&E systems.
Evaluation in the Budget Cycle
Per ADS 201, USAID Operating Units (OUs) should devote, on average, approximately 1 to 3 percent of total program funding to external evaluation. This does not mean that every project or activity should be evaluated, or that 1 to 3 percent of the budget of every project, activity, or implementing mechanism should be set aside for evaluation. The actual costs of M&E may vary depending on the operating environment and the specific types of evaluations the OU plans to undertake.
On an annual basis, the Program Office should calculate a budget estimate for the external evaluations to be undertaken during the following fiscal year. This estimate does not include implementing partners’ internal M&E operations. This exercise is most likely best done during Operational Plan (OP) preparation; because many external evaluations will themselves be implementing mechanisms, much of the needed information will need to be prepared as part of the OP.
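For illustration only, the sketch below applies the 1 to 3 percent guideline to a hypothetical program budget; the figures and function names are assumptions for demonstration, not USAID-prescribed values.

```python
# Illustrative sketch of the 1-3 percent external evaluation budgeting guideline.
# All figures are hypothetical; actual estimates depend on the evaluations planned.

def external_evaluation_budget_range(total_program_funding, low_share=0.01, high_share=0.03):
    """Return the (low, high) dollar range implied by the 1-3 percent guideline."""
    return total_program_funding * low_share, total_program_funding * high_share

if __name__ == "__main__":
    program_budget = 50_000_000  # hypothetical OU program funding for a fiscal year
    low, high = external_evaluation_budget_range(program_budget)
    print(f"Indicative external evaluation range: ${low:,.0f} to ${high:,.0f}")
```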
Evaluation in Portfolio Reviews
A Portfolio Review is a periodic review of all aspects of a USAID Mission’s or Bureau/Independent Office’s (B/IO) Development Objectives, projects, and activities, often held prior to preparing the Performance Plan and Report. Missions must conduct at least one Portfolio Review per year that focuses on progress toward strategy-level results.
Per ADS 201, the strategic Portfolio Review should consider (1) what has been learned during evaluations (along with other sources of evidence) and (2) the status of post-evaluation action plans for evaluation findings and their use in respective decisions. After the Portfolio Review, the Mission should update the PMP as needed to reflect changes in the evaluation plan.
Additionally, the Portfolio Review during the final year of the CDCS must include a review of the cumulative achievements toward the DOs and Intermediate Results (IRs), with the results documented to support knowledge management.
3. Planning an Evaluation
This content is currently under revision to align with the recently updated ADS 201. For guidance and support associated with revisions to ADS 201, please see the Program Cycle overview page.
Section 2 noted the various stages of the Program Cycle where Missions or Washington OUs should formally consider evaluation needs and requirements, including Mission-wide evaluation planning. This section addresses the planning phase for an individual evaluation, from the decision to evaluate to the procurement of evaluation services.
Ideally, evaluation planning should start during the project or activity design stage. This will help ensure that a project or activity and its monitoring system are designed with the planned evaluation in mind. However, the decision to evaluate a strategy, project, or activity may occur at any time in the Program Cycle as new evaluation needs are recognized. In addition, evaluations should be timed so that their findings can inform decision making (for example, exercising option years, designing a follow-on activity, making mid-course corrections, creating a country or sector strategic plan, or making a policy decision). For a typical performance evaluation, this means the process to solicit an evaluation should begin at least 12–18 months in advance of a decision point.
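As a rough, purely illustrative aid to this back-planning, the sketch below computes the 12 to 18 month solicitation window from a hypothetical decision date; the dates and the 30-day month approximation are assumptions, not USAID guidance.

```python
# Rough sketch of back-planning an evaluation solicitation from a decision point,
# using the 12-18 month lead time noted above. Months are approximated as 30 days.
from datetime import date, timedelta

def solicitation_window(decision_date, min_months=12, max_months=18):
    """Return the approximate (earliest, latest) dates to begin soliciting the evaluation."""
    return (decision_date - timedelta(days=30 * max_months),
            decision_date - timedelta(days=30 * min_months))

earliest, latest = solicitation_window(date(2026, 10, 1))  # hypothetical decision point
print(f"Begin the solicitation process between {earliest} and {latest}")
```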
While early planning is beneficial for all evaluations, it is particularly important for impact evaluations. These studies parallel the life of a project or activity and sometimes require substantial modifications to the design of interventions (e.g. randomized assignment of treatment and control groups, modifications to selection criteria, modifications to roll-out timing, etc.). Understanding impact evaluation requirements at an early stage can help inform the drafting of implementing partner agreements in a way that builds implementer/evaluator cooperation and communicates how the evaluation will affect implementation.
In planning an individual evaluation, sufficient time should be allocated to:
- Draft a strong Statement of Work (SOW) that is peer reviewed prior to finalizing;
- Develop an Independent Government Cost Estimate (IGCE);
- Commission the evaluation to allow partners several weeks to prepare and respond;
- Review proposals and select a finalist;
- Award the contract;
- Conduct the evaluation using high-quality methods; and
- Review, reflect upon, and act on the evaluation findings, conclusions, and recommendations.
For Missions and Washington OUs with M&E Support contracts, the steps outlined here may be somewhat different because the contract to conduct the evaluation may have already been awarded.
Deciding to Evaluate
The decision to evaluate a strategy, project, or activity should be based on the decision-making needs of a Mission or Washington OU, the policy requirements for evaluation, learning needs, and practical considerations.
Evaluations are required in three instances (see ADS 201.3.6.5):
- Each OU or Mission with a CDCS, Regional Development Cooperation Strategy (RDCS), or other strategy must conduct at least one evaluation per Intermediate Result (IR) defined in the OU’s strategy. This evaluation can focus on any level within the IR: an intervention, an activity, a set of activities, or the IR as a whole.
- Each Mission and Washington OU must conduct an impact evaluation, if feasible, of any new, untested approach that is anticipated to be expanded in scale or scope through U.S. Government foreign assistance or other funding sources. (This evaluation may count as one of the evaluations required under Requirement 1.)
- OUs must conduct at least one evaluation per activity (contracts, orders, grants, and cooperative agreements) with a Total Estimated Cost/Total Estimated Amount (TEC/TEA) expected to be $20 million or more. (This evaluation may count as one of the evaluations required under Requirement 1.)
In these cases, decisions still need to be made about when and how the relevant strategies, projects, or activities will be evaluated.
Strategies/projects/activities that are not required to be evaluated may still be evaluated at any point in implementation for learning or management purposes. In this case, decisions need to be made about whether to evaluate, what type of evaluation (performance or impact) to conduct, what type of evaluation team (internal or external) would be appropriate, and when the evaluation should be conducted. In the case of potentially large, expensive, or lengthy evaluations (particularly impact evaluations), an evaluability assessment may be a worthwhile investment prior to planning an evaluation.
Engaging with Stakeholders
Collaboration is a principle that is integral to all stages of the USAID Program Cycle, including evaluation. For an evaluation to successfully contribute to USAID development results, the Evaluation Point of Contact (POC) and other USAID staff involved in the evaluation must productively engage and collaborate with key stakeholders in the evaluation—USAID staff across various offices, implementing partners, host country officials, beneficiaries, etc. These stakeholders may contribute to the planning and implementation of the evaluation, serve as primary or secondary audiences for evaluation products, and/or serve as critical actors in ensuring that evaluation evidence is utilized effectively.
Consequently, it is prudent to start identifying and engaging with key stakeholders as early as possible in the evaluation process. A stakeholder analysis is one simple way to start the process of identifying stakeholders and determining how to best collaborate with them throughout the evaluation. At a minimum, Project Managers and AOR/CORs are responsible for ensuring that implementing partners (IPs) of the activity or project that will be evaluated are aware of any planned evaluations and the steps IPs need to take to ensure a successful evaluation.
Similarly, early dissemination planning for evaluation is critical. According to ADS 201.3.6.10: “OUs also should distribute evaluation results widely, to both internal and external stakeholders” and “Missions and OUs should update and follow the Evaluation Dissemination Plan developed during the evaluation planning stage...” Evaluations of all types will include a dissemination plan. Such dissemination plans can help ensure that appropriate evaluation products are planned and developed to meet stakeholder needs and fulfill USAID’s commitment to transparency, accountability, and learning.
Determining Evaluation Purpose and Evaluation Questions
As early as possible in the evaluation planning phase, the Mission or Washington Operating Unit needs to consider the purpose and audience of the evaluation and the key questions that the evaluation will address. Ideally, the evaluation purpose and questions will be developed during the design of the project or activity to be evaluated. The process of developing the evaluation questions may even inform the decision to evaluate or not.
Evaluation questions are typically developed by the Development Objective team or Technical Office managing the strategy, project, or activity being evaluated in coordination with the Program Office, which will manage the evaluation in most cases. However, evaluation questions may be based on input from Washington offices, Mission leadership, or other stakeholders, such as implementing partners and host governments. Adequate consultation is essential when defining the evaluation purpose and evaluation questions to ensure that evaluation findings will be credible, relevant, and actionable for decision-makers.
Additional Links
- Webinar: Developing Good Evaluation Questions.
- Resource: Evidence-based practical guidance and templates on how to successfully design and implement a Developmental Evaluation for funders of DEs.
Developing an Evaluation SOW
This content is currently under revision to align with the recently updated ADS 201. For guidance and support associated with revisions to ADS 201, please see the Program Cycle overview page.
The development of an Evaluation Statement of Work (SOW) is one of the most significant steps in the evaluation planning process. The SOW communicates to the evaluation team why the evaluation is needed, how it will be used, and which evaluation questions managers need answered. Before finalizing the Evaluation SOW, the Mission or Washington Operating Unit Program Office will organize an in-house peer technical review of the Evaluation SOW that includes no fewer than two individuals in addition to the Program Office Evaluation POC (or designee). Relevant and non-procurement-sensitive parts of the SOW may also be shared with external stakeholders as needed and appropriate. The Program Office is responsible for ensuring that the SOW is compliant with ADS 201mab, USAID Evaluation Statement of Work Requirements. Most of the guidance in this section assumes that USAID is the author of the SOW for a competitively procured external performance evaluation. However, SOWs may differ for internal evaluations, for evaluations planned within an existing evaluation contract, or for more complicated impact evaluations.
Additional Links
- Webinar: Good Practices for Peer Reviews of Evaluation SOWs.
- Tool: Assessing the Quality of Education Evaluations.
- Resource: A Commissioner's Guide to Probability Sampling for Surveys at USAID
- Resource: Evidence-based practical guidance and templates on how to successfully design and implement a Developmental Evaluation for funders of DEs.
Developing an Evaluation IGCE
This content is currently under revision to align with the recently updated ADS 201. For guidance and support associated with revisions to ADS 201, please see the Program Cycle overview page.
The Evaluation IGCE should be developed concurrently with the Evaluation SOW and should be available to those involved in the peer review of the SOW. The IGCE for an evaluation should follow directly from the information included in the SOW. For instance, the number and complexity of questions, along with the proposed data collection and analysis methods in the methodology section and the team composition requirements in the Evaluation SOW should all be reflected in the IGCE.
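As a hypothetical illustration of how SOW elements (level of effort, team composition, data collection) can flow into an IGCE, the sketch below totals labor and other direct costs; the roles, rates, and cost categories are invented for demonstration and are not USAID rates or a prescribed format.

```python
# Hypothetical IGCE build-up: roles, rates, and cost categories are illustrative only.
from dataclasses import dataclass

@dataclass
class LaborLine:
    role: str
    days: int            # level of effort implied by the SOW questions and methods
    daily_rate: float    # fully loaded daily rate (assumed)

def igce_total(labor_lines, other_direct_costs):
    """Sum labor and other direct costs into a single estimate."""
    labor_total = sum(line.days * line.daily_rate for line in labor_lines)
    return labor_total + sum(other_direct_costs.values())

labor = [
    LaborLine("Team Leader", days=60, daily_rate=800.0),
    LaborLine("Evaluation Specialist", days=55, daily_rate=600.0),
    LaborLine("Local Data Collection Supervisor", days=40, daily_rate=350.0),
]
other_direct_costs = {
    "international travel": 12_000,
    "in-country data collection": 45_000,
    "translation and transcription": 5_000,
}
print(f"Illustrative IGCE total: ${igce_total(labor, other_direct_costs):,.0f}")
```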
Additional Links
- Guidance and Tool: ADS 300maa Independent Government Cost Estimate (IGCE) Guide and Template.
- Guidance and Tool: Independent Government Cost Estimate (IGCE) Guide and Template in Excel. USAID Only
Commissioning an Evaluation
This content is currently under revision to align with the recently updated ADS 201. For guidance and support associated with revisions to ADS 201, please see the Program Cycle overview page.
Part of the value-added function of the Program Office is to suggest potential implementing mechanisms for carrying out the evaluation. There are numerous options for procuring evaluation services, including field and Washington mechanisms supported by a variety of USAID offices. Missions should also consider the use of local evaluation contractors. If USAID staff is expected to participate on the evaluation team, that expectation should be acknowledged in the solicitation.
Although procurement typically occurs at the end of the planning phase of the evaluation process, the Evaluation POC (or designee) should work with their contracting officers to consider possible mechanisms early in the planning process, since the choice of mechanism can have implications for the budget and timing of the evaluation.
Special consideration should be given to the timing of contracts for impact evaluations. In cases where impact evaluations are undertaken, it is good practice to establish a parallel award at the inception of the intervention to accompany implementation. If possible, the evaluation team should be in place before implementation starts in order to conduct the baseline and provide guidance related to the selection of treatment and comparison/control groups.
Additional Links
- Reference: ADS 300: Agency Acquisition and Assistance (A&A) Planning.
- Reference: Monitoring and Evaluation Platforms: Considerations for Design and Implementation Based on a Survey of Current Practices (Sept 2013).
- Reference: Monitoring, Evaluation and Learning Platforms Assessment Report (2017)
- Tool: Interactive Map of VOPEs (Voluntary Organizations of Professional Evaluators).
- Reference: The Ideal Prospective Impact Evaluation Timeline.
- Resource: A Commissioner's Guide to Probability Sampling for Surveys at USAID
- Resource: Evidence-based practical guidance and templates on how to successfully design and implement a Developmental Evaluation for funders of DEs.
- M&E Mechanisms (Field and Washington) USAID Only
4. Managing an Evaluation
This section addresses the management phase of an individual evaluation, from the period following the award of an evaluation contract to the submission of the final report. Following the award of an evaluation contract, the COR (for external evaluations) or the Evaluation Manager (for internal evaluations) serves as the main communication link between USAID and the evaluation team. In most cases, the evaluation will be managed by the Program Office (i.e., Evaluation COR/Manager is a Program Office staff member). The Evaluation COR/Manager will ensure that:
- The evaluation team’s final evaluation design meets the Agency’s needs;
- The evaluation team has access to the necessary information (e.g. project or activity reports, performance monitoring data, key contact information, etc.);
- The evaluation team is proceeding with the evaluation as envisioned;
- Coordination between evaluators and implementers is smooth; and
- A final report is reviewed, approved, and disseminated.
From Award to Approval
In the period following the award of the evaluation contract, but prior to data collection, an evaluation design is the one deliverable required by USAID policy. Other deliverables may also be due during this period, depending on what was requested in the Evaluation Statement of Work (SOW).
An evaluation design describes and documents how the data collection and analysis methods will be used to produce credible evidence for answering all of the evaluation questions within the time and budget constraints. Clear articulation of the evaluation design assists USAID and other stakeholders in discussing these choices with the evaluation team, but the level of detail in an evaluation design may vary depending on the complexity of the evaluation, overall level of effort, and other factors. An Evaluation Design Matrix is a standard tool for outlining the components of an evaluation design and is highly recommended for use by evaluation teams.
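The sketch below shows one hypothetical way an Evaluation Design Matrix row might be structured; the column names and example content are assumptions for illustration and are not a format prescribed by ADS 201.

```python
# Hypothetical representation of Evaluation Design Matrix rows; column names and
# example content are illustrative, not a prescribed USAID format.
design_matrix = [
    {
        "evaluation_question": "To what extent did the activity improve service delivery?",
        "data_sources": ["performance monitoring data", "key informants", "activity reports"],
        "data_collection_methods": ["document review", "semi-structured interviews"],
        "sampling_or_selection": "purposive sample of 4 of 8 districts",
        "analysis_methods": ["trend analysis", "thematic coding"],
        "limitations": "recall bias; districts are not statistically representative",
    },
]

for row in design_matrix:
    print(f"{row['evaluation_question']} -> methods: {', '.join(row['data_collection_methods'])}")
```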
Per ADS 201.3.6.8, “Except in unusual circumstances, the key elements of the design [of the evaluation] must be shared with implementing partners of the projects or activities addressed in the evaluation and with related funders before being finalized.”
Additional products that may be required during this period typically include:
- A workplan that describes the schedule, activities, and milestones of the evaluation team;
- An inception report or background report that addresses what the evaluation team has learned based on program documents provided to them;
- An in-brief or series of in-briefs, either in person or virtual; and
- Other possible deliverables, such as an evaluability assessment.
During this design period, the Evaluation COR/Manager should consider the possibility of revising evaluation questions based on evaluation team input. Any revisions to the questions in the SOW should be documented in writing in the evaluation report. The Evaluation COR/Manager should also consider if the design complies with ethical standards for protection of human subjects.
Conducting an Evaluation
USAID staff members typically manage evaluations on behalf of USAID, while the design and implementation of an evaluation is typically the responsibility of an externally contracted evaluation team. USAID staff may participate in these external evaluations, provided that the team leader is an externally contracted evaluator with no fiduciary relationship with the implementing partner. In addition, USAID staff may lead and/or participate in internal evaluations.
Whether leading an evaluation, managing an evaluation, participating on an evaluation team, or just reviewing an evaluation report, it is beneficial to be familiar with the typical designs and methodologies of USAID evaluations. The field of program evaluation is quite diverse, and numerous books, journals, and websites are dedicated to describing the various approaches, models, designs, methods, techniques, and practices in conducting program evaluations. This section provides some limited guidance on designs and methods for conducting an evaluation for USAID.
ADS 201 emphasizes high-quality evaluation methods. It notes:
“Evaluations will use methods that generate the highest quality and most credible evidence that corresponds to the questions being asked, taking into consideration time, budget, and other practical considerations. A combination of qualitative and quantitative methods applied in a systematic and structured way yields valuable findings and is often optimal regardless of evaluation design. Impact evaluations must use experimental methods (randomization) or quasi-experimental methods, and may supplement these with other qualitative or quantitative methods to increase understanding of how or why an intervention achieved or did not achieve an expected impact." (ADS 201.3.6.7)
"A single evaluation can be designed to use a variety of methods and to address a variety of purposes. In cases in which an evaluation uses methods that meet the definition of an impact evaluation, but also uses other methods to address questions more commonly addressed in performance evaluations, the evaluation will be classified as an impact evaluation.
The selection of method or methods for a particular evaluation should consider the appropriateness of the evaluation’s design for answering the evaluation questions and the availability and accessibility of primary and/or secondary data, as well as balance cost, feasibility, and the level of rigor needed to inform specific decisions.” (ADS 201.3.6.4)
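As a generic illustration of the experimental methods referenced in the ADS excerpt above, the sketch below performs a simple randomized assignment of eligible units to treatment and control groups; the unit names and treatment share are hypothetical, and real impact evaluation designs involve additional considerations such as stratification, clustering, and power calculations.

```python
# Generic sketch of simple randomized assignment of eligible units (for example,
# communities) to treatment and control groups; unit names are hypothetical.
import random

def randomize(units, treatment_share=0.5, seed=2024):
    """Shuffle units reproducibly and assign each to 'treatment' or 'control'."""
    rng = random.Random(seed)        # fixed seed so the assignment can be reproduced
    shuffled = list(units)
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * treatment_share)
    return {unit: ("treatment" if i < cutoff else "control")
            for i, unit in enumerate(shuffled)}

eligible_communities = [f"community_{i:02d}" for i in range(1, 21)]  # 20 hypothetical units
assignment = randomize(eligible_communities)
print(sum(group == "treatment" for group in assignment.values()), "units assigned to treatment")
```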
Additional Links
- Resource: The Road to Results: Designing and Conducting Effective Development Evaluations.
- Resource: Impact Evaluation in Practice.
- Resource: A Commissioner's Guide to Probability Sampling for Surveys at USAID
- Resource: Real World Evaluation: Working Under Budget, Time, Data, and Political Constraints: A Condensed Summary Overview.
- Resource: Evidence-based practical guidance and templates on how to successfully design and implement a Developmental Evaluation for Evaluators and firms that manage DEs.
Managing and Monitoring an Evaluation Team
The responsibilities of an Evaluation COR/Manager for an evaluation contract or task order are technically no different than the responsibilities of a COR for any other implementing mechanism. In practice, though, managing the implementation of an evaluation differs in significant ways from managing the implementation of other development activities.
Evaluations typically have much more compressed timelines compared to other activities, requiring quick responses and adaptations when problems arise.
Evaluations also rely heavily on the cooperation of other USAID partners. The Evaluation COR/Manager must help mediate and manage the relationship between the evaluation team and the implementing partner being evaluated. This relationship can place a considerable burden on the implementing partner as they assist the evaluation team in obtaining documents, participating in interviews, and facilitating access to beneficiaries. This is particularly true for experimental designs in impact evaluations, which require that the implementing partner adhere to pre-specified treatment and control groups.
Finally, managing an external evaluation team requires the Evaluation COR/Manager to carefully balance USAID’s involvement to ensure a high-quality, useful, and on-time product with the need to protect the independence of the evaluators and the evaluation report.
From Draft to Final Report
The requirements for evaluation report structure and content are detailed in the mandatory references for ADS 201:
- ADS 201maa, Criteria to Ensure the Quality of the Evaluation Report
- ADS 201mah, USAID Evaluation Report Requirements
Evaluation reports should represent a thoughtful, well-researched, and well-organized effort to objectively evaluate the subject of the evaluation (e.g., strategy, project, activity) and should be readily understood, identifying key points clearly, distinctly and succinctly.
Evaluation findings should be presented as analyzed facts, evidence, and data and not based on anecdotes, hearsay, or simply the compilation of people's opinions. Conclusions should clearly be based on the evaluation findings.
Evaluation methodology should be explained in detail and sources of information properly identified. Limitations to the evaluation should be adequately disclosed in the report, with particular attention to the limitations associated with the evaluation methodology (selection bias, recall bias, unobservable differences between comparator groups, etc.).
To ensure a high-quality evaluation report, the draft report must undergo a peer review organized by the office that is managing the evaluation. The OU should review the evaluation report against ADS 201maa, Criteria to Ensure the Quality of the Evaluation Report. OUs may also involve peers from relevant Regional and/or Technical Bureaus in the review process as appropriate (see ADS 201sai, Managing the Peer Review of a Draft Evaluation Report).
The draft report must be shared with implementing partners whose projects or activities are examined in the evaluation and with other organizations that contributed funding to the evaluation or to the project/activity being evaluated. Funders, implementers, and members of the evaluation team must be given the opportunity to write a “statement of differences” addressing any unresolved differences of opinion, to be appended to the final evaluation report.
Additional Links
- Guidance: USAID Graphic Standards Manual.
- Tool: Checklist for reviewing a randomized controlled trial.
- Tool: Assessing the Quality of Education Evaluations.
5. Sharing, Using, and Learning
Completing an evaluation report is not the end of the evaluation process. As the evaluation report moves toward completion, the Mission or Washington OU that commissioned the evaluation enters into the key phase of sharing, reporting, using, and learning from the evaluation.
Sharing: Transparency is a key practice of evaluation at USAID. As noted in ADS 201.3.6.2, “Evaluation must be transparent in the planning, implementation, and reporting phases to enable accountability,” and “USAID commits to full and active disclosure and will share findings from evaluations as widely as possible.” At a minimum, this requires posting evaluation reports to the Development Experience Clearinghouse (DEC) and evaluation data to the Development Data Library (DDL).
Reporting: ADS 201.3.6.10 states that “Missions and OUs should update and follow the Evaluation Dissemination Plan developed during the evaluation planning stage and consider dissemination channels in addition to posting the evaluation report and data, such as slide decks, videos, infographics, visualizations, podcasts, or other means of sharing the evaluation findings.”
Using and Learning: To help ensure that institutional learning takes place and evaluation findings are used to improve development outcomes, Missions and Washington OUs must develop a Post-Evaluation Action Plan upon completion of an evaluation, with a designated point of contact who will be responsible for overseeing the implementation of the action plan. OUs must review the status of actions across Post-Evaluation Action Plans during Mission portfolio reviews and document when actions are complete (ADS 201.3.6.10).
Sharing
Missions and Washington Operating Units (OU) should share and openly discuss evaluation findings, conclusions, and recommendations with relevant partners, donors, and stakeholders, unless there are unusual and compelling reasons not to do so.
Missions and Washington OUs should revisit their evaluation stakeholder analysis and dissemination plan toward the conclusion of an evaluation to ensure that it still reflects the priorities for dissemination. While sharing the evaluation report is the most typical form of dissemination, Missions and Washington OUs should also consider other methods of dissemination, such as hosting briefings with local stakeholders, partners, and other donors to discuss evaluation findings; featuring evaluation findings on their website, such as through articles or blog posts; and holding press conferences and issuing press releases.
In many cases, USAID Missions should arrange the translation of the executive summary into the local written language.
Two forms of evaluation report sharing are required:
- The Program Office must ensure that the final evaluation report is posted on the Development Experience Clearinghouse (DEC) no later than three months after completion. Exceptions to this requirement are granted only in very rare circumstances (see Guidance on Exemptions to Public Disclosure of USAID-funded Evaluations).
- In addition to posting the evaluation report to the DEC, Program Offices must post quantitative data from the evaluation to the Development Data Library (DDL).
Evaluation Registry
The annual Performance Plan and Report (PPR) documents USG foreign assistance results achieved over the past fiscal year and sets targets on designated performance indicators for the next two fiscal years. The PPR also includes the Evaluation Registry as a sub-module for documenting each Mission and Washington Operating Unit’s work on evaluation.
The Evaluation Registry includes information on evaluations completed in the most recent fiscal year, evaluations that are currently ongoing, and evaluations planned for the current year and two additional out-years. This includes required, non-required, external, and internal evaluations. As of FY2013, data from the Evaluation Registry are also used to calculate the targets and actuals for the USAID Forward Evaluation Indicator. Evaluation status and budget data from the Evaluation Registry are critical to helping USAID understand the number of evaluations completed across the Agency, the totality of budget resources being devoted to evaluation, and trends across fiscal years. These data also help USAID demonstrate to external stakeholders, such as the White House’s Office of Management and Budget, the priority that USAID places on evaluation.
Additional Links
- Reference: Annual Performance Plan and Report Guidance. USAID only.
Utilization and Learning
The value of an evaluation is in its use. Evaluations should inform decision-making, contribute to learning, and help improve the quality of development programs. At a minimum, USAID Program Offices should lead Missions and Washington Operating Units through the process of:
- Reviewing findings, conclusions, and recommendations of evaluations that relate to their activities, projects, and DOs;
- Identifying any management or program actions needed; and
- Assigning responsibility and timelines for completion for each set of actions.
While learning and utilization are most often considered at the conclusion of the evaluation process, they can happen at various phases of the evaluation process and stages of the Program Cycle. Evaluation use and learning may occur before or during the evaluation, shortly after it is completed, or long after the findings have been presented. It may occur during the development of a CDCS, the design of a project or activity, or a portfolio review. Utilization and learning should be planned for and actively facilitated whenever they occur.
Assessing the Evaluation Process and Evaluator Performance
Following the completion of the evaluation, the Evaluation COR/Manager and others involved in the evaluation should consider not just the content of the evaluation report, but what they have learned from the entire evaluation process that might help in conducting the next evaluation. An After-Action Review (AAR) is one formal means of capturing such lessons.
If the evaluation was contracted, the Evaluation COR should, when applicable, access the Contractor Performance Assessment Reporting System (CPARS) to file a Contractor Performance Assessment Report within 60 days of the completed evaluation. CORs completing an assessment report should ensure that the report contains an accurate portrayal of the contractor’s performance. Contractors utilize the completed past performance reports when responding to solicitations. These reports are also used by Contract Officers and CORs when assessing the past performance of contractors and to incentivize contractors to produce superior products and services.
Additional Links
- Guidance: After Action Reviews (AAR)
- Guidance: User Manual for Contractor Performance Assessment Reporting System (CPARS).