A strong evaluation methodology isn't enough. In a complex world of multiple stakeholders, clear lines of communication between evaluation teams, field officers, home offices, and, most importantly, USAID Missions can make the difference between success and failure. With so many stakeholders, lines of communication can easily get crossed, jeopardizing the evaluation's outcome. Furthermore, in the era of COVID-19, stakeholders cannot gather in the same room to build consensus and share ideas. Given these challenges, there are two pitfalls an evaluation team might fall into, both of which are amplified when teams work predominantly online: failing to establish clear operational implementation practices and failing to plan quality assurance checks.
Although each evaluation is different, most operate on a similar timeline. These similarities allow a firm to establish standing operational implementation practices: for example, templates for common deliverables such as the work plan, inception report, and final evaluation report. Likewise, creating outlines for commonly used tools such as questionnaires and consent scripts means that evaluation teams do not need to reinvent the wheel with every evaluation. These practices are particularly useful during COVID-19, when evaluation teams cannot gather in the same room to make strategic plans. When teams and their strategic partners could meet in person, it was easy to talk through operational implementation practices with the entire team and plan for the future; now that communication is predominantly online, those practices must be clearly established ahead of time. This preplanning removes the first pitfall, failing to establish clear operational implementation practices, and instead sets the evaluation up for success.
Similarly, it is important to pre-plan quality assurance checks. Because evaluations involve so many stakeholders, several layers of quality assurance should be baked into each one. Establishing checklists and rubrics for each stage of the evaluation, and distributing them ahead of time, ensures that all parties understand how deliverables will be assessed before submission to the client.
Depending on the phase of the evaluation, these checklists should be reviewed by different layers of stakeholders. For example, internal deliverables should be reviewed and assessed by the evaluation team lead and the field office counterpart, while Mission deliverables mandated in the scope of work should receive an additional layer of review from the home office. These practices are particularly important during COVID-19: previously, stakeholders worked side by side in the field, which allowed for on-the-fly quality assurance, but that is not currently possible. Once again, pre-planning how deliverables will be reviewed and when stakeholders should provide quality assurance eliminates the second pitfall, failing to plan quality assurance checks, and sets the evaluation team up for success.
To avoid these pitfalls and ensure quality evaluation reports, DevTech has implemented practices to update, streamline, and clarify communications among all the strategic partners and stakeholders taking part in an evaluation. By mapping out the evaluation process ahead of time and pre-designating check-in meetings and quality assurance checks throughout, evaluation teams can compensate for the loss of in-person communication and position themselves to develop a high-quality evaluation report. Below is a sample evaluation process timeline with operational implementation practices (templates and tools) and quality assurance practices (rubrics and check-ins) built in.
Evaluation Process
Source: Author
Systems like these help DevTech Systems, Inc. consistently produce high-quality evaluations and demonstrate our commitment to using collaborative learning and adaptive practices to improve and evolve the evaluation process.