Why We Should Stop Asking “What Works in Democracy Assistance”
As the United States and other donors devote resources to democracy assistance in post-conflict countries, an obvious concern is ensuring that this money is spent on effective programs that genuinely contribute to democratization. In this quest to optimize investments and impact, donors and implementers alike have been asking important questions about how to gather and use evidence to design better interventions. It was with this goal in mind that the International Foundation for Electoral Systems (IFES) conducted a reflection exercise on 25 years of programmatic history. Supported by USAID, this exercise entailed a thorough analysis of program reports and interviews with more than one hundred former implementers and partners involved in projects in 18 conflict-affected countries. The products of this effort are two reports that 1) document common challenges in program implementation and strategies to overcome them and 2) provide recommendations to increase the success of interventions and their likelihood of yielding sustainable results.
Reflecting on past programs and analyzing democracy assistance interventions more systematically did lead to important insights and a series of approaches to increase effectiveness. The problem, however, is that answers to the question "what works?" are rarely simple. Because each country presents its own multitude of contextual variables, judging the merit of interventions by their design and scope alone is simply not possible. Similar capacity-building work with election management bodies or voter engagement campaigns with civil society could work well in one country, for example, leading to more legitimate elections and engaged voters, but fail to achieve these goals in another.
What we found instead was that there were several factors moderating – influencing for better or worse – the impact of interventions. We share some of the most relevant of these factors below:
Level of security and government repression – In some countries analyzed during this project, partners recognized that, although planned activities were important and resources were available to implement them, target audiences often felt intimidated by general insecurity and the risk of potential retaliation by powerful local actors. As a former partner in the Democratic Republic of Congo shared, “certain regions were so insecure that civic educators could not access them. Participants also refused to sign attendance lists, fearing they could be used by armed groups to target and retaliate against them.”
Persistent social cleavages – Democracy assistance implementers often aim to create local coalitions of partners to foster better communication and set the stage for long-term collaboration. While this strategy can be the basis for sustainable local working groups, in some cases, existing rivalries and competition are too strong to overcome, and interventions that require this immediate collaboration might not work as intended. As an interviewee stated, “we can’t just go and start implementing a democracy and governance program neglecting that that country has a history, that people have a history — and that they might not want to work together because of that history.”
Political will – As another interviewee explained, “we might have beautifully designed programs fail and programs that are just mediocre on paper succeed because there was political will to implement them.” Even if a program is perfectly designed to match a specific context, its success will invariably depend on local buy-in. And obtaining this buy-in is not always easy, as implementers can be perceived as outsiders, untrustworthy, and self-interested, especially when they arrive at a critical juncture for the future of the country. Moreover, in some contexts, local actors might simply not see the value of the assistance, refusing to take on more work or responsibility if they foresee no gains from the extra efforts.
Management and administrative capacity of partners – Although political will is necessary, it is not enough to guarantee that local partners will be able to optimize assistance and turn it into sustainable local gains. In our sample, capacity-strengthening programs with a good record in certain countries could not yield the same results in others because partners in the latter lacked the internal capacity, resources, or bandwidth to take on the role donors and implementers had envisioned for them. In these cases especially, the ultimate success of the investment can hinge on allowing extra time for basic training so that partners are equipped to roll out more complex activities.
The main lesson learned here is that, although more efforts to better understand the weaknesses and strengths of programs are very much needed, we should remain wary of categorical verdicts on specific activities. Such verdicts risk turning a successful intervention from one context into a failure in another, or prematurely dismissing a potentially effective activity because of its poor record in unfavorable environments.
Instead of asking ourselves which interventions work, we should be asking in which circumstances they work, and how we can better design programs to respond to adverse environments.
This project was funded by the United States Agency for International Development (USAID) through the Consortium for Elections and Political Process Strengthening (CEPPS). The analysis covers democracy and governance projects implemented by CEPPS from 1995 to 2019 in 18 post-conflict countries. To access the full report, please visit the IFES website: New IFES Report on Democracy and Governance in Post-conflict Countries.