PDIA: One can find or build political support (Part 2/4)

written by Matt Andrews

This post addresses the second of the four common excuses I hear about why PDIA cannot be done in development. If you are interested, you can read the first one.

Excuse 2:  International development experts often tell me that PDIA is not possible because politicians will never support it.

Again, simply not true. It is true that many politicians look for big projects that promise large things. This is what I call signaling in my book, and it is a major constraint in many countries. I think it is facilitated by donors who offer large loans in response to big promises of best practice, which often leads to a ‘what you see is not what you get’ situation. But my research shows that there are reforms that yield functional improvements in government. And studies of these reforms suggest that politicians can also welcome and support PDIA-type processes that go beyond signaling. Where a locally felt problem exists, it is clear to me that politicians are often very interested in processes that promise real solutions. And many politicians are aware that these solutions need to emerge gradually so that they are properly authorized and capacitated.

So a PDIA approach of active and iterative engagement is not foreign or unwelcome in such situations. Indeed, I see many politicians creating a holding environment for such engagement. These politicians value the tight feedback loops and the rapid opportunities to learn and build capacity in their organizations. They also like the quick wins, especially when these wins feed into broader narratives about solving problems and promoting development. Indeed, tight iteration may overcome the time inconsistency problem we often see in reform (where politicians need results sooner than a large multi-year project can deliver). Read the Burundi post for an example.

[Image: PDIA excuse 2 sketch]

PDIA: International organizations have flexible instruments (Part 1/4)

written by Matt Andrews

Almost every time I give a presentation on PDIA (and I have given many), I hear excuses about why PDIA cannot be done in development. So, I’ve decided to set the record straight. I am writing a blog post and drawing a picture for each of the four most common excuses I hear. This is the first one.

Excuse 1: International development experts often tell me that they cannot do PDIA because they have to produce projects and project processes don’t allow the flexibility implied in PDIA.

This is simply not true. Every development agency I know of has traditional project mechanisms that are rigid and foster a disciplined process, BUT every development agency also has instruments that allow experimentation and flexibility. The names of these instruments differ, but common tools have (over time) included trust funds, learning and innovation loans, adaptable projects, and even some results-based loans.

So, development experts can find tools for flexible problem identification and active project design and implementation IF THEY KNOW THESE ALTERNATIVES EXIST AND MAKE THE EFFORT TO USE THEM. If they choose not to use these alternatives because they are risky, or hard, or different, that is one thing. But experts should stop saying that these alternatives do not exist. If you want an example, read the PDIA in Cameroon blog post.

[Image: PDIA excuse 1 sketch]

The Studley Tool Chest

In the spirit of Thanksgiving, I wanted to share the story of the image we chose for this blog – the Studley Tool Chest.

Designed by piano maker Henry O. Studley (1838-1925), this toolbox is about 40 inches by 20 inches when closed and holds approximately 300 tools. Apparently, it is so heavy that it takes three strong people to put it up on the wall. Studley developed and added new tools over a period of 30 years, adapting the chest so that every tool fit snugly in its space. The craftsmanship is extraordinary, and the chest remains in a class of its own. It has been exhibited at the Smithsonian National Museum of American History. Here’s a video if you are interested in seeing it.

[Image: The Studley Tool Chest]

What’s in a counterfactual?

written by Salimah Samji

I am amazed by people’s obsession with the counterfactual, and by the belief that evidence cannot exist without one. Why are people so enamored with the idea of ‘the solution’ even though we have learned time and time again that there is no one-size-fits-all?

Is the existence of a counterfactual really a sufficient condition for good evidence? Why don’t people ask questions about the design and implementation of the evaluation itself? Specifically:

  • What are you measuring and what is the nature of your context: Where in the design space are you? Is your fitness landscape smooth or rugged? Eppstein et al., in Searching the Clinical Fitness Landscape, test two approaches (multicenter randomized controlled trials vs. quality improvement collaboratives, where you work with others, learn from collective experience, and customize based on local context) to identify which leads to healthcare improvements. They find that the quality improvement collaboratives are most effective in the complex socio-technical environments of healthcare institutions. Basically, the moment you introduce any complexity (increased interactions between variables), experiential methods trump experimental ones (a toy sketch of this intuition follows the list below).
  • Who is collecting your data and how: Collecting data is a tedious task, and the incentive to fill out surveys without actually going to the village is high, especially if no one is watching. Then there are questions of what you ask, where you ask, how you ask, what time period you ask about, how long the questionnaire is, etc.
  • How is the data entered and verified: Do you do random checks? Double data entry?
  • Is the data publicly available for scrutiny?

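To make the smooth-versus-rugged intuition a bit more concrete, here is a minimal, purely illustrative Python sketch. It is not the model used by Eppstein et al.; the NK-style landscape, the parameter values, and every function name are invented for illustration. With no interactions between design choices there is a single peak, so one answer found anywhere works everywhere; once choices interact, many local peaks appear, and where you start and how you iterate begins to matter.

```python
# Toy illustration only (not the Eppstein et al. model): an NK-style fitness
# landscape over binary "program designs". k = 0 means design choices do not
# interact (smooth, one peak); k > 0 adds interactions (rugged, many peaks).
# All names and parameter values are invented for this sketch.
import itertools
import random

random.seed(0)
N = 8  # number of binary design choices in a "program design"

def make_landscape(k):
    """Fitness over N-bit designs; each choice interacts with k neighbours."""
    tables = [
        {bits: random.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(N)
    ]
    def fitness(design):
        return sum(
            tables[i][tuple(design[(i + j) % N] for j in range(k + 1))]
            for i in range(N)
        ) / N
    return fitness

def neighbours(design):
    """All designs reachable by changing one choice."""
    for i in range(len(design)):
        yield design[:i] + (1 - design[i],) + design[i + 1:]

def count_local_peaks(fitness):
    """Designs that no single change can improve (local optima)."""
    all_designs = list(itertools.product((0, 1), repeat=N))
    return sum(
        all(fitness(nb) <= fitness(d) for nb in neighbours(d))
        for d in all_designs
    )

def hill_climb(fitness, steps=200):
    """Iterative adaptation: try small changes, keep those that help."""
    current = tuple(random.randint(0, 1) for _ in range(N))
    for _ in range(steps):
        trial = random.choice(list(neighbours(current)))
        if fitness(trial) >= fitness(current):
            current = trial
    return fitness(current)

for k in (0, 4):  # smooth vs. rugged
    fit = make_landscape(k)
    best = max(fit(d) for d in itertools.product((0, 1), repeat=N))
    print(f"k={k}: local peaks = {count_local_peaks(fit)}, "
          f"best possible = {best:.2f}, one hill-climb run = {hill_climb(fit):.2f}")
```

The point is not the specific numbers but the structure: interactions between choices multiply local peaks, which is one way of seeing why a single answer identified elsewhere may not travel, and why iterative, context-specific search earns its keep.
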
And then there is the external validity problem. Counterfactual or not, it is crucial to adapt development interventions to local contextual realities, where high-quality implementation is paramount to success. Bold et al., in Scaling Up What Works: Experimental Evidence on External Validity in Kenyan Education, find that while NGO implementation of contract teachers in Kenya produced a positive effect on test scores, government implementation of the same program yielded zero effect. They cite implementation constraints and the political economy forces at play as reasons for the stark difference. In a paper entitled Using Case Studies to Explore the External Validity of ‘Complex’ Development Interventions, Michael Woolcock argues for deploying case studies to better identify the conditions under which diverse outcomes are observed, with a focus on contextual idiosyncrasies, implementation capabilities, and trajectories of change.

To top it off, today’s graduate students in economics don’t read Hirschman (some have never heard of him!) … should we be worried?

PDIA in Cameroon

written by Salimah Samji

In a recent paper entitled Behavioral Economics and Public Sector Reform: An Accidental Experiment and Lessons from Cameroon, Gael Raballand and Anand Rajaram compare two World Bank projects in Cameroon: a $15 million, five-year Transparency and Accountability Capacity Development Project (TACD), and a $300,000, low-profile technical assistance project to improve performance in Cameroon customs.

The TACD story is all too familiar. It became effective one year later than expected, had disbursed only 10% after two years of implementation, and, despite high-level management attention and discussions with the country leadership, little changed. A mutual decision was taken to close the TACD one year before its closing date, citing poor coordination, weak organizational skills, systems in need of upgrading, and a lack of political commitment among the reasons.

The second is a ‘pockets of effectiveness’ story. The Director General of Customs requested assistance from the World Bank to help improve performance management. The project was funded by a grant and was limited to knowledge transfer and technical assistance. To design the project, the team used the Bank’s in-house expertise coupled with that of a customs officer who had institutional knowledge of customs issues and understood the context, having previously been an adviser to Cameroon. The pilot began with performance contracts in two offices for six months and focused on non-financial incentives. For good performance, congratulatory letters were placed in officers’ files and publicly disseminated for wider recognition; for bad behavior, there were team interviews, warnings, possible disciplinary action, and even removal from the position. In less than two months the clearance process was much faster, the attitude of the customs officers improved, and revenues increased. An additional $16.5 million in revenue was collected over the six months.

The second story is also a great example of PDIA principles in action.

  • Problem Driven: The problem was identified and nominated by the country. The focus was on solving problems as opposed to retro-fitting solutions. This also helps build ownership.
  • Crawl the Design Space: They carefully planned the pilot, keeping the context in mind.
  • Try, Learn, Iterate, Adapt: The size and flexibility of the pilot allowed them to experiment. The tight feedback loops built in to track the pilot allowed them to learn and adapt. They also built trust and credibility by imposing sanctions and rewarding performance.
  • Create/Maintain Authorizing Environment: The gradual buy-in of a number of key agents below the head of customs helped keep the pilot on track. This idea of multi-agent leadership is discussed in Matt Andrews’s recent paper Who Really Leads Development?

The authors conclude that “the experience suggests that with institutional reforms, implementing a series of small well-designed changes may hold more hope for behavioral change than a large but ineffective reform that presumes the capacity for internal leadership of a complex reform.” In reality, development projects involve people, who are the ultimate complex phenomena, embedded in organizations, which are complex; and organizations are in turn embedded in rule systems (e.g. institutions, cultures, norms), which are themselves complex.

Aid and Fragility: PDIA at the UN

Earlier today, Lant Pritchett, Michael Woolcock, and Frauke de Weijer were on a panel for the Fragility and Aid: What Works? event held by UNU-WIDER at the Permanent Mission of Germany to the UN. They discussed how even well-meaning attempts to “build capacity” can serve as techniques of persistent failure: isomorphic mimicry (emphasis on form over function) allows for continued dysfunction, while premature load bearing (too much too soon) builds mistrust and cynicism, as the donor decides what needs to be done but the country gets blamed for the failure, setting off a vicious cycle of bad institutions. They also discussed how PDIA might be used in fragile states.

[Image: Lant Pritchett and Michael Woolcock]

Untying Development

Yesterday, we hosted a one-day workshop entitled Untying Development: Promoting Governance and Government with Impact. The day brought together different voices to discuss the challenge of creating a governance agenda that focuses on solving country-specific problems, involves local people through flexible and context-fitted processes, and emphasizes learning in the reform process.

In the first session, Francis Fukuyama highlighted the need for public administration programs to shift the focus from management back to implementation. He stressed the need for more granular governance indicators and better ways to measure the implementation of public services. The second and third sessions focused on unleashing local agents for change and on new practice in action. In the fourth and final session, on useful evaluation, Bob Klitgaard spoke about kindling creative problem solving by using a combination of theory and examples that are Simple, Unexpected, Concrete, Credible, Emotional, and told as Stories (the SUCCES acronym from Made to Stick). The agenda as well as the videos of the sessions can be found here.

This builds on work emerging in our Building State Capability program (including the recent book by Matt Andrews).