PDIA: International organizations have flexible instruments (Part 1/4)

written by Matt Andrews

Almost every time I give a presentation on PDIA (and I have given many), I hear excuses about why PDIA cannot be done in development. So, I’ve decided to set the record straight. I am writing a blog post and drawing a picture for each of the four most common excuses I hear. This is the first one.

Excuse 1: International development experts often tell me that they cannot do PDIA because they have to produce projects and project processes don’t allow the flexibility implied in PDIA.

This is simply not true. Every development agency I know of has traditional project mechanisms that are rigid and foster disciplined process, BUT every development agency also has instruments that allow experimentation and flexibility. The names of these instruments differ, but common tools have (over time) included trust funds, learning and innovation loans, adaptable projects, and even some results-based loans.

So, development experts can find tools to do flexible problem identification and active project design and implementation IF THEY KNOW THESE ALTERNATIVES EXIST AND MAKE THE EFFORT TO USE THEM. If they choose not to use these alternatives because they are risky, or hard, or different, that is one thing. But experts should stop saying that these alternatives do not exist. If you want an example, read the PDIA in Cameroon blog post.


The Studley Tool Chest

In the spirit of Thanksgiving, I wanted to share the story of the image we chose for this blog – the Studley Tool Chest.

Designed by piano maker Henry O. Studley (1838-1925), this toolbox is about 40 inches by 20 inches when closed and holds approximately 300 tools. Apparently, it is so heavy that it takes three strong people to put it up on the wall. Over a period of 30 years, Studley developed and added new tools, adapting the chest so that every tool fit snugly in its space. The craftsmanship is extraordinary and the chest remains in a class of its own. It has been exhibited at the Smithsonian National Museum of American History. Here’s a video if you are interested in seeing it.


What’s in a counterfactual?

written by Salimah Samji

I am amazed by people’s obsession with the counterfactual, and by the belief that evidence cannot exist without one. Why are people so enamored of the idea of ‘the solution’ even though we have learned time and time again that there is no one-size-fits-all?

Is the existence of a counterfactual a sufficient condition for good evidence? Why don’t people ask questions about the design and implementation of the evaluation itself? Specifically:

  • What are you measuring and what is the nature of your context: Where in the design space are you? Is your fitness landscape smooth or rugged? Eppstein et al., in Searching the Clinical Fitness Landscape, test two approaches (multicenter randomized controlled trials vs. quality improvement collaboratives, where you work with others, learn from collective experience, and customize based on local context) to identify which leads to healthcare improvements. They find that quality improvement collaboratives are most effective in the complex socio-technical environments of healthcare institutions. Basically, the moment you introduce any complexity (increased interactions between variables), experiential methods trump experimental ones; the sketch after this list illustrates why.
  • Who is collecting your data and how: Collecting data is a tedious task, and the incentive to fill out surveys without actually going to the village is high, especially if no one is watching. Then there are questions of what you ask, where you ask, how you ask, what time period you cover, how long the questionnaire is, etc.
  • How is the data entered and verified: Do you do random checks? Double data entry?
  • Is the data publicly available for scrutiny?
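To make the smooth-versus-rugged distinction concrete, here is a minimal simulation sketch. This is my illustration, not code from Eppstein et al.; the parameters and function names are invented for the example. It runs greedy local search (hill climbing) on two kinds of landscapes: a smooth one, where each design choice contributes independently, and a maximally rugged one in the spirit of an NK model, where fitness depends on the full interaction of all choices.

```python
# Illustrative sketch only (assumptions: 12 binary design choices,
# 200 search runs, NK-style random landscape); not from the cited paper.
import random

N = 12        # number of binary design choices in an intervention
TRIALS = 200  # independent local searches (think: different contexts)

def make_rugged_fitness(n, seed=0):
    """Rugged landscape: fitness is a random value that depends on the
    full combination of all n choices, so choices interact heavily."""
    rng = random.Random(seed)
    table = {}
    def fitness(bits):
        if bits not in table:
            table[bits] = rng.random()
        return table[bits]
    return fitness

def smooth_fitness(bits):
    """Smooth landscape: each choice contributes independently."""
    return sum(bits)

def hill_climb(fitness, n, rng):
    """Greedy one-bit-flip search: accept any improving change,
    stop when no single change helps (a local peak)."""
    bits = tuple(rng.randint(0, 1) for _ in range(n))
    improved = True
    while improved:
        improved = False
        for i in range(n):
            candidate = bits[:i] + (1 - bits[i],) + bits[i + 1:]
            if fitness(candidate) > fitness(bits):
                bits, improved = candidate, True
    return bits

rng = random.Random(42)
rugged_fitness = make_rugged_fitness(N)
smooth_peaks = {hill_climb(smooth_fitness, N, rng) for _ in range(TRIALS)}
rugged_peaks = {hill_climb(rugged_fitness, N, rng) for _ in range(TRIALS)}
print(f"smooth landscape: {len(smooth_peaks)} distinct peak(s)")  # expect 1
print(f"rugged landscape: {len(rugged_peaks)} distinct peaks")    # expect many
```

On the smooth landscape every search converges to the same optimum, so a ‘best practice’ found anywhere transfers everywhere. On the rugged landscape, the 200 searches scatter across many different local peaks: exactly the situation in which locally adaptive, experiential methods beat transplanting a single experimentally validated design.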

And then there is the external validity problem. Counterfactual or not, it is crucial to adapt development interventions to local contextual realities, where high-quality implementation is paramount to success. Bold et al., in Scaling Up What Works: Experimental Evidence on External Validity in Kenyan Education, find that while NGO implementation of contract teachers in Kenya produced a positive effect on test scores, government implementation of the same program yielded zero effect. They cite implementation constraints and the political economy forces at play as reasons for the stark difference. In a paper entitled Using Case Studies to Explore the External Validity of ‘Complex’ Development Interventions, Michael Woolcock argues for deploying case studies to better identify the conditions under which diverse outcomes are observed, with a focus on contextual idiosyncrasies, implementation capabilities, and trajectories of change.

To top it off, today’s graduate students in economics don’t read Hirschman (some have never heard of him!) … should we be worried?

PDIA in Cameroon

written by Salimah Samji

In a recent paper entitled Behavioral Economics and Public Sector Reform: An Accidental Experiment and Lessons from Cameroon, Gael Raballand and Anand Rajaram compare two World Bank projects in Cameroon: a $15 million, 5-year Transparency and Accountability Capacity Development Project (TACD), and a $300,000, low-profile technical assistance project to improve performance in Cameroon customs.

The TACD story is all too familiar. The project became effective one year later than expected, had disbursed only 10% after two years of implementation, and, despite high-level management attention and discussions with the country leadership, little changed. A mutual decision was taken to close the TACD one year before its scheduled close date, citing poor coordination, weak organizational skills, systems in need of upgrading, and a lack of political commitment among the reasons.

The second is a ‘pockets of effectiveness’ story. The Director General of Customs requested assistance from the World Bank to help improve performance management. The project was funded by a grant and was limited to knowledge transfer and technical assistance. To design the project, the team coupled the Bank’s in-house expertise with that of a customs officer who had institutional knowledge of customs issues and understood the context, having been an adviser to Cameroon. The pilot began with performance contracts in two offices for six months and focused on non-financial incentives: good performance earned congratulatory letters that were placed in officers’ files and publicly disseminated for wider recognition, while bad behavior triggered team interviews, warnings, possible disciplinary action, and even removal from the position. In less than two months the clearance process was much faster, the attitude of the customs officers improved, and revenues increased. An additional $16.5 million in revenue was collected over the six months.

The second story is also a great example of PDIA principles in action.

  • Problem Driven: The problem was identified and nominated by the country. The focus was on solving problems as opposed to retrofitting solutions. This also helps build ownership.
  • Crawl the Design Space: They carefully planned the pilot, keeping the context in mind.
  • Try, Learn, Iterate, Adapt: The size and flexibility of the pilot allowed them to experiment. The tight feedback loops built in to track the pilot allowed them to learn and adapt. They also built trust and credibility by imposing sanctions and rewarding performance.
  • Create/Maintain Authorizing Environment: The gradual buy-in of a key number of agents below the head of customs helped keep the pilot on track. This idea of multi-agent leadership is discussed in Matt Andrews’s recent paper Who Really Leads Development?

The authors conclude that “the experience suggests that with institutional reforms, implementing a series of small well-designed changes may hold more hope for behavioral change than a large but ineffective reform that presumes the capacity for internal leadership of a complex reform.” In reality, development projects involve people, the ultimate complex phenomena, embedded in organizations, which are complex; and organizations are in turn embedded in rule systems (e.g., institutions, cultures, norms), which are themselves complex.

Aid and Fragility: PDIA at the UN

Earlier today, Lant Pritchett, Michael Woolcock, and Frauke de Weijer were on a panel for the Fragility and Aid: What Works? event held by UNU-WIDER at the Permanent Mission of Germany to the UN. They discussed how even well-meaning attempts to “build capacity” can serve as techniques of persistent failure: isomorphic mimicry (an emphasis on form over function) allows continued dysfunction, while premature load bearing (too much, too soon) builds mistrust and cynicism, whereby the donor decides what needs to be done but the country gets blamed for the failure, setting off a vicious cycle of bad institutions. They also discussed how PDIA might be used in fragile states.

Lant Pritchett and Michael Woolcock

Untying Development

Yesterday, we hosted a one-day workshop entitled Untying Development: Promoting Governance and Government with Impact. The day brought together different voices to discuss the challenge of creating a governance agenda that focuses on solving country-specific problems, involves local people through flexible and context-fitted processes, and emphasizes learning in the reform process.

In the first session, Francis Fukuyama highlighted the need for public administration programs to shift their focus from management back to implementation. He stressed the need for more granular governance indicators and better ways to measure the implementation of government public services. The second and third sessions focused on unleashing local agents for change and on new practice in action. In the fourth and final session, on useful evaluation, Bob Klitgaard spoke about kindling creative problem solving by combining theory with examples that are Simple, Unexpected, Concrete, Credible, Emotional, and told as Stories (the SUCCESs acronym from Made to Stick). The agenda as well as the videos of the sessions can be found here.

This builds on work emerging in our Building State Capability program (including the recent book by Matt Andrews).

Hirschman told us that implementation involves a journey

written by Matt Andrews

I ran across the following quote from Hirschman today. It is a reminder that implementation is neither easy nor prone to scientific certainty. Rather, it requires journeys of finding, fitting, and discovering. Do we promote such journeys in development? Are we open to the destinations we might end up reaching?

“The term ‘implementation’ understates the complexity of the task of carrying out projects that are affected by a high degree of initial ignorance and uncertainty. Here ‘project implementation’ may often mean in fact a long voyage of discovery in the most varied domains, from technology to politics” (Hirschman, 1967, p. 35).
