A new era for development – the future or already reality?

Guest blog by Arnaldo Pellini

Michael Woolcock, the Lead Social Development Specialist at the World Bank, gave the keynote speech at the International Conference on Best Development Practices and Policies organized by the State Ministry of National Development Planning (BAPPENAS) on August 19-20 in Jakarta. He is one of the leading voices in the debate around new ways of approaching development problems and designing development interventions that are more flexible and context-specific. He is the co-author of a well-known paper that presents an alternative approach, Problem Driven Iterative Adaptation (PDIA).

Here are some of the key insights I took from Michael Woolcock on a new era for development:

  • We are entering a new era of development, which we can call Development 2.0. Development 1.0 was concerned with the technocratic reforms required to provide access to basic services such as education and health. Most countries (certainly middle-income countries) have achieved that and have the infrastructure in place: policies, schools, health centers, textbooks, etc. Development 2.0 is about the state’s capability to make those systems work, providing good-quality public services to citizens. For example, schools have been built, teachers are trained, and national policies on salaries and curricula are in place. So why is it that some schools perform well and others do not? Those are the questions that we need to research and answer in Development 2.0.
  • Development 2.0 is about multiple (and localized) solutions to development problems. A traditional approach in development is to find smart or best practices and scale them up through nationwide policies. That approach has had mixed results. Development 2.0 is more about mapping out where public services work well and why, as well as where services do not work well and why. Smart practices can still emerge, but they can be interpreted to provide a variety of policy responses. This will require moving from the technocratic evidence and knowledge central to Development 1.0 to a more multidisciplinary approach to research and multiple types of evidence that can inform policy decisions aimed at improving service delivery.
  • Multiple sources of evidence help to deal with the diversity of development problems. The new evidence generated by various types of research methods and various types of knowledge (e.g., community knowledge) will produce maps of variations in service quality and delivery. These can suggest the need for multiple solutions or interventions that draw as much as possible on local knowledge, initiatives, and various forms of capital (e.g., budget, social capital, human capital).
  • Development 2.0 is an era where variation and uncertainty have to be accepted and embraced. It is an era where one-size-fits-all solutions will struggle to succeed and where context-specific, technically sound, and politically feasible solutions have a greater chance of success.
  • We, development practitioners and researchers, have to learn to become more modest about the extent of what we can learn and, especially, what we can suggest. We can map where bureaucracies struggle and contribute ideas that can help segments or units of the bureaucracy to provide the services that citizens expect from them. We should avoid the temptation to suggest the solution when our work has mapped only small bits of complex political and institutional realities.

At the end of the meeting I remembered the review written by Malcolm Gladwell about the biography of Albert O. Hirschman, The Gift of Doubt: ‘The economist Albert O. Hirschman […] was a “planner,” the kind of economist who conceives of grand infrastructure projects and bold schemes. But his eye was drawn to the many ways in which plans did not turn out the way they were supposed to—to unintended consequences and perverse outcomes and the puzzling fact that the shortest line between two points is often a dead end. He understood the power of failure and had the gift of doubt.’

Development 2.0 is already here. It seems to me that we should all make use of the same gift of doubt if we are to contribute to progress and innovation in building state capabilities.

Why many development initiatives have achievement gaps…and what to do about this

written by Matt Andrews

Yesterday I blogged about Hirschman’s Hiding Hand. As I interpret it, a central part of his idea is that many development projects:

  • focus on solving complex problems, and
  • only once they have started does a ‘hiding hand’ lift to show how hard the problem is to solve,
  • but because policy-makers and reformers are already en route to solving the problem they don’t turn away from the challenges, and
  • so they start getting creative and finding ways to really solve the problem. Initial plans and designs are shelved in favor of experiments with new ideas, and after much muddling the problem is solved (albeit with unforeseen or hybrid end products).

I like the argument. But why do I see so many development projects that don’t look like this?

I see projects where solutions are introduced and don’t have much impact, but are then tried again and again–with processes that don’t allow one to recognize the unforeseen challenges, and rigid designs that don’t allow one to change, experiment, or pivot around constraints and limits. Instead of adjusting when the going gets tough, many development projects carry on with the proposed solution and produce whatever limited form is possible.

I think this is because many reforms are not focused on solving problems; they are instead focused on gaining short-run legitimacy (money and support), which comes through simple promises of quick solutions. This is the most rank form of isomorphism one can imagine: one mimics purely for show…so you get a ‘fake’ that lacks the functionality of the real thing.

Let me use Public Financial Management (PFM) reforms as an example.

What problems do these reforms try to solve? Quite a few, potentially. They could try to solve problems of governments overspending, or problems of governments not using money in the most efficient and effective manner (and ensuring services are delivered), or of governments using money in ways that erode trust between the state and citizens (and more).

Now, let me ask: how many reforms actually examine whether they solve these problems? Very few. Mostly, reforms ask whether a government has introduced a new multi-year budget or an integrated financial management system, or a new law on fiscal rules, or a new procurement system.

Sometimes the reforms will ask whether fiscal discipline has improved (largely because this is something outsiders like the IMF focus on), but I seldom see any reforms–or any PFM assessments (like PEFA or even the assessments of transparency)–asking if services are better delivered after reforms, or if reforms enhance trust between citizens and the state. I don’t even see efforts to systematically capture information about intermediate products that might lead to these ‘solved problems’. For instance:

  • Do we have evidence that goods are procured and delivered more efficiently (time and money-wise) after reform?
  • Do we have any systematic data to show that our new human resource management systems are helping ensure that civil servants are present and working well, and that our new payment systems pay them on time (and do a better job of limiting payments to ghost workers)?
  • Do we have any consistent evidence to show that suppliers are paid more promptly after reforms?
  • Is there any effort to see if IT systems are used as we assume they will be used, after reforms?
  • Does anyone look to see if infrastructure projects are more likely to start on time and reach completion after costly project management interventions?
  • Do we have records to show that infrastructure receives proper maintenance after reform?
  • Is there any effort to see if taxpayers trust government more with their money?

This is a long list of questions (and there are many more), and I am sure that some reforms do try to capture data on some of them (if you’ve measured these in a reform, please comment as such…it would be interesting and important to know). Most reforms I have observed don’t try to do it at all, however. This was the focus of a recent discussion on the role of PFM in service delivery, Time to Care About Service Delivery? Specialists from around the world were asked whether PFM reforms improve service delivery, and the answer was “we think so…we expect so…we hope so…BUT WE CAN’T TELL YOU BECAUSE WE DON’T ACTUALLY ASK EXPLICIT QUESTIONS ABOUT THIS.”

My concern with this is manifold: (i) Does the failure to ask if we are solving the problems suggest that we as a community of reformers don’t really care about the problems in the first place? (ii) Does it mean that we will not be sensitive to the situations Hirschman speaks about when he discusses unforeseen challenges that undermine our ability to address problems (simply because we don’t focus on the problems)? (iii) Does this also mean that we will not have any moments where we explore alternatives and experiment with real solutions that help to overcome hurdles en route to solving problems?

Unfortunately, I think the observations of gaps after reforms speak to all of these interpretations. And this is why many reforms and interventions do not end up solving problems. In these cases, we get half-baked versions of the pre-planned solution…with no adjustment and no ‘solved problem’. PFM systems look better but still don’t function–so payments remain late, wages are unpaid to some and overpaid to many, services are not delivered better, and trust actually declines. Most worrying: we have spent years doing the reforms, and now need to pretend they work…and have no learning about why the problems still fester.

The solution (maybe): In my mind this can be rectified–and we can move towards producing more projects like those Hirschman observed–by

  • focusing reforms on problems, explicitly, aggressively, from the start;
  • measuring progress by looking at indicators of ‘problem solved’ (like improved levels of trust after PFM reforms) and intermediate indicators we think will get us there (better payment of contracts, more efficient procurement, etc.);
  • regularly monitoring this progress;
  • being on the lookout for expected unexpecteds (things that we didn’t know about that make our initial solutions less impactful); and
  • being willing to adjust what we started with to ensure we produce real solutions to real problems–functional improvements and not just changes in form.
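The act-monitor-adjust loop these bullets describe can be sketched in code. This is a minimal, hypothetical illustration: the indicator names, targets, and simulated intervention effects are all invented for the sketch, not drawn from any real PFM reform.

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical 'problem solved' indicators -- names and numbers invented.
indicators = {
    "supplier_payment_days": {"value": 90, "target": 30},
    "procurement_cycle_days": {"value": 120, "target": 60},
}

def problem_solved(ind):
    """The problem counts as solved only when every indicator meets its target."""
    return all(i["value"] <= i["target"] for i in ind.values())

def apply_intervention(ind, strength):
    """Simulate one round of reform: noisy, uncertain improvement."""
    for i in ind.values():
        i["value"] = max(i["target"], i["value"] - random.randint(0, strength))

strength = 10
rounds = 0
while not problem_solved(indicators) and rounds < 50:
    before = sum(i["value"] for i in indicators.values())
    apply_intervention(indicators, strength)   # act
    after = sum(i["value"] for i in indicators.values())
    if after == before:                        # monitor each round...
        strength += 5                          # ...and adjust when stuck
    rounds += 1

print(rounds, problem_solved(indicators))
```

The point of the sketch is the loop structure, not the numbers: success is defined against the problem (the targets), progress is checked every round, and the intervention itself changes when monitoring shows no movement.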

For more, read This is PFM, which advocates a functional approach to thinking about and doing PFM reform.

Hirschman’s Hiding Hand and Problem Driven Change

written by Matt Andrews

I referred to Albert Hirschman’s work on the “Principle of the Hiding Hand” in my class today. It is a great principle, and has real application when thinking about PDIA and problem driven change.

In his essay, “The Principle of the Hiding Hand” Hirschman argues that creative solutions most frequently come from adapting to tasks that turn out to be more challenging than we expect.

In Hirschman’s words, “men engage successfully in problem-solving [when] they take up problems which they think they can solve, find them more difficult than expected, but then, being stuck with them, attack willy-nilly the unsuspected difficulties – and sometimes even succeed.”

It’s really beautiful, because it takes as a given some facts that we often think stand in the way of doing flexible, PDIA-type development. Hirschman expects that decision makers will tackle problems, often adopt solutions that look attractive but are hard to pull off (perhaps like big best practice type initiatives), and will overestimate the potential results.

He argues that they wouldn’t try to do the challenging things that development demands if they didn’t think this way. So, he advises to ‘go with it’ …. but then wait for the unexpected… in the form of complexities, constraints, hidden difficulties, etc.

When these unforeseen difficulties emerge, Hirschman argues, we have the opportunity to become creative–and to iterate and experiment and find and fit ways to solve the problems that initiated the work in the first place…building on the sunk costs already incurred in pursuing the big, best-practice, perfect solution. (Saying something like “we’ve come so far…let’s now iterate to ensure we actually solve the problem we set out to solve.”)

Beautiful: Start where you are, focus on solving problems, try the big best practice (but hard to actually do) solution, and become creative when you hit the challenges…

What he assumes is that you have space for flexible change and PDIA-type innovation because of the sunk costs associated with past (or current) reform. An interesting assumption, that I think we can look at academically and reflect on practically.

Required and fundamentally vital reading for anyone in development.

What’s in a counterfactual?

written by Salimah Samji

I am amazed by people’s obsession with the counterfactual, and by the view that evidence cannot exist without it. Why are people so enamored of the idea of ‘the solution’ even though we have learned time and time again that there is no one-size-fits-all?

Is the existence of a counterfactual a sufficient condition? Why don’t people ask questions about the design and implementation of the evaluation? Specifically:

  • What are you measuring and what is the nature of your context: Where in the design space are you? Is your fitness landscape smooth or rugged? Eppstein et al., in Searching the Clinical Fitness Landscape, test two approaches (multicenter randomized controlled trials vs. quality improvement collaboratives, where you work with others, learn from collective experience, and customize based on local context) to identify which leads to healthcare improvements. They find that quality improvement collaboratives are most effective in the complex socio-technical environments of healthcare institutions. Basically, the moment you introduce any complexity (increased interactions between variables), experiential methods trump experimental ones.
  • Who is collecting your data and how: Collecting data is a tedious task and the incentive to fill out surveys without having to go to the village is high, especially if no one is watching. Then there are questions of what you ask, where you ask, how you ask, what time period it is, how long the questionnaire is, etc.
  • How is the data entered and verified: Do you do random checks? Double data entry?
  • Is the data publicly available for scrutiny?
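The smooth-vs-rugged distinction in the first bullet can be made concrete with a toy simulation. This is only a sketch: the two landscape functions are invented for illustration and are not from Eppstein et al. On a smooth, single-peaked landscape every local search converges to the same optimum, so one ‘best practice’ generalizes; on a rugged landscape, searches starting from different contexts get stuck on different local peaks.

```python
import random

random.seed(1)  # deterministic for the example

def smooth(x):
    """Single-peaked landscape: one global optimum at x = 50."""
    return -abs(x - 50)

def rugged(x):
    """Toy rugged landscape: local peaks at every multiple of 7."""
    return -abs(x - 50) + 15 * (x % 7 == 0)

def hill_climb(f, x, steps=500):
    """Greedy local search: move to a neighbour only if it scores better."""
    for _ in range(steps):
        candidate = x + random.choice([-1, 1])
        if f(candidate) > f(x):
            x = candidate
    return x

starts = [5, 30, 70, 95]  # four different 'contexts'
smooth_ends = {hill_climb(smooth, s) for s in starts}
rugged_ends = {hill_climb(rugged, s) for s in starts}
print("smooth:", sorted(smooth_ends))  # every start reaches the same peak
print("rugged:", sorted(rugged_ends))  # starts get stuck on different peaks
```

The numbers are arbitrary; the point is only that on a rugged landscape the end state depends on where you start, which is the intuition behind multiple localized solutions rather than one scaled-up best practice.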

And then there is the external validity problem. Counterfactual or not, it is crucial to adapt development interventions to local contextual realities, where high-quality implementation is paramount to success. Bold et al., in Scaling Up What Works: Experimental Evidence on External Validity in Kenyan Education, find that while NGO implementation of contract teachers in Kenya produces a positive effect on test scores, government implementation of the same program yielded zero effect. They cite implementation constraints and the political economy forces at play as reasons for the stark difference. In a paper entitled Using Case Studies to Explore the External Validity of ‘Complex’ Development Interventions, Michael Woolcock argues for deploying case studies to better identify the conditions under which diverse outcomes are observed, with a focus on contextual idiosyncrasies, implementation capabilities, and trajectories of change.

To top it off, today’s graduate students in economics don’t read Hirschman (some have never heard of him!) … should we be worried?

Hirschman told us that implementation involves a journey

written by Matt Andrews

I ran across the following quote from Hirschman today. A reminder that implementation is neither easy nor prone to scientific certainty. Rather, it requires journeys, of finding, fitting, and discovering. Do we promote such journeys in development? Are we open to the destinations we might end up reaching?

” The term “implementation” understates the complexity of the task of carrying out projects that are affected by a high degree of initial ignorance and uncertainty. Here “project implementation” may often mean in fact a long voyage of discovery in the most varied domains, from technology to politics” (Hirschman, 1967, p. 35).
