The DDD Manifesto finds a new home

written by Salimah Samji

Since we published The DDD Manifesto on November 21, it has been viewed over 5,000 times all around the world (in 100+ countries). It currently has over 400 signatories from 60 countries. It is an eclectic community with people from bilateral organizations, multilaterals, governments, academia, NGOs, private sector, as well as independent development practitioners. These are the founding members of The DDD Manifesto Community.

Today, we are delighted to launch the online platform of the DDD Manifesto Community which is the new home of the manifesto. We hope that this will be a place where you can come to share ideas, have conversations, question your assumptions, learn from others, offer support and be inspired. It includes a forum for discussion, blog posts written by community members and features video presentations from the recent DDD workshop (#differentdev).

To sign the manifesto and to participate in the forum, you can register here. Please contribute actively – this is a community website and you are the community. 

If you want to Do Development Differently but it sounds too hard…

written by Matt Andrews

Arnaldo Pellini recently wrote an interesting personal blog post about the Doing Development Differently workshop and manifesto. He concludes with, “I agree with these ideas and I can share and discuss these ideas with the team with whom I work but what difference can it make if the systems around us due to organizational culture, history, circumstances, and traditions struggle to embrace flexibility, uncertainty, untested experimentation, and slow incremental changes?”

This is an honest reflection from a practitioner in the field; and one that I hear often–from folks working in multilateral and bilateral agencies, as contractors, and beyond. It captures a concern that the development machinery (organizations, monitoring and reporting devices, professional alliances, government counterparts, etc.) is structurally opposed to doing the kind of work one might call DDD or PDIA.

It’s like this cartoon…where our organizations say “let’s innovate but stay the same.”


I have been thinking about this a lot in the last few years, ever since I wrote chapter ten of my book…which asked whether the development community was capable of changing. In that chapter I was not especially confident but (I hope) I was still hopeful.

Since then, I think I’m more hopeful. Partly because,

  • we have found many folks in the multilaterals, bilaterals, contractors, etc. who are doing development in this more flexible way. We invited a range of them to the DDD Workshop and over 330 signed on to the DDD Manifesto. One of the goals of our work in the next while is to learn from these folks about HOW they do development differently even with the constraints they face. How do they get funders to embrace uncertainty? How do they get ministers in-country to buy into flexibility and give up on straight isomorphism?
  • I am also working on research projects that tackle this question; doing PDIA in real time, in places where development is predominantly done through the incumbent mechanisms. It is hard work, but I am finding various strategies to get buy-in to a new approach (including showing how problematic the old approach is, by working in the hardest areas where one has a counterfactual of failed past attempts, and more). I am also finding strategies to keep the process alive and buy more and more space for flexibility (by iterating tightly at first, for instance, and showing quick wins…and telling the story of learning and of increased engagement and empowerment). So far, I have not experienced complete success with what I have done, but I have certainly not struggled in getting support from the practitioners and authorisers we work with. (In my world it is harder to get support from academics, who think action research on implementation is a hobby and consultancy work… indeed, anything that does not say ‘RCT’ is considered less than academic. Sigh.)

All this is to say that I think Arnaldo is emphasizing a really important constraint on those working in development agencies. But a constraint that we should work through if we really do agree that these more problem driven, flexible approaches are what is needed. To Arnaldo and others I would suggest the following:

  • Separate the conversation about which way we should do development from the conversation about how much our organizational realities ALLOW us to do it. The first conversation is: “Should we do DDD/PDIA?” The second conversation is: “How do we DDD/PDIA?” If we conflate the conversations we never move ahead. If we separate them then we can develop strategies to gradually introduce PDIA/DDD into what we do (in essence, I’m suggesting doing PDIA ourselves, to help change the way we do development…see an earlier blog).
  • I also constantly remind myself that we (external folks in development organizations) are not the only ones facing a challenge of doing new stuff in existing contexts–with all the constraints of such. This is what we are asking of our counterparts and colleagues in the developing countries where we work. Dramatic and uncomfortable and impossible change is in the air every time we are introducing and facilitating and supporting and sponsoring work in developing countries. I always tell myself: “If we can’t work it out in our own organizations–when we think that our own organizational missions depend on such change–then we have no place asking folks in developing countries to work it out.”
  • So, it’s a challenge. But a worthy one. And if we care about doing development with impact, I think it behooves us to face up to this challenge.

Good luck, Arnaldo, thanks for your honesty and for the obvious commitment that causes you to share your reality. It is really appreciated!

The PDIA Anthem

Need help decoding the acronym PDIA? Check out the PDIA anthem.


This anthem uses the instrumental from Mos Def’s “Mathematics.” It was made by a very talented student as part of an assignment for Matt Andrews’ course, Getting Things Done in Development. We had never imagined that we could write a song about PDIA, let alone a rap. Thank you.

Let me hear you say P. D. I. A.

Introducing The DDD Manifesto

We are delighted to release The DDD Manifesto as an outcome of the 2014 Doing Development Differently (DDD) workshop.

In late October, a group of about 40 development professionals, implementers and funders from around the world attended the DDD workshop, to share examples where real change has been achieved. These examples employ different tools but generally hold to some of the same core principles: being problem driven, iterative with lots of learning, and engaging teams and coalitions, often producing hybrid solutions that are ‘fit to context’ and politically smart.

The two-day workshop was an opportunity to share practical lessons, insights, and country experience, and to experiment first-hand with selected methodologies and design thinking. To maximize the opportunity to hear from as many people as possible, all presenters were asked to prepare a 7:30-minute talk, with no PowerPoint slides or other visual accompaniments. The workshop alone generated a rich set of cases and examples of what doing development differently looks like, available on both the Harvard and ODI websites (where you can watch individual talks, see the posters, or link to related reports).

The aim of the event was to build a shared community of practice, and to crystallize what we are learning about doing development differently from practical experience. The workshop ended with a strong call for developing a manifesto reflecting the common principles that cut across the cases that were presented. Watch the closing remarks here.

DDD Closing Session

These common principles have been synthesized into The DDD Manifesto. We recognize that many of these principles are not new, but we do feel the need to clearly identify principles and to state that we believe that development initiatives will have more impact if these are followed.

As an emerging community of practice, we welcome you to join us by adding your name in the comment box of the manifesto.

Contexts and Policy Implementation: 4 factors to think about

written by Matt Andrews

I recently blogged about what matters about the context. Here’s a video of a class I taught on the topic at the University of Cape Town over the summer (their winter). It is a short clip where I try to flesh out the 4 factors that I look at when thinking about new policy: 1. Disruption; 2. Strength of incumbents; 3. Legitimacy of alternatives; and 4. Agent alignment (who is behind change and who is not).

How can we learn when we don’t understand the problem?

written by Salimah Samji

Most development practitioners think that they are working on problems. However, what they often mean by the word ‘problem’ is the ‘lack of a solution’. This leads to designing typical, business-as-usual interventions without addressing the actual problem. Essentially, they sell solutions to problems they have pre-identified and prioritized, instead of solving real and distinct problems.

If the problem identification is flawed, then it does not matter whether you do a gold standard RCT or not, you will neither solve the problem nor learn about what works. Here’s a great example. A recent paper entitled, The permanent input hypothesis: the case of textbooks and (no) student learning in Sierra Leone found that a public program providing textbooks to primary schools had no impact on student performance because the majority of books were stored rather than distributed.

Could they not have learned that the textbooks were being locked up – more cheaply and quickly – through some routine monitoring or audit process? That, in turn, could have led to understanding why the books were locked up and then perhaps to finding other ways to improve access to them (assuming that was the goal). Was an RCT really necessary? More importantly, what was the problem they were trying to solve? What was their causal model or theory of change? If you provide textbooks to children, then learning outcomes will improve?

Interestingly, the context section of the paper mentions that “the civil war severely impacted the country’s education system leading to large-scale devastation of school infrastructure, severe shortages of teachers and teaching materials, overcrowding in many classrooms in safer areas, displacement of teachers, frequent disruptions of schooling, psychological trauma among children, poor learning outcomes, weakened institutional capacity to manage the system, and a serious lack of information and data to plan service provision.” In addition, they also found variance between regions and in one remote council, “less than 50 percent of all schools were considered to be in good condition, with almost 20 percent falling under the category “no roof, walls are heavily damaged, needs complete rehabilitation.”

Honestly, in a complex context like this, it isn’t clear or obvious that providing textbooks would make much difference even if they were handed out to the children, especially since they are written in English. Apparently, the teachers teach in Krio in the early years and then switch to English in Grade 4 and 5. Based on the context above, that sounds more like fiction than fact.

In environments like these, real problems are complex and scary, and it is easier to ignore them than to address them. A possible way forward could be to break the problem down into smaller, more manageable pieces using tools like problem trees, the Ishikawa diagram and the ‘5 whys.’ Then design an intervention, try, learn, iterate and adapt.
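One way to picture the ‘5 whys’ drill-down described above is as a simple chain of causes. The problem statement below echoes the textbook example from earlier in this post, but the deeper causes are hypothetical illustrations, not findings from the paper:

```python
def five_whys(problem, causes):
    """Format a '5 whys' chain as indented lines, one per successive cause."""
    lines = [f"Problem: {problem}"]
    for depth, cause in enumerate(causes, start=1):
        lines.append("  " * depth + f"Why? -> {cause}")
    return lines

# The problem comes from the textbook example above; the deeper causes
# are invented for illustration only.
chain = five_whys(
    "Students lack textbooks in class",
    [
        "Books are stored at the school rather than distributed",
        "Head teachers fear being held liable for lost books",
        "Replacement books are slow and costly to obtain",
        "Procurement is centralized and runs once a year",
        "Budget rules do not allow local reordering",
    ],
)
print("\n".join(chain))
```

Each successive ‘why’ points at a smaller, more actionable piece of the problem – the place where an intervention could be designed, tried, and iterated on.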

For more, watch the BSC videos on deconstructing sticky problems and problem-driven sequencing.

The Chief Minister Posed Questions We Couldn’t Answer

Guest post written by Jeffrey Hammer

I was recently at a conference in Lahore, Pakistan sponsored by the International Growth Centre where the keynote address was given by Shahbaz Sharif, the Chief Minister of the province of Punjab, Pakistan (100+ million people). While fun to see old friends and colleagues, the conference was a little depressing in the way it reflected the state of the development economics profession.

The Chief Minister posed serious questions that have traditionally been the bread and butter of the economics profession. Unfortunately, we are not even trying to answer them any more. The specific question was “Should I put more money into transport? Infrastructure (power, roads, water)? Law and order? Social services? Or what? And where am I going to get the money?” What questions could be more solidly part of the core of economics than these? Unfortunately none of these were even remotely the focus of the “evidence-based” policy making discussed.

Almost all of the cases analyzed were single, simple policy “tweaks” that were, first of all, isolated from the broader market context in which they occurred and, second, had no conception of opportunity cost – what we would have to give up to pursue these things. We had an answer to “how to improve a public food distribution system” but even with a precise answer (to whether a tweak would work) we had no idea whether the substantial amount of money funding such a system is a good idea. Maybe the Chief Minister would be better off improving education or road networks or police or rural electricity. Some of these alternative policies could have more impact on food consumption than food distribution if we thought about how the world worked. Getting food to market securely (roads, better cold storage, trustworthy police and safe roads – this is Pakistan, which no one seemed to notice) may increase food availability much more than any tunnel-visioned food program. Or not – maybe the food distribution system is better. We just don’t know. And none of us “experts” are trying to find out.

On spending priorities, what we need is the old fashioned notion of opportunity cost. “Evidence” now is “did something work?” meaning did it have any effect at all? or “can we get it to work a little better?” But the real question in such a resource-constrained economy is “does it work well enough to take money away from the power plant it prevented or any other thing money could have been used for?”  Or even, “is it better than leaving the money in private hands by not collecting the taxes to pay for it?” Besides not knowing the marginal welfare cost of taxation (anyone remember that?), we forget that poor people use their money for food, so the first-order effect of tax revenues is to make poor children hungrier. Is the benefit from secondary education or bicycles or the fertilizer subsidy so good as to impose this cost on these children?  We don’t know who ultimately pays taxes (when wages, for example, respond to indirect taxation) but it is likely that poor people, the majority of the population, pay at least some substantial share. And we don’t know how badly distorted the tax system is – in its very structure, not just in its administration. The incidence and efficiency loss of the whole structure of taxation are the first order answers the Chief Minister needs. No one studies these anymore.

When someone says “we should have more ‘X’ because we have evidence that it works”, the response should be “compared to what?” What should we cut in order to promote your particular interest? My hobby horse these days is more sanitation in South Asia. I should have to defend it against (at least) a few alternatives.

It’s not like we have no basis for making this comparison. We usually try to determine which things the private sector (i.e. almost everyone – farmers, bicycle manufacturers and repairmen, truckers, shopkeepers, halal sausage casing makers) can be safely relied upon to produce, where it goes somewhat wrong (exactly how bad are private schools or doctors?) and where it is a flaming disaster such that the government is utterly indispensable. While we’ve all drawn the gap between public and private costs (or benefits) to help us talk about optimal Pigouvian taxes, when was the last time anyone tried to measure this one, central concept for valuing interventions in developing countries? Or in developed countries, for that matter? We look at enrollment rates (or even learning rates) but never ask “how much is this secondary education worth, and how much of that isn’t captured by the student?” Further, since there is no reason to think the number is the same in any two places, even if there were a couple of such studies, it wouldn’t make up the bulk of what we call policy-relevant research. And it’s not like it’s easy to do so we can’t just say “let the practitioner-types do the (routine) calculations”. There is nothing routine to it at all.

In the conference, several research projects measured an effect (not an externality, not a welfare loss – just an effect) that could well be part of an almost completely private good with no serious market failures to speak of. Can it really be the case that date exporters genuinely didn’t know that packaging for export was available (and wouldn’t a phone call to either the exporters or the marketing wing of the packaging producers suffice)? Did football producers really need to know a better pattern for cutting pentagons out of leather when mechanized stitching (as the commentator on that paper noted) is swiftly changing the entire production process worldwide? Will the competition that is currently mechanizing allow firms to exist even with the 10% higher profits that a better pattern enables? And are policy makers (even with Ivy League economists as their advisors) really going to make better decisions than those producers or, much more importantly, the competitive forces in the economy?

My defense of my promoting sanitation is that I contrasted the value of health via providing public goods (sewer systems in cities) to spending on publicly provided health care (a rival and excludable service – I’m avoiding the “p” word, this being the sub-continent). I don’t know if I’ve cleanly identified the effect that I purport to have measured – whether open defecation without sewage in slums damages the health of its residents – but it makes sense, is tied to most peoples’ notions of the nature of public and private goods, and gives some evidence of an externality. One reason to avoid specifying which service should be sacrificed is to avoid fights. Even fairly convincing evidence that publicly provided healthcare is of questionable value can provoke uncomfortable arguments. But not even mentioning the opportunity cost of a proposed policy is irresponsible.

On collecting more taxes: this is, of course, a core government activity. Any way we can efficiently get more money into government coffers to support critical public services is to be applauded. But what we were treated to was a two-year experiment on something that looks like tax-farming (and indeed, was titled as such). Higher powered incentives to collect taxes? When you’re being watched?  Tax inspectors didn’t know an experiment was underway? Even if it was double blind (which it was not), can a two-year project using currently recruited tax inspectors (i.e., those that entered public service expecting to get a salary without having to work too much) anticipate what happens in equilibrium when everyone figures out how to make money from these high-powered incentives? That is, core government service or not, there is a labor market in which the people who this experiment purports to study operate. It is the nature of the long-run equilibrium of that market that is the proper level of analysis for policy purposes, not the behavior of the particular individuals who happen to have the job at present. As the commentator on that paper noted, the proposal looked like the medieval version of tax farming. But that scheme always deteriorated in time (longer than a two-year experiment would tell us) into an ugly system that brought down rulers.

The Chief Minister is a committed and capable man. With the recent elections behind him, he has the opportunity to actually accomplish things. He deserves much better support than we’re giving him.

This post originally appeared as a World Bank Blog.

Rigorous Evidence Isn’t

written by Lant Pritchett

Currently, there are many statements floating around in development about the use of “rigorous evidence” in formulating policies and programs. Nearly all of these claims are fatuous. The problem is, rigorous evidence isn’t.

That is, suppose one generates some evidence about the impact of some programmatic or policy intervention in one particular context that is agreed by all to be “rigorous” because it meets methodological criteria for internal validity of its causal claims. But the instant this evidence is used in formulating policy it isn’t rigorous evidence any more.  Evidence would be “rigorous” about predicting the future impact of the adoption of a policy only if the conditions under which the policy was to be implemented were exactly the same in every relevant dimension as that under which the “rigorous” evidence was generated.  But that can never be so, because neither economics nor any other social science has theoretically sound and empirically validated invariance laws that specify what “exactly the same” conditions would be.

So most uses of rigorous evidence aren’t.  Take, for instance, the justly famous 2007 JPE paper by Ben Olken on the impact of certain types of monitoring on certain types of corruption. According to Google Scholar as of today, this paper has been cited 637 times.  The question is, for how many of the uses of this “rigorous evidence” is it really “rigorous evidence”?  We (well, my assistant) sampled 50 of the citing papers with 57 unique mentions of Olken (2007).  Only 8 of those papers were about Indonesia (Of course even those 8 are only even arguably “rigorous” applications as they might be about different programs or different mechanisms or different contexts.)  47 of the 57 (82%) of the mentions are neither about Indonesia nor even an East Asia or Pacific country—they might be a review of the literature about corruption in general, about another country, or methodological.  We also tracked whether the words “context” or “external validity” appeared within +/- two paragraphs of the mention. In 34 of the 57 (60%) mentions, the evidence was not about Indonesia and did not mention that the results, while “rigorous” for the time, place and programmatic/policy context, have no claim to be rigorous about any other time, place, or programmatic/policy context.
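The percentages quoted above follow directly from the reported counts; as a quick arithmetic check:

```python
# Counts as reported in the paragraph above.
mentions_total = 57         # unique mentions of Olken (2007) in the 50-paper sample
outside_indonesia_eap = 47  # neither about Indonesia nor an East Asia/Pacific country
no_context_caveat = 34      # not about Indonesia, with no nearby "context" caveat

print(f"{outside_indonesia_eap / mentions_total:.0%}")  # 82%
print(f"{no_context_caveat / mentions_total:.0%}")      # 60%
```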

Another justly famous paper, Angrist and Lavy (1999) in the QJE uses regression discontinuity to identify the impact of class size on student achievement in Israel.  This paper has been cited 1244 times.  I looked through the first 150 citations to this paper (which Google Scholar sorts by the number of times the citing paper has itself been cited) and (other than other papers by the authors) not one mentioned Israel  (not that surprisingly, as Israel is a tiny country) in the title or abstract while China, India, Bangladesh, Cambodia, Bolivia, UK, Wales, USA (various states and cities), Kenya and South Africa all figured.  Angrist and Lavy do not, and do not claim to, provide “rigorous” evidence about any of those contexts.

If one is formulating policies or programs for attacking corruption in highway procurement in Peru or reducing class size in secondary school in Thailand, it is impossible to base those policies on “rigorous evidence” as evidence that is rigorous for Indonesia or Israel isn’t rigorous for these other countries.

Now, some might make the argument that formulation of policies or programs in context X should rely exclusively/primarily/preferentially on evidence that is “rigorous” in context Z because at least we know that in context Z in which it was generated the evidence is internally valid.  This is both fatuous and false as a general proposition.

Fatuous in that no one understands the phrase “policy based on rigorous evidence” to mean “policy based on evidence that isn’t rigorous with respect to the actual policy context to which it is being applied (because there are no rigorous claims to external validity) but rather based on evidence that is rigorous in some other context.”  No one understands it that way because that isn’t rigorous evidence.

It is also false as a general proposition.  It is easy to construct plausible empirical examples in which the evidence suggests that the bias from internal validity is much (much) smaller than the bias from external validity as the contextual variation in “true” impact is much larger than the methodological bias from lack of “clean” causal identification of simple methods.  In these instances, better policy is made using “bad” (e.g. not internally valid) evidence from the same context than “rigorous” evidence from another context (e.g. Pritchett and Sandefur 2013).
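The point can be made concrete with a toy simulation. All the numbers below are invented for illustration (they are not from Pritchett and Sandefur 2013): when the true impact varies a lot across contexts, a locally generated but internally biased estimate can predict the local impact with far lower error than an internally valid estimate imported from another context.

```python
import random

random.seed(0)

# Illustrative parameters (assumed, not estimated from any real data):
CONTEXT_SD = 2.0     # spread of true impacts across contexts (external validity gap)
INTERNAL_BIAS = 0.5  # systematic bias of the non-rigorous local method
NOISE_SD = 0.3       # sampling noise in either estimate
TRIALS = 10_000

def simulate():
    """Compare mean squared error of biased-local vs rigorous-foreign evidence."""
    se_local, se_foreign = 0.0, 0.0
    for _ in range(TRIALS):
        true_here = random.gauss(0.0, CONTEXT_SD)    # impact in our context
        true_there = random.gauss(0.0, CONTEXT_SD)   # impact where the RCT ran
        local = true_here + INTERNAL_BIAS + random.gauss(0.0, NOISE_SD)
        foreign = true_there + random.gauss(0.0, NOISE_SD)  # internally valid
        se_local += (local - true_here) ** 2
        se_foreign += (foreign - true_here) ** 2
    return se_local / TRIALS, se_foreign / TRIALS

mse_local, mse_foreign = simulate()
print(f"MSE, biased local evidence:     {mse_local:.2f}")
print(f"MSE, rigorous foreign evidence: {mse_foreign:.2f}")
```

With these assumed numbers the local evidence wins easily: its error is roughly bias squared plus noise, while the foreign evidence carries the full cross-context variance. The ranking flips only when contexts are similar relative to the internal bias, which is exactly the invariance claim no one can establish.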

Sadly perhaps, there is no shortcut around using judgment and wisdom in assessing all of the available evidence in formulating policies and programs.  Slogans like “rigorous evidence” are an abuse, not a use, of social science.