Download the new PDIA book for free

written by Salimah Samji

We are delighted to inform you that our PDIA book, “Building State Capability: Evidence, Analysis, Action”, has just been published by Oxford University Press. The book presents an evidence-based analysis of development failures and explains how capability traps emerge and persist. It is not just a critique; it also offers a way of doing things differently. It provides you with the tools you need to personalize and apply these new ideas to your own context.

Here is a review written by Francis Fukuyama:

“Building State Capability provides anyone interested in promoting development with practical advice on how to proceed—not by copying imported theoretical models, but through an iterative learning process that takes into account the messy reality of the society in question. The authors draw on their collective years of real-world experience as well as abundant data and get to what is truly the essence of the development problem.”

In keeping with our commitment to provide free resources to help diffuse our PDIA approach to practitioners around the world, we have enabled an open access title under a Creative Commons license (CC BY-NC-ND 4.0). We hope you find the book useful and that it helps create a PDIA community of change that shares, learns and grows together. Visit the book webpage to download your free copy. Please share your thoughts on social media using the hashtag #PDIABook.

Listen to what the authors have to say about the book:

Best Practice is a Pipe Dream: The AK47 vs M16 debate and development practice

written by Lant Pritchett

At a recent holiday party I was discussing organizations and innovations with a friend of mine who teaches about organizations at the Harvard Business School and is a student of technology and history.  I told him I was thinking about the lessons of the AK47 versus M16 debate for the development “best practice” mantra.  Naturally, he brought out his own versions of both for comparison: an early Colt AR-15 developed for the US Air Force (which became the M16) and an East German-produced AK-47.

Development practice can learn from the AK47.  It is far and away the most widely available and used assault rifle in the world.  This is in spite of the fact that it is easy to argue that the M16 is the “best practice” assault rifle.  A key question for armies is whether in practice it is better to adapt the weapon to the soldiers you have or to train the soldiers you have to the weapon you want.  The fundamental AK47 design principle is simplicity, which leads to robustness in operation and effective use even by poorly trained combatants in actual combat conditions.  In contrast, the M16 is a better weapon on many dimensions—including accuracy—but only works well when used and cared for by highly trained and capable soldiers.

One important criterion for any weapon is accuracy.  In the 1980s the US military compared the AK47 and the M16 for accuracy at various distances in proving ground conditions that isolated pure weapon accuracy.  The following chart shows the single shot probabilities of hitting a standard silhouette target at various distances.  It would be easy to use this chart to argue that the M16 is a “best practice” weapon, as at middle to long distances its single shot hit probability is 20 percent higher.

Figure 1:  At proving ground conditions the AK47 is a less accurate weapon than the M16A1 at distances above 200 yards

Source:  Table 4.3, Weaver 1990.

The study, though, also estimates the probability of hitting a target given the aiming errors of an actual user of the weapon.  In “rifle qualifying” conditions the shooter is under no time or other stress and knows the distance to the target; these are ideal conditions for the shooter to demonstrate high capacity.  In “worst field experience” conditions the shooter is under high, combat-like stress, although obviously these data are from simulations of stress, as it is impossible to collect reliable data from actual combat.

It is obvious in Figure 2 that over most of the range at which assault rifles are used in combat, essentially all of the likelihood of missing the target comes from shooter performance and almost none from the intrinsic accuracy of the weapon.  The M16 maintains a proving ground hit probability of 98 percent out to 400 yards, but at 400 yards even a trained marksman in zero stress conditions has only a 35 percent chance of a hit, and under stress this falls to 7 percent.

Figure 2:  The intrinsic accuracy of the weapon as assessed on the proving ground is not a significant constraint on shooter accuracy under high stress conditions

Source:  Table 4.2, Weaver 1990.

At 200 yards we can decompose the shortfall from the ideal conditions of “best practice”—the M16 on the proving ground has a 100 percent hit probability—into the contributions of a less accurate weapon, of user capacity even in ideal conditions, and of user performance under stress.  The AK47 is 99 percent accurate on the proving ground, but even with the M16 the hit probability is only 64 percent in rifle qualifying conditions and only 12 percent in stressed situations.  So if a shooter misses with an AK47 at 200 yards in combat conditions it is almost certainly due to the user and not the weapon.  As the author puts it (in what appears to be a military use of irony), while there are demonstrable differences in weapon accuracy they are not really an issue in actual use conditions by actual soldiers:

It is not unusual for differences to be found in the intrinsic, technical performance of different weapons measured at a proving ground.  It is almost certain that these differences will not have any operational significance.  It is true, however that the differences in the…rifles shown…are large enough to be a concern to a championship caliber, competition shooter.

Figure 3:  Decomposing the probability of a miss into that due to weapon accuracy (M16 vs AK47), user capacity in ideal conditions, and operational stress

screen-shot-2017-01-10-at-2-15-25-pm

Source:  Figure 2 above, Weaver 1990.
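
To see the arithmetic behind this decomposition, here is a small back-of-the-envelope sketch in Python using the 200-yard hit probabilities quoted above. The simple additive split is our own illustration and may differ from the exact method in Weaver 1990.

```python
# Hit probabilities at 200 yards, as quoted above (Weaver 1990).
p_pg_m16 = 1.00      # M16, proving ground: pure weapon accuracy
p_pg_ak47 = 0.99     # AK47, proving ground
p_qualifying = 0.64  # rifle qualifying: trained shooter, no stress
p_stress = 0.12      # "worst field experience": combat-like stress

# A simple additive decomposition of the total miss probability.
weapon = p_pg_m16 - p_pg_ak47       # less accurate weapon          (0.01)
capacity = p_pg_m16 - p_qualifying  # user capacity, ideal cond.    (0.36)
stress = p_qualifying - p_stress    # performance under stress      (0.52)
total_miss = weapon + capacity + stress  # AK47 in combat:         ~0.89

for label, part in [("weapon accuracy", weapon),
                    ("user capacity", capacity),
                    ("operational stress", stress)]:
    print(f"{label}: {part:.2f} ({part / total_miss:.0%} of the miss)")
# Weapon accuracy accounts for only about 1 percent of the miss probability.
```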

The AK-47’s limitations in intrinsic accuracy appear to be a technological trade-off and an irremediable consequence of the commitment to design simplicity and operational robustness.  The weapon’s very loose tolerances mean that it can be abused in a variety of ways and poorly maintained, yet still fire with high reliability, but this does limit accuracy (although the AK-47’s design successor, the currently adopted AK-74, did address accuracy issues).  And a weapon that always fires has higher combat effectiveness than a weapon that doesn’t.

While many would argue that the M16 in the hands of a highly trained professional soldier is a superior weapon, this does require training and adapting the soldier and his practices to the weapon.  The entire philosophy of the AK-47 is to design and adapt the weapon to soldiers who likely have little or no formal education and who are expected to be conscripted and put into battle with little training.  While it is impossible to separate out geopolitics from weapon choice, estimates are that 106 countries’ military or special forces use the AK-47—not to mention its widespread use by informal armed groups—which is a testament to its being adapted to the needs and capabilities of the user.

Application of ideas to basic education in Africa

Now it might seem odd, or even insensitive, to use the analogy of weapon choice to discuss development practice, but the relative importance of (a) the latest “best practice” technology or program design or policy, versus (b) user capacity, versus (c) actual user performance under real world stress, as avenues for performance improvement arises again and again in multiple contexts.  There is a powerful, often overwhelming, temptation for experts from developed countries to market the latest of what they know and do as “best practice” in their own conditions, without adequate consideration of whether it addresses actual performance in context.

The latest Service Delivery Indicators data that the World Bank has created for several countries in Sub-Saharan Africa illustrate these same issues in basic education.

The first issue is “user capacity in ideal conditions”—that is, do teachers actually know the material they are supposed to teach?  Grade 4 teachers were administered questions from the grade 4 curriculum.  On average only 12.7 percent of teachers scored above 80 percent correct (and this is biased upward by Kenya’s 34 percent, as four of the six countries’ teachers were at 10 percent or below).  In Mozambique only 65 percent of mathematics teachers could do double digit subtraction with whole numbers (e.g. 86-55) and only 39 percent could do subtraction with decimals—and less than 1 percent of teachers scored above 80 percent.

Figure 4:  Teachers in many African countries do not master the material they are intended to teach—only 13 percent of grade 4 teachers score above 80 percent on the grade 4 subject matter

Source:  Filmer (2015) at  RISE conference.

A comparison of the STEP assessment with the PIAAC assessment of literacy in the OECD found that the typical tertiary graduate in Ghana or Kenya has lower literacy proficiency than the typical OECD adult who did not finish high school.  A comparison of performance on TIMSS items finds that teachers in African countries like Malawi and Zambia score about the same as grade 7 and 8 students in typical OECD countries like Belgium.

So even in ideal conditions, in which teachers were present and operating at their maximum capacity, their performance would be limited by the fact that they themselves do not fully master the subject matter at the level they are expected to teach it.

The second issue is performance under “operational stress”—which includes both the stresses of life that might lead teachers not even to reach school on any given day, as well as the administrative and other stresses that might lead teachers to perform below their ideal capacity.  The Service Delivery Indicators measure actual teaching time versus the deficits due to absence from the school and absence from the classroom while at the school.  The finding is that while the “ideal” teaching/learning time per day is five and a half hours, students are actually exposed to only about three hours a day of teaching/learning time on average.  In Mozambique the learning time was only an hour and forty minutes a day, rather than the official (notional) instructional time of four hours and seventeen minutes.
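
As a quick back-of-the-envelope check on these figures, the arithmetic below reproduces the implied loss shares (a simple illustration using the numbers quoted above, not the SDI methodology itself):

```python
# Teaching-time deficits implied by the figures quoted above.
official_min = 4 * 60 + 17   # Mozambique official instructional time: 4h17m
actual_min = 1 * 60 + 40     # observed teaching/learning time: 1h40m
print(f"Mozambique: {1 - actual_min / official_min:.0%} of instructional time lost")  # ~61%

ideal_hours, actual_hours = 5.5, 3.0  # cross-country average
print(f"Average: {1 - actual_hours / ideal_hours:.0%} of teaching time lost")  # ~45%
```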

On top of this pure absence, the question is whether, under the actual pressure and stress of classrooms, even the remaining teaching/learning time is spent at the maximum of the teacher’s subject matter and pedagogical practice capacity.

Figure 5:  Actual teaching/learning time is reduced dramatically by teacher absence from school and classroom

Source:  Filmer (2015) at RISE conference.

The “global best practice” versus performance priority mismatch

The range in public sector organizational performance and outcomes across the countries of the world is vast in nearly every domain—education, health, policing, tax collection, environmental regulation (and yes, military).  In some very simple “logistical” tasks there has been convergence (e.g. vaccination rates and vaccination efficacy are very high even in many very low capability countries) but the gap in more “implementation intensive” tasks is enormous.  Measures of early grade child mastery of reading range from near universal (only 1 percent of Philippine 3rd graders cannot read a single word of linked text) to abysmal (70 percent of Ugandan 3rd graders cannot read at all).

This means that in high performing systems the research questions are pushed to the frontiers of “best practice”, both in technology and in the applied research of management and operations.  There is no research or application of knowledge aimed at improving performance in tasks that are done well and routinely in actual operational conditions by most or nearly all service providers.  That is taken for granted and not a subject of active interest.  There is research interest in expanding the frontier of possibility, and practical research interest in how to increase the capacity of the typical service provider and their performance under actual stressed conditions—but in high performing systems both are aimed at expanding the frontier of actual and achieved practice in the more difficult tasks.  This learning may be completely irrelevant to what is the priority in low performing systems.  Worse, attempts to transplant “best practice” in technology or organizations or capacity building that is mismatched to actual capacity, or that cannot be implemented in current conditions, may distract national elites from their own conditions and priorities.

What are the lessons of the “best practice” successes of the Finnish schooling system for Pakistan or Indonesia or Peru?  What are the lessons of Norway’s “best practice” oil revenue stabilization fund for Nigeria or South Sudan?  What are the lessons of OECD “best practice” for budget and public financial management for Uganda or Nepal?  I am confident there are interesting and relevant lessons to learn, but the experience of the AK-47 should give some pause as to whether a globally relevant “best practice” isn’t a pipe dream.

Figure 6:  Potential mismatch of global “best practice” and research performance priorities in low performance systems.

Thomas C. Schelling’s Contributions to Policy Analysis

Guest blog by Robert Klitgaard

Thomas C. Schelling has been rightly lionized for his contributions in economics, international security, and the transdisciplinary field of game theory. He was also a pioneer in policy analysis. In this note, I want to reflect on what Schelling can teach us about doing policy research.

Though a theorist, he was fascinated by real examples and found them indispensable for developing theory. “In my own thinking,” Schelling wrote in the preface to The Strategy of Conflict (1960, p. vi), “they have never been separate. Motivation for the purer theory came almost exclusively from preoccupation with (and fascination with) ‘applied’ problems; and the clarification of theoretical ideas was absolutely dependent on an identification of live examples.”[1]

This passion led him to topics ranging from foreign aid and international economics to diplomacy, war, and terrorism, from crime to altruism, from collective action to the nature of the self. In the long, discussion-paper version of his “Hockey Helmets” essay (1972), an index shows readers where to locate the many examples he uses along the way because, Schelling noted, they are what many readers most want to find.

Schelling unpacked concepts, rebutted simplistic solutions, and expanded the range of alternatives. “I am drawing a distinction, not a conclusion,” he wrote, prototypically, in an article on organizations. In this piece he distinguished exercising from defining responsibility, standards that impose costs from those that do not, costs arising from an act from those prompted by the fear of that act, wanting to do the right thing from figuring out what the right thing is, discouraging what is wrong from doing what is right, and the firms of economic abstraction from businesses as “small societies comprising many people with different interests, opportunities, information, motivations, and group interests.” Regarding an organization, he noted, “It may be important to know who’s in charge, and it may be as difficult as it is important” (1974, pp. 82, 30, 83).

Policy analysis à la Schelling means analysis that enriches. Through a combination of simplifying theory and elegant example, he forces us to realize that there are not one or two but a multiplicity of, say, military strengths, public goods, types of discrimination, nonviolent behaviors, actions that affect others, ways to value a human life. “My conjectures,” he said of his analysis of various kinds of organized crimes, “may at least help to alert investigators to what they should be looking for; unless one raises the right questions, no amount of hearings and inquiries and investigations will turn up the pertinent answers” (1971, p. 649). Not for him normal science’s quantitative demonstration that a qualitative point from simplifying theory cannot be rejected at the usual level of significance.

And not for him the policy recommendation of what might be called, “normal policy analysis.” Schelling was after enriching principles, and “principles rarely lead straight to policies; policies depend on values and purposes, predictions and estimates, and must usually reflect the relative weight of conflicting principles” (1966, p. vii).

In a little-known essay, Schelling reviewed “the non-accomplishments of policy analysis” in fields from defense to energy to health to education. He argued that policy analysis as customarily practiced has made so little difference because the usual paradigm is wrong.

If policy analysis is the science of rational choice among alternatives, it is dependent on another more imaginative activity—the invention of alternatives worth considering …

The point I would make is that policy analysis may be doomed to inconsequentiality as long as it is thought of within the paradigm of rational choice…

[P]olicy analysis may be most effective when it is viewed within a paradigm of conflict, rather than of rational choice … Analyzing the interests and the participants may be as important as analyzing the issue. Selecting the alternatives to be compared, and selecting the emphasis to be placed on the criteria for evaluation may be what matters, and the creative use of darkness may be as much needed as the judicious use of light. (1985, pp. 27-28)

What is the paradigm of policy analysis that Schelling rejected? Analysts are given the objectives, alternative actions, and perhaps constraints. The analysts then assess the likely effects of the various actions. They calculate which alternative maximizes the objectives, and from this they derive a prescription for action.

This rejected paradigm conceives of the analytical problem as the leap from givens to prescriptions, from the “if” to the “then”. This conception borrows from economics. Under idealized assumptions, economic science is able to derive powerful statements about optimal courses of action. Seduced, the analyst may accept a lot of unrealistic restrictions on the “if” for the thrill of an unassailable “then”. But as Schelling pointed out, in real policy making the intellectual problem is often a different one: how to discover, how to be more creative about, the objectives, the alternatives, and the constraints. In other words, how to understand, expand, and enrich the “if.”

The rejected paradigm says that the policy maker’s problem is deciding among many given courses of action. Schelling’s version turned this around. The problem is understanding, indeed generating, the objectives and the range of alternatives. Once policymakers have done that, they usually do well at making decisions. They are already pretty good at the “then” part; they may need help on the “if.”

On this view, policy analysis provides not so much a set of answers that politicians should adopt and bureaucrats implement, but a set of tools and examples for enriching the appreciation of alternatives and their consequences.

This conception of policy analysis has another implication that has to do with the lamentable reluctance of politicians to adopt and bureaucrats to implement the excellent advice of policy analysts. Under the standard paradigm, it is at first baffling why one’s optimal advice is not pursued—until one notes that, unlike oneself, policymakers and bureaucrats have selfish agendas. Aha.

But to the policy analyst clued in by Thomas C. Schelling, the resistance of politicians and functionaries may mean more. Politicians’ resistance may be a sign that the analyst does not understand the operative “objective functions.” Bureaucrats’ resistance may indicate that the analyst has more to learn about the alternatives and constraints. In most real policy problems, the objectives, alternatives, and constraints are not “given.”

So, when confronted with the apparently stupid or self-serving reluctance of the real world to heed our advice, we should listen carefully and learn. The words and actions of the politicians and the bureaucrats may provide invaluable clues for appreciating what the objectives and alternatives really are and might be. And, after listening, our task as analysts is to use theoretical tools and practical examples to expand and enrich their thinking about objectives, alternatives, and consequences. At least part of the failure of standard policy analysis to make a difference stems from the way many analysts conceive of “answers” in public policy.

Schelling’s style was as distinct as his enriching objective. His papers are essays in the first person, packed with care and taste and touches of humor.

Sometimes promises are enforced by a deity from which broken promises cannot be hidden. “Certain offenses which human law could not punish nor man detect were placed under divine sanction,” according to Kitto. “Perjury is an offense which it may be impossible to prove; therefore it is one which is particularly abhorrent to the gods.” This is a shrewd coupling of economics with deterrence: if divine intervention is scarce, economize it by exploiting the comparative advantage of the gods. If their intelligence system is better than that of the jurists, give them jurisdiction over the crimes that are hardest to detect. The broken promises that are hardest to detect may, like perjury, fall under their jurisdiction. But be careful not to go partners with anyone who does not share your gods. (1989, p. 118)

Stylistically as well as substantively, Schelling recast the predominant paradigm of policy analysis. He was an enricher of the “if,” a catalyst for one’s own creativity. In what he wrote and how, he was aware of the importance of intangibles like perceptions, inclinations, and will—in the policy maker and in the reader as well.[2] Policy analysis in the Schelling style tries to unpack the concept under discussion, even an emotively loaded one; one disaggregates and reclassifies. One approaches a sensitive subject by highlighting not the moral failures of individuals but the structural failures of information and incentives. One uses a simplifying theory to obtain, not an optimizing model under restrictive assumptions, but a framework that stimulates the creativity of policy-makers and managers in their varied and unique circumstances.


[1] An earlier classic on the strategy of conflict contained a similar sentiment: “Just as some plants bear fruit only if they don’t shoot up too high, so in the practical arts the leaves and flowers of theory must be pruned and the plant kept close to its proper soil—experience” (Clausewitz 1976, p. 61).

[2] A military example of this theme: “[W]e are necessarily dealing with the enemy’s intentions—his expectations, his incentives, and the guesses that he makes about our intentions and our expectations and our incentives … This is why so many of the estimates we need for dealing with these problems relate to intangibles. The problem involves intangibles. In particular, it involves the great intangible of what the enemy thinks we think he is going to do” (Schelling 1964, p. 216).

References:

Clausewitz, Carl von. 1976. On War, ed. and trans. Michael Howard and Peter Paret, Princeton, NJ: Princeton University Press.

Schelling, Thomas C. 1960. The Strategy of Conflict, Cambridge, MA: Harvard University Press.

Schelling, Thomas C. 1964. “Assumptions about Enemy Behavior.” In Analysis for Military Decisions, E.S. Quade, ed., Santa Monica: The RAND Corporation.

Schelling, Thomas C. 1966. Arms and Influence, New Haven, CT: Yale University Press.

Schelling, Thomas C. 1971. “What Is the Business of Organized Crime?” The American Scholar, 40, 4, Autumn.

Schelling, Thomas C. 1972. “Hockey Helmets, Concealed Weapons and Daylight Saving: A Study of Binary Choices with Externalities,” Discussion Paper No. 9, Cambridge, MA: Kennedy School of Government.

Schelling, Thomas C. 1974. “Command and Control,” in Social Responsibility and the Business Predicament, James W. McKie, ed., Washington, D.C.: The Brookings Institution.

Schelling, Thomas C. 1985. “Policy Analysis as a Science of Choice,” in Public Policy and Policy Analysis in India, R.S. Ganapathy et al., eds., New Delhi: Sage.

Schelling, Thomas C. 1989. “Promises,” Negotiation Journal, 5, no. 2, April.

Doing development differently: two years on, what are we learning?

On 17 November 2016, ODI, in collaboration with the Building State Capability program at Harvard University, convened a private workshop bringing together a number of actors from academia, civil society, and donor organizations to look at how the adaptive development agenda has been put into practice around the world. We attempted to draw out some lessons learnt and to chart a way forward, both for actors already working in this space and for those new to the agenda and interested in how to do development differently. You can view the agenda here.

Please find the videos below:

You can also view Duncan Green’s post on the event here.

State Capability Matters

written by Lant Pritchett

The Social Progress Index is a new attempt to gauge human well-being across countries that does not rely on standard measures like GDP per capita, but rather builds an index of social progress from the ground up.  The overall index is divided into three component measures:  Basic Human Needs, Foundations of Well-being, and Opportunity.

The Building State Capability program focuses on new approaches to building the capability of governments and their organizations to carry out their functions—from making and implementing a budget to regulating the environment to maintaining law and order to educating children.

Some natural questions to ask are:

  • Do countries with more government capability have higher levels of social progress?
  • Does the positive association between government capability and these measures persist after controlling for a country’s GDP per capita and its extent of democracy?
  • How big is this connection?

The answers are:

  • Yes.
  • Yes.
  • Very big.

Table 1 reports a simple OLS multivariate regression of the Social Progress Index and its three main components on (natural log) GDP per capita, a measure of state capability (the World Governance Indicators measure of Government Effectiveness), and the Polity IV measure of autocracy/democracy.  All of these are rescaled from 0 (the lowest country) to 100 (the highest country) so that the coefficients can be compared across indicators.  In this scaling the regression coefficients say that a 1 point change in, say, WGI Government Effectiveness is associated with a 0.39 point change in the Social Progress Index or a 0.53 point change in the Opportunity index.

Table 1:  State capability matters for well-being

Main indices of the Social Progress Index and its three components (all rescaled to 0 to 100)

                                          ln(GDPPC)    WGI Gov’t        Polity       R2
                                          (rescaled)   Effectiveness    (rescaled)
                                                       (rescaled)
Social Progress Index     Coefficients       0.50         0.39             0.13      0.92
                          t-stats           12.26         7.96             5.41
Basic Human Needs         Coefficients       0.69         0.26             0.00      0.82
                          t-stats           11.06         3.48             0.05
Foundations of Wellbeing  Coefficients       0.56         0.38             0.18      0.86
                          t-stats            8.46         5.81             5.90
Opportunity               Coefficients       0.29         0.53             0.25      0.87
                          t-stats            4.68         8.91             8.67

The first two questions are answered in Table 1: these regressions say that, for a country with the same GDP per capita and the same rating on democracy, an improvement in state capability is associated with large improvements in all four indicators of well-being (and these estimates are precise, so we can confidently reject that any of them is zero).

These effects are big, which can be illustrated in two ways.

  • First, there is a massive literature on the connection between GDP per capita (which measures the average productivity of a country and hence is a crude indicator of the material basis available) and various indicators of well-being.  This literature tends to find very powerful correlations.  So it is striking that the associations with government effectiveness are nearly as large as those with GDP per capita:  a one point improvement in (rescaled ln) GDPPC is associated with an SPI higher by 0.5 points, and in WGI-GE by 0.39 points (80 percent as large).  Interestingly, the association with state capability is consistently and substantially larger than with POLITY’s rating of democracy.
  • Second, Figure 1 shows the association between the WGI government effectiveness measure and SPI, after controlling for GDP per capita and the POLITY rating of democracy.  This asks: “for countries with the same GDPPC and POLITY, how much higher would we expect SPI to be if government effectiveness were higher?”  As the graph shows, moving from Venezuela’s capability (which is low for its GDPPC and POLITY) to Rwanda’s (which is high for its GDPPC and POLITY) would improve the Social Progress Index by over 20 points, which is the raw gap between, say, Bangladesh (37) and the Dominican Republic (59), or between Indonesia (53) and Israel (75).
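
For readers who want to see the mechanics, here is a minimal sketch of this kind of rescaled OLS regression in Python with statsmodels. It is our illustration rather than the original analysis; the data file and column names (spi, gdppc, wgi_ge, polity) are hypothetical placeholders.

```python
# A sketch of the Table 1 regression; the data file and column names are
# hypothetical placeholders, not the author's actual dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("spi_capability.csv")  # hypothetical country-level dataset

def rescale_0_100(s: pd.Series) -> pd.Series:
    """Rescale so the lowest country is 0 and the highest is 100."""
    return 100 * (s - s.min()) / (s.max() - s.min())

df["ln_gdppc"] = rescale_0_100(np.log(df["gdppc"]))
for col in ["wgi_ge", "polity", "spi"]:
    df[col] = rescale_0_100(df[col])

# OLS of the (rescaled) Social Progress Index on the three regressors;
# the common 0-100 scaling makes coefficients comparable across indicators.
X = sm.add_constant(df[["ln_gdppc", "wgi_ge", "polity"]])
result = sm.OLS(df["spi"], X, missing="drop").fit()
print(result.summary())

# Figure 1-style conditional association (an added-variable plot):
# SPI vs WGI Government Effectiveness, controlling for the other two.
sm.graphics.plot_partregress("spi", "wgi_ge", ["ln_gdppc", "polity"],
                             data=df, obs_labels=False)
```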

Of course this kind of data cannot resolve questions of cause and effect (perhaps social progress or its components leads to greater state capability), but to the extent these associations reflect a causal impact of state capability on well-being, these are impressively large impacts, and they highlight the need for more attention to understanding not just how to promote economic growth but also how to build the capability of the state and its organizations.

PDIA Notes 2: Learning to Learn

written by Peter Harrington

After over two years of working with the government of Albania, and as we embark on a new project to work with the government of Sri Lanka, we at the Building State Capability program (BSC) have been learning a lot about doing PDIA in practice.

Lessons have been big and small, practical and theoretical – an emerging body of observations and experience that is constantly informing our work. Amongst other things, we are finding that teams are proving an effective vehicle for tackling problems. We have found that a lot of structure and regular, tight loops of iteration are helping teams reflect and learn. We have found that it is vital to engage with several levels of the bureaucracy at the same time to ensure a stable authorising space for new things to happen. This all amounts to a sort of ‘thick engagement’, where little-and-often type interaction, woven in at many levels, bears more fruit than big set-piece interventions.

Each of these lessons are deserving of deeper exploration in their own right, and we will do so in subsequent posts. For now, I want to draw out some reflections about the real goal of our work, and our theory of change.

In the capacity-building arena, the latest wisdom holds that the best learning comes from doing. We think this is right. Capacity building models that rely purely on workshop or classroom settings and interactions are less effective in creating new know-how than interventions that work alongside officials on real projects, allowing them to learn by working on the job. Many organisations working in the development space now explicitly incorporate this into their methodology, and in so doing promise to ensure delivery of something important alongside the capacity building (think of external organizations that offer assistance in delivery, often by placing advisers into government departments, and promise to ensure a certain goal is achieved and the government capacity to deliver is also enhanced).

It sounds like a win-win (building capabilities while achieving delivery). The problem is that, in practice, when the implementers in the governments inevitably wobble, or get distracted, or pulled off the project by an unsupportive boss (or whatever happens to undermine the process, as has probably happened many times before), the external advisors end up intervening to get the thing done, because that’s what was promised, what the funder often cares more about, and what is measurable.

When that happens, the learning stops. And the idea of learning by doing stops, because the rescue work by external actors signalled that learning by doing—and failing, at least partially, in the process—was at best a secondary objective (and maybe not even a serious one). Think about anything you have ever learned in your life – whether at school or as an adult. If you knew someone was standing by to catch a dropped ball, or in practice was doing most of the legwork, would you have really learned anything? For the institutions where we work, although the deliverable may have been delivered, when the engagement expires, nothing will have changed in the way the institution works in the long run. This applies equally, by the way, to any institution or learning process, anywhere in the world.

The riddle here is this: what really makes things change and improve in an institution, such that delivery is enhanced and capability to deliver is strengthened? The answer is complex, but it boils down to people in the context doing things differently – being empowered to find out what different is and actually pursue it themselves.

In pursuing this answer, we regularly deploy the concept of ‘positive deviance’ in our work: successful behaviors or strategies that enable some people to find better solutions to a problem than their peers, despite facing similar challenges and having no more resources or knowledge. Such people are everywhere, sometimes succeeding and, depending on the conditions, sometimes failing, to change the way things work – either through their own force of will, or by modelling something different. Methods to find and empower positive deviants within a community have existed for many years. But what if, by cultivating a habit of self-directed work and problem solving, it were possible not just to discover positive deviants but to create new ones?

Doing things differently stems from thinking differently, and you only think differently when you learn – it’s more or less the definition of learning. Looked at this way, learning becomes the sine qua non of institutional change. It may not be sufficient on its own – structures, systems and processes still matter – but without a change in paradigm among a critical mass of deviants, those other things (which are the stuff of more traditional interventions) will always teeter on the brink of isomorphism.

We believe that positive deviance comes from learning, especially learning in a self-directed way, and learning about things that matter to the people doing them. If you can catalyse this kind of learning in individuals, you create a different kind of agency for change. If you can go beyond this and catalyse this kind of learning in groups of individuals within an institution or set of institutions, and create a sufficiently strong holding space for their positive deviance to fertilise and affect others, then gradually whole systems can change. In fact, I’d be surprised if there’s any other way that it happens. As Margaret Mead put it, “Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.”

This is our theory of change. The methods we use – particularly the structured 6-month intensive learning and action workshop we call Launchpad – are trying above all to accelerate this learning by creating a safe space in which to experiment, teach ideas and methods that disrupt the status quo, and create new team dynamics and work habits among bureaucrats. By working with senior and political leaders at the same time, we are trying to foster different management habits, to help prevent positive deviance being stamped out. In doing all this, the goal is to cultivate individuals, teams, departments and ultimately institutions that have a habit of learning – which is what equips them to adapt and solve their own problems.

This does not mean that the model is necessarily better at achieving project delivery than other methods out there, although so far it has been effective at that too. The difference is that we are willing to let individuals or even teams fail to deliver, because failure is critical for the learning, and without learning there is no change in the long term. Doing this is sometimes frustrating and costly, and certainly requires us to grit our teeth and not intervene, but what we see so often is agents and groups of agents working their way out of tricky situations with better ideas and performance than when they went in. They emerge more empowered and more capable of providing the agency needed for their countries’ development. This is the goal, and it can be achieved.

PDIA: It doesn’t matter what you call it, it matters that you do it

written by Matt Andrews

It is nearly two years since we at the Building State Capability (BSC) program combined with various other groups across the developing world to create an umbrella movement called Doing Development Differently (DDD). The new acronym was meant to provide a convening body for all those entities and people trying to use new methods to achieve development’s goals. We were part of this group with our own approach, which many know as Problem Driven Iterative Adaptation (PDIA). 

Interestingly, a few of the DDD folks thought we should change this acronym and call PDIA something fresher, cooler, and more interesting; it was too clunky, they said, to ever really catch on, and needed to be called something like ‘agile’ or ‘lean’ (to borrow from approaches we see as influential cousins in the private domain).

The DDD movement has grown quite a bit in the last few years, with many donor organizations embracing the acronym in their work, and some even advocating for doing PDIA in their projects and interventions. A number of aid organizations and development consultancies have developed other, fresher terms to represent their approaches to DDD as well; the most common word we see now is ‘adaptive’, with various organizations creating ‘adaptive’ units or drawing up processes for doing ‘adaptive’ work.

‘Adaptive programming’ certainly rolls off the tongue easier than ‘Problem Driven Iterative Adaptation’!

Some have asked me why we don’t rename our approach ‘adaptive’ as well; others have asked where we have been while all the discussions about names and titles and acronyms have been going on, and while organizations in the aid world have been developing proposals for adaptive projects and the like (some of which have now turned into large tenders for consulting opportunities). My answer is simple: I’ve made peace with the fact that we are much more interested in trying to work out how to do this work in the places it is needed the most (in implementing entities within governments that struggle to get stuff done).

So, we have been working out how to do our PDIA work (where the acronym really reflects what we believe—that complex issues in development can only be addressed through problem driven, iterative, and adaptive processes). Our observation, from taking an action research approach to over twenty policy and reform engagements, a light-touch teaching intervention with over 40 government officials, an online teaching program, and more, is clear: the people we work with (and who actually do the work) in governments don’t really care for the catchy name or acronym, or if PDIA is clunky or novel or old and mainstream. The people we are working with are simply interested in finding help: to empower their organizations by building real capability through the successful achievement of results.

We thus seldom even talk about PDIA, or adaptive programming, or DDD, or agile or lean, or whatever else we talk about in our classrooms and seminar venues back at Harvard (and in many of our blog posts and tweets). Indeed, we find that a key dimension of this work—what makes it work—is not being flashy and cool and cutting edge. It’s about being real and applied, and organic, and relational. And actually quite nondescript and mundane; even boringly focused on helping people do the everyday things that have eluded them.

So, PDIA’s lack of ‘flash’ and ‘coolness’ may be its greatest asset (and one we never knew about), because it does not matter what PDIA is called…what matters is whether it is being done.

PDIA Notes 1: How we have PDIA’d PDIA in the last five years

written by Matt Andrews, co-Founding Faculty of the Building State Capability Program

We at the Building State Capability (BSC) program have been working on PDIA experiments for five years now. These experiments have been designed to help us learn how to facilitate problem driven, iterative and adaptive work. We have learned a lot from them, and will be sharing our lessons—some happy, some frustrating, some still so nuanced and ambiguous that we need more learning, and some clear— through a series of blog posts.

Before we share, however, I wanted to clarify some basic information about who we are and what we do, and especially what our work involves. Let me do this by describing what our experiments look like, starting with listing the characteristics that each experiment shares:

  • We have used the PDIA principles in all cases (engaging local authorizers to nominate their own problems for attention, and their own teams, and then working on solving the problems through tight iterations and with lots of feedback).
  • We work with and through teams of individuals who reside in the context and who are responsible for addressing the problems being targeted. These people are the ones who do the hard work, and who do the learning, and who get the credit for whatever comes out of the process.
  • We work with government teams only, given our focus on building capable states. (We do not believe that one can always replace failed or failing administrative and political bodies with private or non profit contractors or operators. Rather, one should address the cause of failure and build capability where it does not exist).
  • We believe in building capability through experiential learning and the failure and success such brings (choosing to institutionalize solutions only after lessons have been learned about what works and why, instead of institutionalizing solutions that imply ex ante knowledge of what works in places where such knowledge does not exist).
  • We work with real problems and focus on real results (defined as ‘problem solved’, not ‘solution introduced’) in order to focus the work and motivate the process (for authorizers and for the teams involved in doing the work).
  • We—the BSC team affiliated with Harvard—see ourselves as external facilitators of a process, and we do not do the substantive work of delivery—even if the results look like they won’t come. Our primary focus is on fostering learning and coaching teams to do things differently and more effectively; we have seen too many external consultants rescue a delivery failure once, and in doing so undermine local ownership of the process and the emphasis on building the local capability to succeed.

This set of principles has underpinned our experimental work in a variety of countries and sectors, where governments have been struggling to get things done. We have worked in places like Mozambique, South Africa, Liberia, Albania, Jamaica, Oman, and now Sri Lanka. We have worked with teams focused on justice reform, health reform, agriculture policy, industrial policy, export promotion, investor engagement, low-income housing, tourism promotion, municipal management, oil and electricity sector issues, and much more.

These engagements have taken different shapes—as we vary approaches to learn internally about how to do this kind of work most effectively, and how to adapt mechanisms to different contexts and opportunities:

  • In some instances, we have been the direct conveners of teams of individuals, whereas we have relied on authorizers in countries to act as conveners in other contexts, and in some interactions we have worked with individuals only—and relied on these individuals to act as conveners in their own contexts.
  • Some of our work has involved extremely regular and tangible interaction from our side—with our facilitators engaging with teams at least every two or three weeks—and other work has seen much less regular, or lighter-touch, interaction (not meeting every two weeks, engaging only by phone every two weeks, or structuring interactions between the peers involved in the work rather than having ourselves as the touch point).
  • We have used classroom structures in some engagements, where teams are convened in a neutral space and work as if in a classroom setting for key points of the process (the initial framing of the work and meetings at major milestones every six weeks or so), but in other contexts we work strictly in the environments of the teams, and in a more ‘workplace-driven’ structure. In other instances, we have relied almost completely on remote correspondence (through online course engagements, for instance).

There are other variations in the experiments, all intended to help us learn from experience about what works and why. The experiments have yielded many lessons, and humbled us as well: some of these experiments have become multi-year interactions where we see people being empowered to do things differently, while others have not even gotten out of the starting blocks. Both experiences humble us for different reasons.

This work is truly the most exciting and time consuming thing I have ever done, but it is also—I feel deeply—the most important work I could be doing in development. It has made my sense of what we need in development clearer and clearer. I hope you also benefit in this way as we share our experiences in coming blog posts.

The “PDIA: Notes from the Real World” blog series

written by Salimah Samji

We are delighted to announce our new PDIA: Notes from the Real World blog series. In this series we will share lessons from our PDIA experiments over the past five years on how to facilitate problem driven, iterative and adaptive work. We will also feature guest blog posts from others who are experimenting with and learning from PDIA. We hope you will join us on this learning adventure!

Read the first blog post written by Matt Andrews here.

SearchFrames for Adaptive Work (More Logical than Logframes)

written by Matt Andrews

Although the benefits of experimental iteration in a PDIA process seem very apparent to most people we work with, we often hear that many development organizations make it difficult for staff to pursue such approaches, given the rigidity of logframe and other linear planning methods. We often hear that funding organizations demand the structured, perceived certainty of a logframe-type device and will not allow projects to be too adaptive.

In response to this concern, we propose a new logframe-type mechanism that embeds experimental iteration into a structured approach to make policy or reform decisions in the face of complex challenges. Called the SearchFrame, it is shown in the Figure below (and discussed in the following working paper, which also offers ideas on using the tool).

Figure: The SearchFrame

The SearchFrame facilitates a transition from problem analysis (core to PDIA) into a structured process of finding and fitting solutions (read more about ‘Doing Problem Driven Work’). An aspirational goal is included as the end point of the intervention, where one would record details of ‘what the problem looks like solved’. Beyond this, key intervening focal points are also included, based on the deconstruction and sequencing analyses of the problem. These focal points reflect what the reform or policy intervention aims to achieve at different points along the path towards solving the overall problem. More detail will be provided for the early focal points, given that we know with some certainty what we need and how we expect to get there. These are the focal points driving the action steps in early iterations, and they need to be set in a defined and meaningful manner (as they shape accountability for action). The other focal points (2 and 3 in the figure) will reflect what we assume or expect or hope will follow. These will not be rigid, given that there are many more underlying assumptions, but they will provide a directionality in the policymaking and reform process that gives funders and authorizers a clear view of the intentional direction of the work.

The SearchFrame does not specify every action step that will be taken, as a typical logframe would. Instead, it schedules a prospective number of iterations between focal points (which one could also relate to a certain period of time). Funders and authorizers are thus informed that the work will involve a minimum number of iterations in a specific period. Only the first iteration is detailed, with specific action steps and a specific check-in date.
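
To make the structure concrete, here is a minimal sketch of how a SearchFrame might be represented in code. The field names and the example problem are our own illustrative assumptions, not a schema from the working paper.

```python
# A sketch of the SearchFrame structure described above; field names are
# illustrative assumptions, not an official schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Iteration:
    action_steps: list[str]   # specific steps; detailed only for the next iteration
    check_in: date            # report-back date for funders and authorizers

@dataclass
class FocalPoint:
    description: str          # what the intervention aims to achieve at this point
    planned_iterations: int   # scheduled (minimum) iterations to reach this point
    next_iteration: Optional[Iteration] = None  # only the first is fully specified

@dataclass
class SearchFrame:
    problem_solved: str       # "what the problem looks like solved"
    focal_points: list[FocalPoint] = field(default_factory=list)

# Made-up example: only the first iteration is detailed; later focal
# points stay loose, to be revised as lessons come in at each check-in.
frame = SearchFrame(
    problem_solved="Exports cleared through the port in 2 days, not 20",
    focal_points=[
        FocalPoint("Customs and port authority agree a joint clearance process", 3,
                   Iteration(["Map current clearance steps", "Convene joint team"],
                             date(2017, 3, 1))),
        FocalPoint("Joint process piloted on one export corridor", 4),
    ],
)
```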

Funders and authorizers will be told to expect reports on all of these check-in dates, which will detail what was achieved and learned and what will be happening in the next iteration (given the SearchFrame reflections shown in the figure). Part of the learning will be about the problem analysis and assumptions underpinning the nature of each focal point and the timing of the initiative. These lessons will feed into proposals to adjust the SearchFrame, which will be provided to funders and authorizers after every iteration. This fosters joint learning about the realities of doing change, and constant adaptation of assumptions and expectations.

Readers should note that this reflection, learning and adaptation make the SearchFrame a dynamic tool. It is not something to use in the project proposal and then revisit only at evaluation. It is a tool to use on the journey, as one makes the map from origin to destination. It allows structured reflections on that journey, and report-backs, where all involved get to grow their know-how as they progress, and turn the unknowns into knowns.

We believe this kind of tool fosters a structured iterative process that is well suited both to addressing complex problems and to meeting the structural needs of formal project processes. As presented, it is extremely information and learning intensive, requiring constant feedback as well as mechanisms to digest that feedback and foster adaptation on its basis. This is partly because we believe that active discourse and engagement are vital in complex change processes, and must therefore be facilitated through the iterations.