Register now for our free PDIA online course

written by Salimah Samji

We are delighted to announce that we will be offering our free PDIA online course once again. This is our third iteration of the course and we will use the recently published “Building State Capability: Evidence, Analysis, Action” book as the core reading.

We will offer two courses tailored to different audiences. Please read the descriptions below to determine which course is the right one for you. You cannot register for both courses.

  • The Practice of PDIA: Building Capability by Delivering Results (February 26 – June 18, 2017). This is a 16-week course for practitioners who are in the weeds of development and actually want to learn how to do PDIA. In this course you will have the opportunity to work together as a team, on your nominated problem, using our tools. The course will include video lectures, required reading, assignments, reflection exercises, peer interaction as well as group work. We estimate that the weekly effort will be between 3 and 5 hours. If you are interested in this course, you will need to identify a problem you want to solve and a team of 4-6 people who will work with you. Enrollment is limited. Please register here.
  • Principles of PDIA: Building Capability by Delivering Results (February 26 – May 7, 2017). This is a 10-week course for practitioners who are not directly involved in the implementation of programs but are interested in learning about PDIA. The course will include video lectures, required reading, assignments, reflection exercises as well as peer interaction. We estimate that the weekly effort required will be between 3 and 5 hours. Enrollment is limited. If you are interested, please register here.

Here are some testimonials from students who have completed a similar version of the Practice of PDIA: Building Capability by Delivering Results.

“The PDIA program faculty was truly exceptional, not only because of their expertise and individual intellect and knowledge and research, but also because they understand how to engage participants in different ways. If you are concerned about why and how countries are poor or mired in a vicious cycle of underdevelopment, then this course is just what you need to help unravel the answers to your questions and arm up with the principles and know-how to tackle them.” Abdulrauf Aliyu, Head of Business Development and Strategy, Inteliworx Technologies, Nigeria

“A couple of years ago I joined the development industry as a program officer for a bilateral aid agency in Tanzania. Three years down the line I was frustrated: our partners in the government were “always committed” but things were not really moving in the way and pace we hoped they would. In short, nothing much was changing. If anyone asked me at the time who is at fault, I would have hastened to say it was the government. Having done the PDIA course, however, I can appreciate better why things were happening the way they were, and our responsibility as staff members of funding agencies in the reform failures. So I am thrilled that it is possible to do development differently, the PDIA way. It does not promise that it will be easier doing development this way, and it might never get any easier; but I believe it offers a better chance of bringing real and lasting change even if it comes slowly.” Rose Aiko, Independent Consultant, Tanzania

“The course was terrific from both a theoretical and practical standpoint. I was amazed at how accurately the issues addressed in the course related to my day-to-day experiences working in development. In fact, our work plan for our upcoming technical assistance program is largely based on PDIA!” Team Leader, Asian Development Bank, Dili, Timor-Leste

“The PDIA course has been for me the learning highlight of this year. The course has given me the knowledge of a process and tools that I was looking for, since traditional approaches to projects with best practices from elsewhere, solution-based, blueprint-based, with fixed plan, aiming always at system change, etc. do not work in most cases. I have now a set of steps and, more importantly, questions that can guide me in the work with colleagues and partners to understand the context in which we try to introduce change, identify concrete problems that people want to solve, and try to solve them, one at a time.” Arnaldo Pellini, Research Fellow, Overseas Development Institute

“As a Project Manager and Solutions Consultant in Nigeria, taking “PDIA: Building Capability by Delivering Results” opened paths to new possibilities for finding and fitting solutions that are based on specific contexts and current realities, by working with clients, communities and policy drivers. At the heart of these possibilities is the realization that no matter what the problem is or how complex it seems, we can start acting immediately. Most importantly, the interactions with peers and access to a growing PDIA Community of Practice provide unlimited potentials for the future.” Abubakar Abdullahi, Managing Principal, The Front Office NG, Nigeria

“Having worked in development for 35 years I recommend this course to all development practitioners. PDIA is a detailed process that will facilitate your design and implementation approach. PDIA has several steps. I believe the adoption of either all of these steps or just some selected steps will improve the design and implementation of your projects and programs, with improved benefits and results.”  John Whittle, Semi-retired and Consulting in Central Asia

“Through the modules of PDIA, I have had a mindset change on how development works and how it could work. It is an approach that has opened my eyes to many things that I had previously struggled to understand in my 15 years of development practice, where I have observed vicious cycles of problems like chronic poverty, corruption, and poor service delivery despite heavy investments by donors and recipient governments. I will continue to see my work with a PDIA lens and assess new projects in the same way. It is exciting to try and do things differently in an effort to get different results from the norm.” Cate Najjuma, Economist, Royal Danish Embassy, Kampala

“The PDIA course is perfectly designed for those who are currently trying to address real world issues. It has contributed to increasing my value added on reform issues in Tunisia. The course is very focused and practical, allowing it to fit into the busy schedule of professionals like me and to learn at an impressive pace. I definitely recommend it to prospective applicants.” Gomez Agou, IMF Desk Economist, Washington DC

“The PDIA course showed how approaching and solving complex and challenging reform efforts are not pinned on rigid, structured frameworks but rather on a common sense approach bottled in a simple method all rooted on the fundamentals of understanding, clarifying, learning, experimenting and adapting.” Abubakar Sadiq Isa, Managing Director, Inteliworx Technologies, Nigeria

“The PDIA course represents an empirical reform prescription in building state capability by delivering results through theoretical and practical approaches geared toward sustained improvement and performance.” Tom Tombekai, Liberia

“I enjoyed taking the course PDIA: Building Capability by Delivering Results. I have been doing development work in Africa in the anti-corruption area. This course introduced me to some new concepts in terms of building acceptance for ideas and programs and especially understanding the environment in terms of what may be possible and how success should be measured. It has changed how I will approach future development problems. I very much enjoyed the readings, lectures and interactions with other students from around the world.” Craig Hannaford, Independent Consultant, Canada

“I have also been taught that every problem has got a series of causes and sub-causes. You really have to be very critical in analyzing a problem in order to address it effectively. This is one of the products of PDIA. I find myself thinking outside the box when I have to solve a problem whether in the office, with vendors or even at home. It is in this course that I first heard “deconstruction of a problem”. Deconstruction and sequencing work has helped me to foster actions to solve a problem. Ultimately, through this course PDIA, I have learnt that in the development sector, before bringing solutions to the government, I have to understand the existing practice, positive deviance, latent practice and external best practice. Without this course, I would not be an improved reformer.” Doris Ahuchama, Finance and Administration Manager, Nigeria

 

PDIA Course: Alumni are already practicing what they learned

written by Salimah Samji

We offered 4 free PDIA online courses between November 2015 and June 2016. They were well received and 365 people, living in 56 countries, successfully completed the courses.

[PDIA course one-pager]

In January 2017, we surveyed the 365 PDIA course alumni to learn whether (and how) they are using PDIA in their day-to-day lives. 113 (31%) of them, living in 36 countries, responded to the survey. This includes people who work for donors, governments, consulting firms, private sector firms and NGOs.

Here’s what we found:

  • 96% of the respondents have used the key concepts, ideas, and tools.
  • 91% have shared the ideas, concepts, and tools with others. They have shared with co-workers, bosses, and friends; led study and discussion groups;  conducted workshops and trainings; and one organization used the content to train others at an annual retreat!
  • 85% have achieved something by doing PDIA. 

The findings and concrete examples that were shared in the survey have been awe-inspiring. People learned the key ideas/concepts/tools we taught, are using them in their work, and are teaching others.

We plan to offer another round of free PDIA online courses soon – stay tuned!


Here are some of the things the course alumni had to say.

I think just appreciating a more building block approach to issues has offered more practical and realistic ways of working.  It has meant accepting that progress may be slower than desired but likely to be more sustainable, because you are starting at the root of the problem and you are working with the grain of political support. – DFID Governance Adviser, Nigeria

The PDIA course helped transform the way I see development administration and governance. I now use a systems thinking frame of mind to see problems and not just throw solutions at them. As Professor Clayton Christensen would say, “WHAT IS THE JOB TO BE DONE?” No matter how elegant or beautiful an introduced solution is, if it does not solve people’s problem then it is useless. – Head of business development, Inteliworx Technologies, Nigeria

Before the course, I was approaching problems (i.e. corruption) as large problems to be solved with a complex approach. PDIA taught me to look at the complexities of the problem, the different interests and barriers, and how to focus efforts on areas that might actually be amenable to incremental change. I learned that any program must assess the environment and devote resources where they will be effective. Analysis of the problem, players and barriers is key before expending resources. – Development Consultant based in Canada

The PDIA course offered some variation in how to think through and act on development problems. As I said in my summing up of the course, it is an approach that can either be used in full or merged in parts with other approaches depending on the context in which one is working/consulting. – Development Practitioner based in Australia

Download the new PDIA book for free

written by Salimah Samji

We are delighted to inform you that our PDIA book, “Building State Capability: Evidence, Analysis, Action,” was just published by Oxford University Press. The book presents an evidence-based analysis of development failures and explains how capability traps emerge and persist. It is not just a critique; it also offers a way of doing things differently. It provides you with the tools you need to personalize and apply these new ideas to your own context.

Here is a review written by Francis Fukuyama:

“Building State Capability provides anyone interested in promoting development with practical advice on how to proceed—not by copying imported theoretical models, but through an iterative learning process that takes into account the messy reality of the society in question. The authors draw on their collective years of real-world experience as well as abundant data and get to what is truly the essence of the development problem.”

In keeping with our commitment to provide free resources that help diffuse our PDIA approach to practitioners around the world, we have made the book available as an open access title under a Creative Commons license (CC BY-NC-ND 4.0). We hope you find the book useful and that it helps create a PDIA community of change that shares, learns and grows together. Visit the book webpage to download your free copy. Please share your thoughts on social media using the hashtag #PDIABook.

Listen to what the authors have to say about the book:

 

Best Practice is a Pipe Dream: The AK47 vs M16 debate and development practice

written by Lant Pritchett

At a recent holiday party I was discussing organizations and innovations with a friend of mine who teaches about organizations at the Harvard Business School and is a student of technology and history. I told him I was thinking about the lessons for the development “best practice” mantra from the AK47 versus M16 debate. Naturally, he brought out his own versions of both for comparison: the early Colt AR-15 developed for the US Air Force (which became the M16) and an East German-produced AK-47.


Development practice can learn from the AK47.  It is far and away the most widely available and used assault rifle in the world.  This is in spite of the fact that it is easy to argue that the M16 is the “best practice” assault rifle.  A key question for armies is whether in practice it is better to adapt the weapon to the soldiers you have or train the soldiers you have to the weapon you want.  The fundamental AK47 design principle is simplicity, which leads to robustness in operation and effective use even by poorly trained combatants in actual combat conditions.  In contrast, the M16 is a better weapon on many dimensions—including accuracy—but only works well when used and cared for by highly trained and capable soldiers.

One important criterion for any weapon is accuracy.  In the 1980s the US military compared the AK47 and the M16 for accuracy at various distances in proving ground conditions that isolated pure weapon accuracy.  The following chart shows the single-shot probabilities of hitting a standard silhouette target at various distances in proving ground conditions.  It would be easy to use this chart to argue that the M16 is a “best practice” weapon, as at middle to long distances its single-shot hit probability is 20 percent higher.

Figure 1:  At proving ground conditions the AK47 is a less accurate weapon than the M16A1 at distances above 200 yards

Source:  Table 4.3, Weaver 1990.

The study, though, also estimates the probability of hitting a target when there are aiming errors by an actual user of the weapon.  In “rifle qualifying” conditions the shooter is under no time or other stress and knows the distance to the target; these are ideal conditions for the shooter to demonstrate high capacity.  In “worst field experience” conditions the shooter is under high, combat-like stress, although obviously these data come from simulations of stress, as it is impossible to collect reliable data from actual combat.

It is obvious in Figure 2 that over most of the range at which assault rifles are used in combat, essentially all of the likelihood of missing the target comes from shooter performance and almost none from the intrinsic accuracy of the weapon.  The M16 maintains a proving-ground hit probability of 98 percent out to 400 yards, but at 400 yards even a trained marksman in zero-stress conditions has only a 35 percent chance of hitting the target, and under stress this falls to 7 percent.

Figure 2:  The intrinsic accuracy of the weapon as assessed on the proving ground is not a significant constraint on shooter accuracy under high stress conditions of shooting


Source:  Table 4.2, Weaver 1990.

At 200 yards we can decompose the difference from the ideal conditions of “best practice” (the M16 on the proving ground has a 100 percent hit probability) into the contributions of a less accurate weapon, of user capacity even in ideal conditions, and of user performance under stress.  The AK47 is 99 percent accurate, but in rifle qualifying conditions the hit probability is only 64 percent with the M16, and in stressed situations only 12 percent with the M16.  So if a shooter misses with an AK47 at 200 yards in combat conditions it is almost certainly due to the user and not the weapon. As the author puts it (in what appears to be military use of irony), while there are demonstrable differences in weapon accuracy they are not really an issue in actual use conditions by actual soldiers:

It is not unusual for differences to be found in the intrinsic, technical performance of different weapons measured at a proving ground.  It is almost certain that these differences will not have any operational significance.  It is true, however that the differences in the…rifles shown…are large enough to be a concern to a championship caliber, competition shooter.

Figure 3:  Decomposing the probability of a miss into that due to weapon accuracy (M16 vs AK47), user capacity in ideal conditions, and operational stress


Source:  Figure 2 above, Weaver 1990.
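To make the decomposition in Figure 3 concrete, here is a minimal sketch in Python of the arithmetic at 200 yards, using only the hit probabilities quoted above; it is an illustration of the text, not Weaver’s own calculation.

```python
# Single-shot hit probabilities at 200 yards, as quoted in the text above.
p_m16_proving    = 1.00  # M16, proving ground: pure weapon accuracy
p_ak47_proving   = 0.99  # AK47, proving ground
p_m16_qualifying = 0.64  # trained shooter, zero-stress "rifle qualifying" conditions
p_m16_stress     = 0.12  # shooter under combat-like stress

# Decompose the shortfall from the "best practice" ideal (100 percent hits)
# into three additive pieces, expressed in percentage points.
gap_weapon   = (p_m16_proving - p_ak47_proving) * 100     # intrinsic accuracy: 1 pp
gap_capacity = (p_m16_proving - p_m16_qualifying) * 100   # user capacity in ideal conditions: 36 pp
gap_stress   = (p_m16_qualifying - p_m16_stress) * 100    # operational stress: 52 pp

print(f"weapon: {gap_weapon:.0f} pp, capacity: {gap_capacity:.0f} pp, stress: {gap_stress:.0f} pp")
# Almost none of the misses in combat conditions are attributable to the weapon itself.
```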

The AK-47’s limitations in intrinsic accuracy appear to be a technological trade-off and an irremediable consequence of the commitment to design simplicity and operational robustness.  The design of the weapon has very loose tolerances, which means that the gun can be abused in a variety of ways and poorly maintained and yet still fire with high reliability, but this does limit accuracy (although the AK-47’s design successor, the currently adopted AK-74, did address accuracy issues).  But a weapon that always fires has higher combat effectiveness than a weapon that doesn’t.

While many would argue that the M16 in the hands of a highly trained professional soldier is a superior weapon, this does require training and adapting the soldier and his practices to the weapon.  The entire philosophy of the AK-47 is to design and adapt the weapon to soldiers who likely have little or no formal education and who are expected to be conscripted and put into battle with little training.  While it is impossible to separate out geopolitics from weapon choice, estimates are that 106 countries’ military or special forces use the AK-47—not to mention its widespread use by informal armed groups—which is a testament to its being adapted to the needs and capabilities of the user.

Application of ideas to basic education in Africa

Now it might seem odd, or even insensitive, to use the analogy of weapon choice to discuss development practice, but the relative importance of (a) latest “best practice” technology or program design or policy versus (b) user capacity versus (c) actual user performance under real world stress as avenues for performance improvement arises again and again in multiple contexts.  There is a powerful, often overwhelming, temptation for experts from developed countries to market the latest of what they know and do as “best practice” in their own conditions without adequate consideration of whether this is actually addressing actual performance in context.

The latest Service Delivery Indicators data that the World Bank has created for several countries in Sub-Saharan Africa illustrate these same issues in basic education.

The first issue is “user capacity in ideal conditions”—that is, do teachers actually know the material they are supposed to teach?  The grade 4 teachers were administered questions from the grade 4 curriculum.  On average only 12.7 percent of teachers scored above 80 percent correct (and this is biased upward by Kenya’s 34 percent, as four of six countries’ teachers were at 10 percent or below).  In Mozambique only 65 percent of mathematics teachers could do double-digit subtraction with whole numbers (e.g. 86-55) and only 39 percent could do subtraction with decimals—and less than 1 percent of teachers scored above 80 percent.

Figure 4:  Teachers in many African countries do not master the material they are intended to teach—only 13 percent of grade 4 teachers score above 80 percent on the grade 4 subject matter


Source:  Filmer (2015) at  RISE conference.

A comparison of the STEP assessment with the PIAAC assessment of literacy in the OECD found that the typical tertiary graduate in Ghana or Kenya has lower literacy proficiency than the typical OECD adult who did not finish high school.  A comparison of the performance on TIMSS items finds that teachers in African countries like Malawi and Zambia score about the same as grade 7 and 8 students in typical OECD countries like Belgium.

So, even in ideal conditions in which teachers were present and operating at their maximum capacity, their performance would be limited by the fact that they themselves do not fully master the subject matter at the level they are intended to teach it.

The second issue is performance under “operational stress,” which includes both the stresses of life that might keep teachers from even reaching the school on any given day and the administrative and other stresses that might lead teachers to do less than their ideal capacity.  The Service Delivery Indicators measure actual time teaching versus the deficits due to absence from the school and lack of presence in the classroom when at the school.  The finding is that while the “ideal” teaching/learning time per day is five and a half hours, students are actually only exposed to about three hours a day of teaching/learning time on average.  In Mozambique the learning time was only an hour and forty minutes a day, rather than the official (notional) instructional time of four hours and seventeen minutes.

On top of this pure absence, the question is whether, under the actual pressure and stress of classrooms, even the teaching/learning time is spent at the maximum of the teacher’s subject matter and pedagogical practice capacity.

Figure 5:  Actual teaching/learning time is reduced dramatically by teacher absence from school and classroom


Source:  Filmer (2015) at RISE conference

The “global best practice” versus performance priority mismatch

The range in public sector organizational performance and outcomes across the countries of the world is vast in nearly every domain—education, health, policing, tax collection, environmental regulation (and yes, military).  In some very simple “logistical” tasks there has been convergence (e.g. vaccination rates and vaccination efficacy are very high even in many very low capability countries) but the gap in more “implementation intensive” tasks is enormous.  Measures of early grade child mastery of reading range from almost universal (only 1 percent of Philippine 3rd graders cannot read a single word of linked text) to very low (70 percent of Ugandan 3rd graders cannot read at all).

This means that in high performing systems the research questions are pushed to the frontiers of “best practice” both in technology and in the applied research of management and operations.  There is no research or application of knowledge in improving performance in tasks that are done well and routinely in actual operational conditions by most or nearly all service providers.  That is taken for granted and not a subject of active interest.  There is research interest in improving the frontier of possibility and interest in practical research into how to increase the capacity of the typical service provider and their performance under actual stressed conditions—but in high performing systems these are both aimed at expanding the frontier of actual and achieved practice in the more difficult tasks.  This learning may be completely irrelevant to the priorities of low performing systems.  Worse, attempts to transplant “best practice” in technology or organizations or capacity building that is a mismatch for actual capacity or cannot be implemented in current conditions may distract national elites from their own conditions and priorities.

What are the lessons of the “best practice” successes of the Finnish schooling system for  Pakistan or Indonesia or Peru?  What are the lessons of  Norway’s “best practice” oil revenue stabilization fund for Nigeria or South Sudan?  What are the lessons of OECD “best practice” for budget and public financial management for Uganda or Nepal?  I am confident there are interesting and relevant lessons to learn, but the experience of the AK-47 should give some pause as to whether a globally relevant “best practice” isn’t a pipe dream.

Figure 6:  Potential mismatch of global “best practice” and research performance priorities in low performance systems.


Thomas C. Schelling’s Contributions to Policy Analysis

Guest blog by Robert Klitgaard

Thomas C. Schelling has been rightly lionized for his contributions in economics, international security, and the transdisciplinary field of game theory. He was also a pioneer in policy analysis. In this note, I want to reflect on what Schelling can teach us about doing policy research.

Though a theorist, he was fascinated by real examples and found them indispensable for developing theory. “In my own thinking,” Schelling wrote in the preface to The Strategy of Conflict (1960, p. vi), “they have never been separate. Motivation for the purer theory came almost exclusively from preoccupation with (and fascination with) ‘applied’ problems; and the clarification of theoretical ideas was absolutely dependent on an identification of live examples.”[1]

This passion led him to topics ranging from foreign aid and international economics to diplomacy, war, and terrorism, from crime to altruism, from collective action to the nature of the self. In the long, discussion-paper version of his “Hockey Helmets” essay (1972), an index shows readers where to locate the many examples he uses along the way because, Schelling noted, they are what many readers most want to find.

Schelling unpacked concepts, rebutted simplistic solutions, and expanded the range of alternatives. “I am drawing a distinction, not a conclusion,” he wrote, prototypically, in an article on organizations. In this piece he distinguished exercising from defining responsibility, standards that impose costs from those that do not, costs arising from an act from those prompted by the fear of that act, wanting to do the right thing from figuring out what the right thing is, discouraging what is wrong from doing what is right, and the firms of economic abstraction from businesses as “small societies comprising many people with different interests, opportunities, information, motivations, and group interests.” Regarding an organization, he noted, “It may be important to know who’s in charge, and it may be as difficult as it is important” (1974, pp. 82, 30, 83).

Policy analysis à la Schelling means analysis that enriches. Through a combination of simplifying theory and elegant example, he forces us to realize that there are not one or two but a multiplicity of, say, military strengths, public goods, types of discrimination, nonviolent behaviors, actions that affect others, ways to value a human life. “My conjectures,” he said of his analysis of various kinds of organized crimes, “may at least help to alert investigators to what they should be looking for; unless one raises the right questions, no amount of hearings and inquiries and investigations will turn up the pertinent answers” (1971, p. 649). Not for him normal science’s quantitative demonstration that a qualitative point from simplifying theory cannot be rejected at the usual level of significance.

And not for him the policy recommendation of what might be called, “normal policy analysis.” Schelling was after enriching principles, and “principles rarely lead straight to policies; policies depend on values and purposes, predictions and estimates, and must usually reflect the relative weight of conflicting principles” (1966, p. vii).

In a little-known essay, Schelling reviewed “the non-accomplishments of policy analysis” in fields from defense to energy to health to education. Policy analysis as customarily practiced has made so little difference because the usual paradigm is wrong.

If policy analysis is the science of rational choice among alternatives, it is dependent on another more imaginative activity—the invention of alternatives worth considering …

The point I would make is that policy analysis may be doomed to inconsequentiality as long as it is thought of within the paradigm of rational choice…

[P]olicy analysis may be most effective when it is viewed within a paradigm of conflict, rather than of rational choice … Analyzing the interests and the participants may be as important as analyzing the issue. Selecting the alternatives to be compared, and selecting the emphasis to be placed on the criteria for evaluation may be what matters, and the creative use of darkness may be as much needed as the judicious use of light. (1985, pp. 27-28)

What is the paradigm of policy analysis that Schelling rejected? Analysts are given the objectives, alternative actions, and perhaps constraints. The analysts then assess the likely effects of the various actions. They calculate which alternative maximizes the objectives, and from this they derive a prescription for action.

This rejected paradigm conceives of the analytical problem as the leap from givens to prescriptions, from the “if” to the “then”. This conception borrows from economics. Under idealized assumptions, economic science is able to derive powerful statements about optimal courses of action. Seduced, the analyst may accept a lot of unrealistic restrictions on the “if” for the thrill of an unassailable “then”. But as Schelling pointed out, in real policy making the intellectual problem is often a different one: how to discover, how to be more creative about, the objectives, the alternatives, and the constraints. In other words, how to understand, expand, and enrich the “if.”

The rejected paradigm says that the policy maker’s problem is deciding among many given courses of action. Schelling’s version turned this around. The problem is understanding, indeed generating, the objectives and the range of alternatives. Once policymakers have done that, they usually do well at making decisions. They are already pretty good at the “then” part; they may need help on the “if.”

On this view, policy analysis provides not so much a set of answers that politicians should adopt and bureaucrats implement, but a set of tools and examples for enriching the appreciation of alternatives and their consequences.

This conception of policy analysis has another implication that has to do with the lamentable reluctance of politicians to adopt and bureaucrats to implement the excellent advice of policy analysts. Under the standard paradigm, it is at first baffling why one’s optimal advice is not pursued—until one notes that, unlike oneself, policymakers and bureaucrats have selfish agendas. Aha.

But to the policy analyst clued in by Thomas C. Schelling, the resistance of politicians and functionaries may mean more. Politicians’ resistance may be a sign that the analyst does not understand the operative “objective functions.” Bureaucrats’ resistance may indicate that the analyst has more to learn about the alternatives and constraints. In most real policy problems, the objectives, alternatives, and constraints are not “given.”

So, when confronted with the apparently stupid or self-serving reluctance of the real world to heed our advice, we should listen carefully and learn. The words and actions of the politicians and the bureaucrats may provide invaluable clues for appreciating what the objectives and alternatives really are and might be. And, after listening, our task as analysts is to use theoretical tools and practical examples to expand and enrich their thinking about objectives, alternatives, and consequences. At least part of the failure of standard policy analysis to make a difference stems from the way many analysts conceive of “answers” in public policy.

Schelling’s style was as distinct as his enriching objective. His papers are essays in the first person, packed with care and taste and touches of humor.

Sometimes promises are enforced by a deity from which broken promises cannot be hidden. “Certain offenses which human law could not punish nor man detect were placed under divine sanction,” according to Kitto. “Perjury is an offense which it may be impossible to prove; therefore it is one which is particularly abhorrent to the gods.” This is a shrewd coupling of economics with deterrence: if divine intervention is scarce, economize it by exploiting the comparative advantage of the gods. If their intelligence system is better than that of the jurists, give them jurisdiction over the crimes that are hardest to detect. The broken promises that are hardest to detect may, like perjury, fall under their jurisdiction. But be careful not to go partners with anyone who does not share your gods. (1989, p. 118)

Stylistically as well as substantively, Schelling recast the predominant paradigm of policy analysis. He was an enricher of the “if,” a catalyst for one’s own creativity. In what he wrote and how, he was aware of the importance of intangibles like perceptions, inclinations, and will—in the policy maker and in the reader as well.[2] Policy analysis in the Schelling style tries to unpack the concept under discussion, even an emotively loaded one; one disaggregates and reclassifies. One approaches a sensitive subject by highlighting not the moral failures of individuals but the structural failures of information and incentives. One uses a simplifying theory to obtain, not an optimizing model under restrictive assumptions, but a framework that stimulates the creativity of policy-makers and managers in their varied and unique circumstances.


[1] An earlier classic on the strategy of conflict contained a similar sentiment: “Just as some plants bear fruit only if they don’t shoot up too high, so in the practical arts the leaves and flowers of theory must be pruned and the plant kept close to its proper soil—experience” (Clausewitz 1976, p. 61).

[2] A military example of this theme: “[W]e are necessarily dealing with the enemy’s intentions—his expectations, his incentives, and the guesses that he makes about our intentions and our expectations and our incentives … This is why so many of the estimates we need for dealing with these problems relate to intangibles. The problem involves intangibles. In particular, it involves the great intangible of what the enemy thinks we think he is going to do” (Schelling 1964, p. 216).

References:

Clausewitz, Carl von. 1976. On War, ed. and trans. Michael Howard and Peter Paret, Princeton, NJ: Princeton University Press.

Schelling, Thomas C. 1960. The Strategy of Conflict, Cambridge, MA: Harvard University Press.

Schelling, Thomas C. 1964. “Assumptions about Enemy Behavior.” In Analysis for Military Decisions, E.S. Quade, ed., Santa Monica: The RAND Corporation.

Schelling, Thomas C. 1966. Arms and Influence, New Haven, CT: Yale University Press.

Schelling, Thomas C. 1971. “What Is the Business of Organized Crime?” The American Scholar, 40, 4, Autumn.

Schelling, Thomas C. 1972. “Hockey Helmets, Concealed Weapons and Daylight Saving: A Study of Binary Choices with Externalities,” Discussion Paper No. 9, Cambridge, MA: Kennedy School of Government.

Schelling, Thomas C. 1974. “Command and Control,” in Social Responsibility and the Business Predicament, James W. McKie, ed., Washington, D.C.: The Brookings Institution.

Schelling, Thomas C. 1985. “Policy Analysis as a Science of Choice,” in Public Policy and Policy Analysis in India, R.S. Ganapathy et al., eds., New Delhi: Sage.

Schelling, Thomas C. 1989. “Promises,” Negotiation Journal, 5, no. 2, April.

Doing development differently: two years on, what are we learning?

On 17 November 2016, ODI, in collaboration with the Building State Capability program at Harvard University, convened a private workshop bringing together a number of actors from academia, civil society, and donors to look at how the adaptive development agenda has been put into practice throughout the world. We attempted to draw out some lessons learnt and to chart a way forward, both for actors already working in this space and for those new to the agenda who are interested in how to do development differently. You can view the agenda here.

Please find the videos below:

You can also view Duncan Green’s post on the event here.

 

State Capability Matters

written by Lant Pritchett

The Social Progress Index is a new attempt to gauge human well-being across countries that does not rely on standard measures like GDP per capita but rather builds an index of social progress from the ground up.  The Social Progress Index is an overall measure that is divided into three component measures: Basic Human Needs, Foundations of Well-being, and Opportunity.

The Building State Capability program focuses on new approaches to building the capability of governments and their organizations to carry out their functions—from making and implementing a budget to regulating the environment to maintaining law and order to educating children.

Some natural questions to ask are:

  • Do countries with more government capability have higher levels of social progress?
  • Does the positive association of government capability and the various measures persist after also controlling for a country’s GDP per capita and the extent of democracy?
  • How big is this connection?

The answers are:

  • Yes.
  • Yes.
  • Very big.

Table 1 reports a simple OLS multivariate regression of the Social Progress Index and its three main components on (natural log) GDP per capita, one measure of state capability (the World Governance Indicators measure of Government Effectiveness), and the Polity IV measure of autocracy to democracy.  All of these are rescaled from 0 (the lowest country) to 100 (the highest country) so that the coefficients across the indicators can be compared.  In this scaling the regression coefficients say that a 1-unit change in, say, WGI Government Effectiveness is associated with a 0.39-point change in the Social Progress Index or a 0.53-point change in the Opportunity index.

Table 1:  State capability matters for well-being

 

Dependent variables: the Social Progress Index and its three components (all rescaled to 0 to 100)

                                          ln(GDPPC)    WGI Gov't Effectiveness    Polity        R2
                                          (rescaled)   (rescaled)                 (rescaled)
Social Progress Index      Coefficients   0.50         0.39                       0.13          0.92
                           t-stats        12.26        7.96                       5.41
Basic Human Needs          Coefficients   0.69         0.26                       0.00          0.82
                           t-stats        11.06        3.48                       0.05
Foundations of Wellbeing   Coefficients   0.56         0.38                       0.18          0.86
                           t-stats        8.46         5.81                       5.90
Opportunity                Coefficients   0.29         0.53                       0.25          0.87
                           t-stats        4.68         8.91                       8.67

The first two questions are answered in Table 1: these regressions say that, for a country with the same GDP per capita and the same rating on democracy, an improvement in state capability is associated with large improvements in all four indicators of well-being (and these estimates are precise enough that we can confidently reject that any of them is zero).
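For readers who want to see the mechanics behind a specification like this, here is a minimal sketch in Python of the rescaling and OLS setup; the data values, column names, and the rescale helper are illustrative assumptions, not the actual dataset behind Table 1.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical illustrative values; the real analysis uses the Social Progress
# Index, GDP per capita, WGI Government Effectiveness, and Polity IV for the
# full cross-section of countries.
df = pd.DataFrame({
    "spi":     [37, 45, 53, 59, 68, 75],              # Social Progress Index
    "gdppc":   [1200, 2500, 3600, 6800, 15000, 35000],
    "wgi_ge":  [-1.1, -0.6, -0.2, 0.1, 0.6, 1.3],     # Government Effectiveness
    "polity":  [1, 3, 5, 7, 9, 10],                   # autocracy (low) to democracy (high)
})

def rescale(x):
    """Rescale a series to 0 (lowest country) ... 100 (highest country)."""
    return 100 * (x - x.min()) / (x.max() - x.min())

X = pd.DataFrame({
    "ln_gdppc": rescale(np.log(df["gdppc"])),
    "wgi_ge":   rescale(df["wgi_ge"]),
    "polity":   rescale(df["polity"]),
})
y = rescale(df["spi"])

# OLS of the (rescaled) index on the three (rescaled) regressors; because all
# variables share the 0-100 scale, the coefficients are directly comparable.
results = sm.OLS(y, sm.add_constant(X)).fit()
print(results.params)
print(results.tvalues)
```

Putting every variable on the same 0 to 100 scale is what makes the coefficients comparable across regressors, which is how the text reads the 0.39 on government effectiveness against the 0.50 on (ln) GDP per capita.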

These effects are big, which can be illustrated in two ways.

  • First, there is a massive literature on the connection between GDP per capita (which measures the average productivity of a country and hence is a crude indicator of the material basis available) and various indicators of well-being.  This literature tends to find very powerful correlations.  So it is interesting that the associations with improvements in government effectiveness are nearly as large as those with GDP per capita.  A one-unit improvement in (rescaled, ln) GDPPC is associated with an SPI higher by 0.50 points, and a one-unit improvement in WGI-GE with 0.39 points (so 80 percent as large).  Interestingly, the association with state capability is consistently and substantially larger than that with POLITY’s rating of democracy.
  • Second, Figure 1 shows the association between the WGI Government Effectiveness measure and SPI after controlling for GDP per capita and the POLITY rating of democracy.  This asks: “for countries with the same GDPPC and POLITY, how much higher would we expect SPI to be if government effectiveness were higher?”  As the graph shows, moving from Venezuela’s capability (which is low for its GDPPC and POLITY) to Rwanda’s (which is high for its GDPPC and POLITY) would improve the Social Progress Index by over 20 points, which is the raw gap between, say, Bangladesh (37) and the Dominican Republic (59), or between Indonesia (53) and Israel (75).

Of course this kind of data cannot resolve questions of cause and effect (perhaps social progress or its components lead to greater state capability), but to the extent these associations reflect a causal impact of state capability on well-being, these are impressively large impacts, and they highlight the need for more attention to understanding not just how to promote economic growth but also how to build the capability of the state and its organizations.