BSC video 24: Selling solutions vs. solving problems

When asked to name a problem, people often name a solution (i.e., the lack of a solution). This leads to designing typical, business-as-usual interventions without addressing the actual problem. In this video, Lant Pritchett uses an education example to illustrate the difference between problems and solutions. You can watch the video below or on YouTube.

If you are interested in learning more, watch constructing problems to drive change.

BSC Video 23: Putting the typology framework together

An analytical typology can help you answer the question: building capability to do what?

In this video, Lant Pritchett synthesizes the four analytical questions you need to ask in order to determine the implementation capability required for your activity, creating five categories of activities. He uses examples from health and the financial sector to illustrate the tasks in each category. You can watch the video below or on YouTube.

If you are interested in learning more, watch why do we need a typology, is your activity transaction intensive, is your activity locally discretionary, is your activity a service or an obligation, and is there a known technology?

BSC Video 22: Is there a known technology for your activity?

An analytical typology can help you answer the question: building capability to do what? This is the last of four videos that address the analytical questions you need to ask in order to determine the implementation capability required for your activity. In this video, Lant Pritchett explains the meaning of a known technology using examples from health and the financial sector. You can watch the video below or on YouTube.

If you are interested in learning more, watch why do we need a typology, is your activity transaction intensive, is your activity locally discretionary, and is your activity a service or an obligation.

BSC Video 21: Is your activity a service or an obligation?

An analytical typology can help you answer the question: building capability to do what? This is the third of four videos that address the analytical questions you need to ask in order to determine the implementation capability required for your activity. In this video, Lant Pritchett explains the meaning of service delivery and the imposition of an obligation. You can watch the video below or on YouTube.

If you are interested in learning more, watch why do we need a typology, is your activity transaction intensive, and is your activity locally discretionary.


BSC Video 20: Is your activity locally discretionary?

An analytical typology can help you answer the question: building capability to do what? This is the second of four videos that address the analytical questions you need to ask in order to determine the implementation capability required for your activity. In this video, Lant Pritchett explains the meaning of local discretion using examples from health and the financial sector. You can watch the video below or on YouTube.

If you are interested in learning more, watch why do we need a typology and is your activity transaction intensive.

BSC Video 19: Is your activity transaction intensive?

An analytical typology can help you answer the question: building capability to do what? This is the first of four videos that address the analytical questions you need to ask in order to determine the implementation capability required for your activity. In this video, Lant Pritchett explains the meaning of transaction intensive using examples from health and education. You can watch the video below or on YouTube.

If you are interested in learning more, watch capability for policy implementation and why do we need a typology.

BSC Video 18: Why do we need a typology?

An analytical typology can help you answer the question: building capability to do what? In this video, Lant Pritchett uses animals and buildings to illustrate how implementation capability can differ by appearance as well as by sector. The next four videos will address the four analytical questions you need to ask in order to determine the implementation capability required for your activity. You can watch the video below or on YouTube.

If you are interested in learning more, watch capability for policy implementation and typology of tasks by capability intensity needed for implementation.

BSC Video 15: PDIA – Moving from Mimicry to Results

It is important to understand why development interventions succeed and why they fail. In this video, Lant Pritchett uses a 2×2 matrix to illustrate that PDIA is an attempt to move from failed mimics to effective innovators. You can watch the video below or on YouTube.

If you are interested in learning more, watch Development as four fold transformation and Capability trapped in a big stuck.

You cannot Juggle without the Struggle: How the USA historically avoided the “Tyranny of Experts”

written by Lant Pritchett

The period between the end of the American Civil War and the end of World War II saw a transformation of America with the rise of dominant large organizations in both the private economy and public life. The economic historian Alfred Chandler, in The Visible Hand and Scale and Scope, documents the rise of “managerial capitalism,” with large economic bureaucracies in railroads, oil, steel, automobiles, electricity, and telecommunications establishing new foundations of a productive economy.

Similarly, in nearly every domain of engagement the organization of the state became more centralized, more bureaucratic, more centrally controlled. Or, as this is typically described, the “Progressive Agenda” caused governance to become more “professional,” more “scientific,” more “efficient,” as bureaucratic hierarchies displaced localism and Ostrom-esque “polycentric” systems. Historians of the US tell this story about many fields and organizations.

Samuel Hays’s Conservation and the Gospel of Efficiency: The Progressive Conservation Movement, 1890-1920 is summarized as: “Against a background of rivers, forests, ranges, and public lands, this book defines two conflicting political processes: the demand for an integrated, controlled development guided by an elite group of scientists and technicians and the demand for a looser system allowing grassroots impulses to have a voice through elected government representatives.” In the end the “modern” organizations of public land management prevailed—but only after having to struggle to prove the improved effectiveness of their methods against continued powerful opposition.

Daniel Carpenter’s The Forging of Bureaucratic Autonomy: Reputations, Networks, and Policy Innovation in Executive Agencies, 1862-1928 narrates the rise of (among other organizations) the “modern” Post Office as a centrally controlled civil service bureaucracy. It had to struggle itself into control of the post against powerful political forces and local resistance that supported the former Jacksonian system of locally appointed postmasters.

Education historian David Tyack’s The One Best System: A History of American Urban Education tells of the rise of the urban school system as a “modern” and “scientific” organization that struggled—with mixed success—to consolidate and control the myriad of locally controlled schools and school districts. America began the twentieth century with 150,000 school districts; today there are around 15,000.

Hernando de Soto, though not an academic historian like the others, tells “The Missing Lessons of US History” in chapter 5 of his The Mystery of Capital. He shows that, as the country expanded westward, property rights in the US were a constant struggle between local systems of de facto recognition of use and attempts at de jure, top-down, ordered and rational systems. In this struggle the de facto usually won politically, and the law changed to acknowledge the reality rather than vice versa.

In each of these cases, narratives of the “scientific,” the “efficient,” the “rational,” the “ordered,” and the “modern” were used to justify placing ever greater power into the hands of centralized organizations. But these organizations had to struggle against counter-claims from people, grassroots movements, and local politics, all of whom knew they were losing power. Westerners strongly resisted the claims of the “conservation” movement that it should be given consolidated power over land use. Parents resisted being sidelined in their control of the schools. People with de facto usufruct rights over land resisted the formal legal claims against their control. Local postmasters resisted the increasingly central control of the post office.

In this struggle they drew on their rights as citizens, using a variety of modes of expressing opposition and their voice in democratic processes. The result was a messy, protracted, conflicted struggle.

I take one of the main points of Bill Easterly’s new book The Tyranny of Experts: Economists, Dictators, and the Forgotten Rights of the Poor to be that the mainstream “development” process underestimated the necessity of the struggle. That is, it was thought that “modernization” could be achieved as a purely technical exercise in which the demonstrably successful organizations and institutions of the “developed” world could be transplanted to other countries. Why should a newly sovereign country in the 1950s and 1960s “struggle” toward an effective Post Office when working models existed throughout the world (the US Post Office, the Royal Mail, the Bundespost)? It was believed that “modern” police forces, schools, roads, courts, forest services, water companies—the organizations that make a state capable and deliver what the state promises—could be created without all of the bother of citizens being empowered not just to vote, but also to resist, to subvert, to complain, to protest, to organize, and to agitate.

There are a variety of conjectures as to how and why this possibility of “transplantation” through development could actually, in some instances, make things worse.

One, organizations could gain legitimacy simply from mimicking the forms of rich country organizations without their function. Sociologists of organizations call this “isomorphism” and describe the pernicious impacts of allowing organizations to attract support without having to demonstrate superior functionality.

Two, the availability of resources from development agencies, which were more familiar with and expected to see “modern” organizations, meant that accountability to citizens could be attenuated.

Three, by changing the nature of the struggle the “experts” were not forced into a process of testing their ideas and notions and ways of “seeing like a state” against direct and immediate feedback—and push back—from local realities. Mistakes of mismatch between what the “experts” recommended and the reality of what could work in the local context could be larger and persist longer when insulated from the test of functionality.

But you cannot juggle without the struggle. The fact that someone else can juggle, and can show you how to juggle, and describe juggling in great detail does not mean that functionality is transferable. By changing the nature of the struggle many developing countries are stuck with state organizations that just cannot juggle.

Rigorous Evidence Isn’t

written by Lant Pritchett

Currently, there are many statements floating around in development about the use of “rigorous evidence” in formulating policies and programs. Nearly all of these claims are fatuous. The problem is, rigorous evidence isn’t.

That is, suppose one generates some evidence about the impact of some programmatic or policy intervention in one particular context that is agreed by all to be “rigorous” because it meets methodological criteria for internal validity of its causal claims. But the instant this evidence is used in formulating policy it isn’t rigorous evidence any more. Evidence would be “rigorous” about predicting the future impact of the adoption of a policy only if the conditions under which the policy was to be implemented were exactly the same in every relevant dimension as those under which the “rigorous” evidence was generated. But that can never be so, because neither economics—nor any other social science—has theoretically sound and empirically validated invariance laws that specify what “exactly the same” conditions would be.

So most uses of rigorous evidence aren’t. Take, for instance, the justly famous 2007 JPE paper by Ben Olken on the impact of certain types of monitoring on certain types of corruption. According to Google Scholar as of today, this paper has been cited 637 times. The question is: for how many of the uses of this “rigorous evidence” is it really “rigorous evidence”? We (well, my assistant) sampled 50 of the citing papers, which contained 57 unique mentions of Olken (2007). Only 8 of those papers were about Indonesia. (Of course, even those 8 are only arguably “rigorous” applications, as they might be about different programs, different mechanisms, or different contexts.) 47 of the 57 mentions (82%) are neither about Indonesia nor even about an East Asia or Pacific country—they might be a review of the literature about corruption in general, about another country, or methodological. We also tracked whether the words “context” or “external validity” appeared within two paragraphs before or after the mention. In 34 of the 57 mentions (60%), the evidence was not about Indonesia and there was no acknowledgment that the results, while “rigorous” for that time, place, and programmatic/policy context, have no claim to be rigorous about any other time, place, or programmatic/policy context.

Another justly famous paper, Angrist and Lavy (1999) in the QJE, uses regression discontinuity to identify the impact of class size on student achievement in Israel. This paper has been cited 1244 times. I looked through the first 150 citations to this paper (which Google Scholar sorts by the number of times the citing paper has itself been cited) and, other than other papers by the authors, not one mentioned Israel in the title or abstract (not that surprising, as Israel is a tiny country), while China, India, Bangladesh, Cambodia, Bolivia, the UK, Wales, the USA (various states and cities), Kenya, and South Africa all figured. Angrist and Lavy do not, and do not claim to, provide “rigorous” evidence about any of those contexts.

If one is formulating policies or programs for attacking corruption in highway procurement in Peru or reducing class size in secondary school in Thailand, it is impossible to base those policies on “rigorous evidence” as evidence that is rigorous for Indonesia or Israel isn’t rigorous for these other countries.

Now, some might argue that formulation of policies or programs in context X should rely exclusively/primarily/preferentially on evidence that is “rigorous” in context Z because at least we know that in context Z, where it was generated, the evidence is internally valid. This is both fatuous and false as a general proposition.

Fatuous in that no one understands the phrase “policy based on rigorous evidence” to mean “policy based on evidence that isn’t rigorous with respect to the actual policy context to which it is being applied (because there are no rigorous claims to external validity) but rather based on evidence that is rigorous in some other context.”  No one understands it that way because that isn’t rigorous evidence.

It is also false as a general proposition. It is easy to construct plausible empirical examples in which the bias from weak internal validity is much (much) smaller than the bias from weak external validity, because the contextual variation in “true” impact is much larger than the methodological bias from the lack of “clean” causal identification in simple methods. In these instances, better policy is made using “bad” (i.e., not internally valid) evidence from the same context than “rigorous” evidence from another context (e.g., Pritchett and Sandefur 2013).

Sadly, perhaps, there is no shortcut around using judgment and wisdom in assessing all of the available evidence when formulating policies and programs. Slogans like “rigorous evidence” are an abuse, not a use, of social science.