We are proud to announce that "Looking Like a State: Techniques of Persistent Failure in State Capability for Implementation," co-authored by Matt Andrews, Lant Pritchett, and Michael Woolcock, won the Faculty Article Award from the Sociology of Development Section of the American Sociological Association (ASA). The award ceremony was held in San Francisco on August 16, 2014. This seminal paper is the foundation of the Building State Capability (BSC) program and the precursor to PDIA. For more information, please read Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA) or watch our Vimeo Channel.
written by Matt Andrews
What do you do if your government has been pursuing reforms for years, with apparent success, but your economy is still not growing? What do you do if the constraint seems to be the limited capacity of government organizations? What do you do if this capacity remains stubbornly low even after years of public sector reforms sponsored by outside partners and based on promising best practice ideas of fly-in-fly-out specialists?
A recent case study of work supported by the Africa Governance Initiative suggests an approach to just such a situation, faced by Rwanda in 2010. The approach is simple.
- Force your own people to look at festering problems in an up-close-and-personal manner, focusing on ‘why’ the problems persist instead of ‘what’ easy solutions outsiders might propose for the problems.
- Swarm the problems with new ideas emerging from those inside your government and from trusted outsiders committed to spending time adapting and translating their ideas to your context (instead of one-size-fits-all solutions coming from short-stay outsiders).
- Experiment actively with the ideas in a step-by-step manner, trying them out and seeing what works, why, and with what kinds of nips and tucks.
- Learn. Yes, simply learn. Let people reflect on what they have done, absorb what made the difference, and carry what they take from the experiences to other tasks.
This is what I understand the Strategic Capacity Building Initiative (SCBI) was and is about. It is an approach to doing development that seems to have yielded some dynamic results in a relatively short period. The results are substantive, procedural, organizational, and personal. Farmer incomes are up after some of the experiments in the ‘pilot sites initiative’, for instance. The Energy Investment Unit has emerged as the focal point of a new process to increase energy generation and drive energy costs down. Perhaps more important, to me at least, is the fact that talented civil servants have done things they probably never dreamed they could. In my own language, they have become more adaptive—realizing that you can make a difference if you purposefully address real problems you face in an active, experiential and iterative manner. These young policy entrepreneurs and implementers will be in Rwanda for years to come, and hopefully long after SCBI ceases to exist as a program. They are the real success and legacy of the program.
I find this story line appealing. It tells of an approach to development that reflects the principles of problem driven iterative adaptation (PDIA). This SCBI approach is full of common sense but is also oddly revolutionary because it is such a contrast to the way development is commonly done (and, it seems, was done in some of these areas in Rwanda previously). The case shows that a locally problem driven, adaptive process works in complex developing country reform contexts and for this reason should be of interest to anyone working in development.
Not all is rosy and sweet in the story, however, as is true of all narratives on change and real functional reform, including every development narrative I think reflects the general principles of PDIA. There are a number of reflections on how hard it is to develop a real problem driven approach and allow flexibility in finding and fitting new solutions. The case notes that high-level leaders demanded results particularly rapidly in some instances, for example, and this led to hurried action, mistakes and tension. The case suggests that this can be overcome with some common-sense ideas, like getting political authorizers to prioritize, ask policy people how long something will take, and then agree on realistic time frames. In this respect, it also comments on the importance of focusing attention on a few key issues for deep-dive attention rather than a slew of issues that ultimately only get a shallow look. As one Permanent Secretary notes, “Trying to do everything at the same time doesn’t work.” (It seems basic, doesn’t it? But this kind of mundane observation is one I see overlooked across the development agenda.)
There are also hints at the importance of getting collaborative relationships right, with high-level authorizers and development partners engaging patiently and yet also expectantly with those in the policy and reform trenches. It seems there are real rewards when those at the top give those in the middle, and even lower down the organization, some structured space to prove their value. (Again, this seems basic, noting that ‘people’ really matter; but it is a vital observation in development, where I think ‘policy’ is often seen as more important than people.) The importance of political coalitions and teams, incorporating outsiders and insiders, is also both implicit and explicit, and a vital take-away for those designing reforms and interventions. These coalitions and teams allowed natural coordination across boundaries (without having to change rigid organizational rules) and the cross-fertilization that is vital for the emergence of creative new policy ideas.
Beyond these ideas I was perhaps most struck by the lesson that this work is only achieved if people stick to it. It proved challenging and even uncomfortable to force politicians to prioritize, for instance, but this did not lead to the program falling apart. Instead, as the case notes, the SCBI team exerted “push back” on the system and made sure the prioritization was done. It proved hard to get the right people as well, but the SCBI team pivoted around this issue to get at least some of the right people and build on what they had. It proved tough to get disparate, distributed agencies to work together and to even understand the importance of linkages, but the challenge was met with more determination. It was hard to take people through the uncomfortable process of problem analysis, where they interrogated existing processes to look for gaps (without jumping to quick but unlikely-to-work solutions). But the gap analyses went on nonetheless.
The literature on organizational change has a really academic term to describe the quality I think helped the SCBI reformers to stick to their guns when things got tough. It is ‘grit’ and it is vital for effective reform and change. It is the intangible thing that I think the SCBI story is really all about. It helped to keep the reforms going in the starting months that seemed slow and difficult, and it was what kept the growing SCBI team motivated when the actions they were taking were hard and constantly questioned as time consuming, demanding, politically uncomfortable, and (maybe even) downright impossible. Grit is what helps reformers turn setbacks into lessons, and what keeps reformers looking for the ‘right’ people needed to make something happen. It keeps people engaged in capacity building experiences that take time, personal sacrifice, and political capital. It is the magic ingredient behind real capacity building and change and is the one thing I hope other readers see in this case study, even with the other great practical ideas and embedded advice. The SCBI design and strategy was great, but it was the gritty commitment to make it work that really seems to have made the difference.
The lesson: Cultivate grit, don’t overlook it, as it is the key to capacity building success.
written by Michael Woolcock
“What gets measured is what gets done.” It’s perhaps the most over-cited cliché in management circles, but on a good day an array of thoughtfully crafted indicators can indeed usefully guide decision-making (whether to raise/lower interest rates), help track progress towards agreed-upon objectives (to improve child nutrition), and serve to hold key stakeholders accountable (via election outcomes, customer satisfaction surveys).
Sometimes successfully conducting everyday tasks requires multiple indicators: our cars, for example, provide us with a literal “dashboard” of real-time measures that help us navigate the safe passage of a precarious metal box from one place to another. Under these circumstances – where the problem is clear and the measures are well understood – indicators are an essential part of a solution. On a bad day, however, indicators can be part of the problem, for at least four reasons.
- Indicators are only as good as their underlying quality, and yet we can too readily impute to them a stature they don’t deserve, succumbing to their allure of sophistication and a presumption of robust provenance, with potentially disastrous consequences. Many factors can compromise quality, but chief among them are low administrative capability and lack of agreement on what underlying concepts mean. As Morten Jerven’s recent book Poor Numbers documents, most African countries’ economic growth data are in a parlous state, even though more than 60 years have passed since independence and the passage of a UN agreement providing a global community of practice – civil servants in finance and planning ministries – with detailed guidance on how to measure and maintain such data (the System of National Accounts). Generating and maintaining good indicators is itself a costly and challenging organizational capability, but unfortunately we rarely have companion indicators alerting us to the quality of the data on which we are conducting research and discerning policy. Even a seemingly basic demographic variable, age, is not straightforward to measure, as Akhil Gupta demonstrates in his book Red Tape. In some rural areas of India, Gupta notes, many people simply don’t equate the concept of ‘age’ with an indicator called ‘years since birth’: they have no formal administrative document recording their date and place of birth, they don’t celebrate birthdays, and when asked their ‘age’ respond with the particular stage of life in which their community deems them to be. Thus beyond an organization’s capacity to collect and collate data (which is often low) there has to be agreement between respondents, administrators and users on what it is we’re actually measuring; in many countries, neither aspect can be taken for granted even on ‘simple’ concepts (like age) let alone ‘complex’ ones (like justice, or empowerment).
- If accepted uncritically, indicators can become the proverbial tail wagging the dog, permitting only those courses of action that can be neatly measured and verified. So we pave roads, immunize babies and construct schools in faithful fulfillment of a “results agenda”, but become reluctant to engage with messy, time-consuming and unpredictable tasks like civil service reform or community participation, especially in uncertain places such as fragile states. In an age of declining budgets and heightened public skepticism about the effectiveness of development assistance, some powerful agencies have begun to insist that continued funding be contingent on “demonstrated success” and that priority be given to “proven interventions”. In one sense, of course, this seems eminently sensible; nobody wants to waste time and money, and making hard decisions about the allocation of finite resources to address inherently contentious issues on the basis of “the evidence” sounds like something any field calling itself a profession would routinely do (or at least aspire to). Even the highest quality data, however, in and of itself, tells us very little; the implications of evidence are never self-evident. Changing deeply entrenched attitudes to race relations and gender equality, for example, can proceed along a decidedly non-linear path, with campaigners toiling in apparent obscurity and futility for decades before rapidly succeeding. Consider Nelson Mandela, who spent 27 years in jail before leading a triumphant end to apartheid in South Africa. Taken at face value, an indicator of the success of his “long walk to freedom” campaign at year 26 – ‘still incarcerated’ – would be interpreted as failure, yet perhaps it is in the nature of forging such wrenching political change that it proceeds (or not) along trajectories very different to that of education or health.
The substantive and policy significance of change – or lack of change – in even the cleanest indicator generated by the most ‘rigorous’ methodology cannot be discerned in the absence of a dialogue with theory and experience. Moreover, responding effectively to the hardest challenges in development (such as those in fragile states) usually requires extensive local innovation and adaptation; when indicators of successful interventions elsewhere are, in and of themselves, invoked to provide warrant for importing such interventions into novel contexts, they can restrict and weaken, rather than expand and protect, the space wherein locally legitimate solutions can be found and fitted.
- As our work on state capability has repeatedly stressed, indicators become part of the problem when they are used to chart apparent progress on a development objective when in reality none may have been achieved at all (e.g., educational attainment as measured by school enrollment versus what students have actually learned). My colleague Matt Andrews shows in his book The Limits of Institutional Reform that the mismatch between what systems “look like” and what they “do” – a phenomenon known as isomorphic mimicry – is pervasive in development, enabling millions of dollars in assistance to be faithfully spent and accounted for by donors each year but often with little to show for it by way of improved performance. For example, Global Integrity (a Washington-based NGO) gives Uganda a score of 99 out of 100 for the quality of its anti-corruption laws as written, which sounds great. Yet it scores only 48 in terms of its demonstrated capacity to actually stem corruption, which is obviously not so great. In these circumstances, our indicators, if taken at face value, can exacerbate the gap between form and function if they naïvely measure only the former but mistake it for the latter.
- Finally, as important as they are for managing complex processes, indicators tend to be the exclusive domain of the powerful. For many poor and marginalized groups, however, the language of indicators and the formal calculations to which they give rise (e.g., cost-benefit analysis, internal rates of return) are alien to how they encounter, experience, assess, manage and interpret the world; as Victorian novelist George Eliot noted long ago, “[a]ppeals founded on generalizations and statistics require a sympathy ready-made, a moral sentiment already in activity.” Shared sympathies and sentiments are too often assumed rather than demonstrated. This is not an argument against indicators per se, but rather a plea to recognize that they can be – to paraphrase anthropologist James Scott – a weapon against the weak when they render complex local realities ‘legible’ to elites and elite interests at the expense of minority groups whose vernacular knowledge claims – e.g., about the ownership, status and uses of natural resources – are often much more fluid and oral. Indeed, the very process of measuring social categories such as caste, as Nicholas Dirks has shown in his work on the role of the census in colonial India (Castes of Mind), can solidify and consolidate social categories that were once quite permeable and loose, with serious long-term political consequences (e.g., making ‘caste’ salient as a political category for mobilization during elections and other contentious moments). One might call this social science’s version of the Heisenberg Uncertainty Principle: for certain issues, the very act of measuring messes with (perhaps fundamentally alters) the underlying reality administrators are trying to capture. If we should not abandon indicators, we can at least make an effort to democratize them by placing research tools in the hands of those most affected by social change, or those being denied services to which they are fully entitled.
SEWA, an Indian NGO, has been at the forefront of such ventures, helping slum residents demand better services from the state by training them in how to keep good indicators of the poor services they receive – how many hours a day they are denied electricity, how much money they have to pay in bribes to get clean water, how many days the teachers of their children are absent from school, etc. Not having records of its own on these matters, the state can find itself, unusually, at a disadvantage when challenged by data-wielding slum residents. Similarly, the World Bank’s Justice for the Poor program seeks to document how justice is experienced by the users (not just the ‘providers’) of the prevailing justice system: here too we find that the indicators used to define problems and assess solutions often vary considerably between these two parties. In such situations, greater alignment between them is best forged through an equitable political contest, a “good struggle”, one that imbues the outcome with heightened legitimacy and durability.
In short, the search for more and better indicators is a noble but perilous one. For complex and contentious issues, the initial task is commencing an inclusive conversation rather than ‘getting to yes’ among technocrats as soon as possible. One way to begin this conversation might be to take two of the issues outlined above – the form/function gap, and the user/provider gap – as starting points. This leads to questions such as:
- Do our current indicators assess what organizations look like, or what they do?
- Do they reflect the perspective of those overseeing a particular service, or those seeking to receive it?
- Similarly, what would a change in a given indicator signify to each group? When might the very attainment of one group’s objective come at the expense of another’s?
- Over what time period is it reasonable to expect discernible progress?
At the end of the day, indicators are a means to an end, a part of a solution to a broader set of problems, most notably those concerned with improved service delivery. Putting indicators to work requires attention not just to their internal technical coherence, but also to how well they are maintained and interpreted, and how useful and usable they are to key stakeholders.
If you are interested in reading more on this topic, see Getting Real about Governance and Governance Indicators.
Most problems in the public sector are wicked hard and need to be deconstructed before they can be solved. In this video, Matt Andrews builds upon the maternal mortality example and the Ishikawa diagram to illustrate how you can sequence a reform in a contextually sensitive way, involving the stakeholders to create a strategy that includes quick wins and longer-term solutions, and to identify areas that will require political feasibility and practical implementation capacity. You can watch the video below or on YouTube.
Most problems in the public sector are wicked hard. Tackling them can feel like getting stuck in quicksand. In this video, Matt Andrews uses an Ishikawa or fishbone diagram to illustrate how meta problems can be broken down into manageable problems that you can mobilize support for and ultimately solve. You can watch the video below or on YouTube.
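The fishbone breakdown described above can be pictured as a tree of causes: the meta problem at the head, contributing causes as the bones, and the leaf-level causes as the manageable sub-problems a team can actually mobilize around. Here is a minimal sketch in Python, assuming a simple nested-dictionary representation; the maternal mortality head problem echoes the example mentioned above, but the specific sub-causes are hypothetical illustrations, not taken from the video:

```python
# An Ishikawa-style breakdown modeled as a tree: each node is a problem
# with a list of contributing causes, themselves broken down the same way.
# The sub-causes below are hypothetical illustrations.
meta_problem = {
    "problem": "High maternal mortality",
    "causes": [
        {"problem": "Mothers do not reach clinics in time",
         "causes": [
             {"problem": "Poor rural roads", "causes": []},
             {"problem": "No emergency transport", "causes": []},
         ]},
        {"problem": "Clinics lack trained staff",
         "causes": [
             {"problem": "Midwife training is underfunded", "causes": []},
         ]},
    ],
}

def leaf_causes(node):
    """Walk the tree and collect the causes with no further breakdown --
    the manageable sub-problems reformers could start working on."""
    if not node["causes"]:
        return [node["problem"]]
    found = []
    for child in node["causes"]:
        found.extend(leaf_causes(child))
    return found

for cause in leaf_causes(meta_problem):
    print(cause)
```

The point of walking to the leaves is the one the video makes: nobody can act on "high maternal mortality" directly, but a ministry team can act on a poor road or an underfunded training program.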
Solving problems that matter ensures that you are doing something contextually relevant. In this video, Matt Andrews uses an example of civil service reform in Uganda to illustrate how constructing local problems is the entry point to begin the search for solutions that ultimately drive change. You can watch the video below or on YouTube.
If you are interested in learning more, watch selling solutions vs. solving problems.
Mimicry is an effective strategy for governments to get short-term support from external development organizations. However, it is an ineffective strategy for building long-term capability. In this video, Matt Andrews uses the lack of fiscal rules in Argentina as an example to illustrate that mimicry does not lead to change. You can watch the video below or on YouTube.
If you are interested in learning more, watch Form does not equal function.