We are proud to announce that Looking Like a State: Techniques of Persistent Failure in State Capability for Implementation, co-authored by Matt Andrews, Lant Pritchett and Michael Woolcock, won the Faculty Article Award from the Sociology of Development Section of the American Sociological Association (ASA). The award ceremony was held in San Francisco on August 16, 2014. This seminal paper is the foundation of the Building State Capability (BSC) program and the precursor to PDIA. For more information, please read Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA) or watch our Vimeo Channel.
written by Matt Andrews
What do you do if your government has been pursuing reforms for years, with apparent success, but your economy is still not growing? What do you do if the constraint seems to be the limited capacity of government organizations? What do you do if this capacity remains stubbornly low even after years of public sector reforms sponsored by outside partners and based on promising best practice ideas of fly-in-fly-out specialists?
A recent case study of work supported by the Africa Governance Initiative suggests an approach to just such a situation, faced by Rwanda in 2010. The approach is simple.
- Force your own people to look at festering problems in an up-close-and-personal manner, focusing on ‘why’ the problems persist instead of ‘what’ easy solutions outsiders might propose for the problems.
- Swarm the problems with new ideas emerging from those inside your government and from trusted outsiders committed to spending time adapting and translating their ideas to your context (instead of one-size-fits-all solutions coming from short-stay outsiders).
- Experiment with the ideas actively, in a step-by-step manner, trying them out and seeing what works, why, and with what kinds of nips and tucks.
- Learn. Yes, simply learn. Let people reflect on what they have done, absorb what made the difference, and decide what they will carry from the experiences to other tasks.
This is what I understand the Strategic Capacity Building Initiative (SCBI) was and is about. It is an approach to doing development that seems to have yielded some dynamic results in a relatively short period. The results are substantive, procedural, organizational, and personal. Farmer incomes are up after some of the experiments in the ‘pilot sites initiative’, for instance. The Energy Investment Unit has emerged as the focal point of a new process to increase energy generation and drive energy costs down. Perhaps more important, to me at least, is the fact that talented civil servants have done things they probably never dreamed they could. In my own language, they have become more adaptive—realizing that you can make a difference if you purposefully address real problems you face in an active, experiential and iterative manner. These young policy entrepreneurs and implementers will be in Rwanda for years to come, and hopefully long after SCBI ceases to exist as a program. They are the real success and legacy of the program.
I find this story line appealing. It tells of an approach to development that reflects the principles of problem driven iterative adaptation (PDIA). This SCBI approach is full of common sense but is also oddly revolutionary because it is such a contrast to the way development is commonly done (and, it seems, was done in some of these areas in Rwanda previously). The case shows that a locally problem driven, adaptive process works in complex developing country reform contexts and for this reason should be of interest to anyone working in development.
Not all is rosy and sweet in the story, however, which is true in all narratives on change and real functional reform—including all development narratives I think reflect the general principles of PDIA. There are a number of reflections on how hard it is to develop a real problem driven approach and allow flexibility in finding and fitting new solutions. The case notes that high level leaders demanded results particularly rapidly in some instances, for example, and this led to hurried action, mistakes and tension. The case suggests that this can be overcome with some common-sense ideas, like getting political authorizers to prioritize and ask policy people how long something will take and then agree on realistic time frames. In this respect, it also comments on the importance of focusing attention on a few key issues for deep dive attention rather than a slew of issues that ultimately only get a shallow look. As one Permanent Secretary notes, “Trying to do everything at the same time doesn’t work.” (Seems basic, doesn’t it, but this kind of mundane observation is one I see overlooked across the development agenda).
There are also hints at the importance of getting collaborative relationships right—with high-level authorizers and development partners engaging patiently and yet also expectantly with those in the policy and reform trenches. It seems there are real rewards when those at the top give those in the middle and even lower down the organization some structured space to prove their value. (Again, this seems basic, noting that ‘people’ really matter; but it is a vital observation in development, where I think ‘policy’ is often seen as more important than people.) The importance of political coalitions and teams, incorporating outsiders and insiders, is also implicit and explicit and a vital take-away for those designing reforms and interventions. These coalitions and teams allowed natural coordination across boundaries (without having to change rigid organizational rules) and the cross-fertilization that is vital for the emergence of creative new policy ideas.
Beyond these ideas I was perhaps most struck by the lesson that this work is only achieved if people stick to it. It proved challenging and even uncomfortable to force politicians to prioritize, for instance, but this did not lead to the program falling apart. Instead, as the case notes, the SCBI team exerted “push back” on the system and made sure the prioritization was done. It proved hard to get the right people as well, but the SCBI team pivoted around this issue to get at least some of the right people and build on what they had. It proved tough to get disparate distributed agencies to work together and to even understand the importance of linkages, but the challenge was met with more determination. It was hard to take people through the uncomfortable process of problem analysis, where they interrogated existing processes to look for gaps (without jumping to quick but unlikely-to-work solutions). But the gap analyses went on nonetheless.
The literature on organizational change has a really academic term to describe the quality I think helped the SCBI reformers to stick to their guns when things got tough. It is ‘grit’ and it is vital for effective reform and change. It is the intangible thing that I think the SCBI story is really all about. It helped to keep the reforms going in the starting months that seemed slow and difficult, and it was what kept the growing SCBI team motivated when the actions they were taking were hard and constantly questioned as time consuming, demanding, politically uncomfortable, and (maybe even) downright impossible. Grit is what helps reformers turn setbacks into lessons, and what keeps reformers looking for the ‘right’ people needed to make something happen. It keeps people engaged in capacity building experiences that take time, personal sacrifice, and political capital. It is the magic ingredient behind real capacity building and change and is the one thing I hope other readers see in this case study, even with the other great practical ideas and embedded advice. The SCBI design and strategy was great, but it was the gritty commitment to make it work that really seems to have made the difference.
The lesson: Cultivate grit, don’t overlook it, as it is the key to capacity building success.
written by Michael Woolcock
“What gets measured is what gets done.” It’s perhaps the most over-cited cliché in management circles, but on a good day an array of thoughtfully crafted indicators can indeed usefully guide decision-making (whether to raise/lower interest rates), help track progress towards agreed-upon objectives (to improve child nutrition), and serve to hold key stakeholders accountable (via election outcomes, customer satisfaction surveys).
Sometimes successfully conducting everyday tasks requires multiple indicators: our cars, for example, provide us with a literal “dashboard” of real-time measures that help us navigate the safe passage of a precarious metal box from one place to another. Under these circumstances – where the problem is clear and the measures are well understood – indicators are an essential part of a solution. On a bad day, however, indicators can be part of the problem, for at least four reasons.
- Indicators are only as good as their underlying quality, and yet we can too readily impute to them a stature they don’t deserve, succumbing to their allure of sophistication and a presumption of robust provenance, with potentially disastrous consequences. Many factors can compromise quality, but chief among them are low administrative capability and lack of agreement on what underlying concepts mean. As Morten Jerven’s recent book Poor Numbers documents, most African countries’ economic growth data are in a parlous state, even though more than 60 years have passed since independence and the passage of a UN agreement providing a global community of practice – civil servants in finance and planning ministries – with detailed guidance on how to measure and maintain such data (the System of National Accounts). Generating and maintaining good indicators is itself a costly and challenging organizational capability, but unfortunately we rarely have companion indicators alerting us to the quality of the data on which we are conducting research and discerning policy. Even a seemingly basic demographic variable, age, is not straightforward to measure, as Akhil Gupta demonstrates in his book Red Tape. In some rural areas of India, Gupta notes, many people simply don’t equate the concept of ‘age’ with an indicator called ‘years since birth’: they have no formal administrative document recording their date and place of birth, they don’t celebrate birthdays, and when asked their ‘age’ respond with the particular stage of life in which their community deems them to be. Thus beyond an organization’s capacity to collect and collate data (which is often low) there has to be agreement between respondents, administrators and users on what it is we’re actually measuring; in many countries, neither aspect can be taken for granted even on ‘simple’ concepts (like age) let alone ‘complex’ ones (like justice, or empowerment).
If accepted uncritically, indicators can become the proverbial tail wagging the dog, permitting only those courses of action that can be neatly measured and verified. So we pave roads, immunize babies and construct schools in faithful fulfillment of a “results agenda”, but become reluctant to engage with messy, time-consuming and unpredictable tasks like civil service reform or community participation, especially in uncertain places such as fragile states. In an age of declining budgets and heightened public skepticism about the effectiveness of development assistance, some powerful agencies have begun to insist that continued funding be contingent on “demonstrated success” and that priority be given to “proven interventions”. In one sense, of course, this seems eminently sensible; nobody wants to waste time and money, and making hard decisions about the allocation of finite resources to address inherently contentious issues on the basis of “the evidence” sounds like something any field calling itself a profession would routinely do (or at least aspire to). Even the highest quality data, however, in and of itself, tells us very little; the implications of evidence are never self-evident. Changing deeply entrenched attitudes to race relations and gender equality, for example, can proceed along a decidedly non-linear path, with campaigners toiling in apparent obscurity and futility for decades before rapidly succeeding. Consider Nelson Mandela, who spent 27 years in jail before leading a triumphant end to apartheid in South Africa. Taken at face value, an indicator of the success of his “long walk to freedom” campaign at year 26 – ‘still incarcerated’ – would be interpreted as failure, yet perhaps it is in the nature of forging such wrenching political change that it proceeds (or not) along trajectories very different to that of education or health.
The substantive and policy significance of change – or lack of change – in even the cleanest indicator generated by the most ‘rigorous’ methodology cannot be discerned in the absence of a dialogue with theory and experience. Moreover, responding effectively to the hardest challenges in development (such as those in fragile states) usually requires extensive local innovation and adaptation; when indicators of successful interventions elsewhere are, in and of themselves, invoked to provide warrant for importing such interventions into novel contexts, they can restrict and weaken, rather than expand and protect, the space wherein locally legitimate solutions can be found and fitted.
As our work on state capability has repeatedly stressed, indicators become part of the problem when they are used to chart apparent progress on a development objective when in reality none may have been achieved at all (e.g., educational attainment as measured by school enrollment versus what students have actually learned). My colleague Matt Andrews shows in his book The Limits of Institutional Reform that the mismatch between what systems “look like” and what they “do” – a phenomenon known as isomorphic mimicry – is pervasive in development, enabling millions of dollars in assistance to be faithfully spent and accounted for by donors each year but often with little to show for it by way of improved performance. For example, Global Integrity (a Washington-based NGO) gives Uganda a score of 99 out of 100 for the quality of its anti-corruption laws as written, which sounds great. Yet it scores only 48 in terms of its demonstrated capacity to actually stem corruption, which is obviously not so great. In these circumstances, our indicators, if taken at face value, can exacerbate the gap between form and function if they naïvely measure only the former but mistake it for the latter.
Finally, as important as they are for managing complex processes, indicators tend to be the exclusive domain of the powerful. For many poor and marginalized groups, however, the language of indicators and the formal calculations to which they give rise (e.g., cost-benefit analysis, internal rates of return) are alien to how they encounter, experience, assess, manage and interpret the world; as Victorian novelist George Eliot noted long ago, “[a]ppeals founded on generalizations and statistics require a sympathy ready-made, a moral sentiment already in activity.” Shared sympathies and sentiments are too often assumed rather than demonstrated. This is not an argument against indicators per se, but rather a plea to recognize that they can be – to paraphrase anthropologist James Scott – a weapon against the weak when they render complex local realities ‘legible’ to elites and elite interests at the expense of minority groups whose vernacular knowledge claims – e.g., about the ownership, status and uses of natural resources – are often much more fluid and oral. Indeed, the very process of measuring social categories such as caste, as Nicholas Dirks has shown in his work on the role of the census in colonial India (Castes of Mind), can solidify and consolidate social categories that were once quite permeable and loose, with serious long-term political consequences (e.g., making ‘caste’ salient as a political category for mobilization during elections and other contentious moments). One might call this social science’s version of the Heisenberg Uncertainty Principle: for certain issues, the very act of measuring messes with (perhaps fundamentally alters) the underlying reality administrators are trying to capture. If we should not abandon indicators, we can at least make an effort to democratize them by placing research tools in the hands of those most affected by social change, or denied services to which they are fully entitled.
SEWA, an Indian NGO, has been at the forefront of such ventures, helping slum residents demand better services from the state by training them in how to keep good indicators of the poor services they receive – how many hours a day they are denied electricity, how much money they have to pay in bribes to get clean water, how many days the teachers of their children are absent from school, etc. Not having records of their own on these matters, the state can find itself, unusually, at a disadvantage when challenged by data-wielding slum residents. Similarly, the World Bank’s Justice for the Poor program seeks to document how justice is experienced by the users (not just the ‘providers’) of the prevailing justice system: here too we find that the indicators used to define problems and assess solutions often vary considerably between these two parties. In such situations, greater alignment between them is best forged through an equitable political contest, a “good struggle”, one that imbues the outcome with heightened legitimacy and durability.
In short, the search for more and better indicators is a noble but perilous one. For complex and contentious issues, the initial task is commencing an inclusive conversation rather than ‘getting to yes’ among technocrats as soon as possible. One way to begin this conversation might be to take two of the issues outlined above – the form/function gap, and the user/provider gap – as starting points. This leads to questions such as:
- Do our current indicators assess what organizations look like, or what they do?
- Do they reflect the perspective of those overseeing a particular service, or those seeking to receive it?
- Similarly, what would a change in a given indicator signify to each group? When might the very attainment of one group’s objective come at the expense of another’s?
- Over what time period is it reasonable to expect discernible progress?
At the end of the day, indicators are a means to an end, a part of a solution to a broader set of problems, most notably those concerned with improved service delivery. Putting indicators to work requires attention not just to their internal technical coherence, but also to how well they are maintained and interpreted, and how useful and usable they are to key stakeholders.
If you are interested in reading more on this topic, see Getting Real about Governance and Governance Indicators.
Most problems in the public sector are wicked hard and need to be deconstructed before they can be solved. In this video, Matt Andrews builds upon the maternal mortality example and the Ishikawa diagram to illustrate how you can sequence a reform in a contextually sensitive way, involving the stakeholders to create a strategy that has quick wins and longer-term solutions, and identifying areas that will require political feasibility and practical implementation capacity. You can watch the video below or on YouTube.
Most problems in the public sector are wicked hard. It is like getting stuck in quicksand. In this video, Matt Andrews uses an Ishikawa, or fishbone, diagram to illustrate how meta-problems can be broken down into manageable problems that you can mobilize support for and ultimately solve. You can watch the video below or on YouTube.
Solving problems that matter ensures that you are doing something contextually relevant. In this video, Matt Andrews uses an example of civil service reform in Uganda to illustrate how constructing local problems is the entry point to begin the search for solutions that ultimately drive change. You can watch the video below or on YouTube.
If you are interested in learning more, watch selling solutions vs. solving problems.
Mimicry is an effective strategy for governments to get short-term support from external development organizations. However, it is an ineffective strategy for building long-term capability. In this video, Matt Andrews uses the lack of fiscal rules in Argentina as an example to illustrate that mimicry does not lead to change. You can watch the video below or on YouTube.
If you are interested in learning more, watch Form does not equal function.
written by Matt Andrews
As I reflect on how change happens in development, 5 themes come to mind. I have written about the importance of moments, muddling, the mundane and multiple men and women. In keeping with the ‘m’s’, today I will emphasize the importance of mobilizers.
These are the people who bring multiple men and women together, encourage them to work beyond the mundane, muddle purposively, and take advantage of or create moments for change. They are people who convene small groups of key agents needed to play specific roles (often in teams or in small authorizing groups), or who connect distributed agents to each other (so the agents don’t even need to interact directly), or who motivate people across networks. These mobilizers are the key to effective leadership, if you ask me, because they bring all the different functional roles together.

Here is an example of conveners and connectors in action, in a simplified version of a recent reform story I was engaged in. It was work in the judicial sector of a country. A donor had been supporting initiatives to introduce a statistical management system to the sector, so that resource allocation decisions could be more evidence based (everyone would know where case loads were high, where judges and prosecutors were present, where buildings were in place, etc.). After five years and millions of dollars no system existed. This was partly because different groups across the sector did not engage in the reform together. The donor had connections to the ministry of justice and although there were overlaps with the Supreme Court and prosecution, there were no direct connections. Thus the reform was not supported by the other agencies. See my diagram … Sorry it is a mess.
I started working on the issue, and had a local person work with some of the folks in the ministry of justice and the courts and the prosecution, as a convener (see M1 in the figure below). This person worked with me to hold meetings of people from these different agencies, and in this way created first degree relationships across the agencies that helped foster a common understanding of the problems warranting change and of the potential ways the agencies could work together to foster change. The role of one mobilizer created direct links between key agents and indirect links between all agents in the system. This opened up access to new ideas, functions, contributions, etc.
A second type of mobilization was also important, however, and involved another person working as a connector between distributed agents in the ministry of justice, courts and prosecution office (because it is not only important to convene the heads). This person (M2 in the figure below) created relationships between multiple people in the system and allowed them to connect with each other THROUGH him. This connector role ensured that all people in the system had a first or second degree link to other people, which made the network tighter than it was before, opening a path to agreement on reform and enhancing access to talents and ideas needed for reform to work.
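The tightening effect a connector has on a network can be sketched as a toy graph computation. This is purely illustrative (the agent names and agency groupings are hypothetical, not drawn from the actual reform): before the connector exists, agents in different agencies have no path to one another; once a single connector is linked to everyone, every pair of agents sits within two hops.

```python
from collections import deque

def hops(graph, start, goal):
    """Breadth-first search: number of edges on the shortest path, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

# Three agencies whose staff only know colleagues inside their own agency
# (hypothetical agents in a ministry of justice, a court, and a prosecution office).
graph = {
    "justice_a": {"justice_b"}, "justice_b": {"justice_a"},
    "court_a": {"court_b"},     "court_b": {"court_a"},
    "pros_a": {"pros_b"},       "pros_b": {"pros_a"},
}

print(hops(graph, "justice_a", "court_a"))  # None: no path across agency boundaries

# Add the connector (the "M2" role), linked to every existing agent.
graph["M2"] = set(graph)
for agent in list(graph):
    if agent != "M2":
        graph[agent] = graph[agent] | {"M2"}

print(hops(graph, "justice_a", "court_a"))  # 2: every pair now has a first or second degree link
```

One person in the right structural position collapses the distance between previously disconnected groups, which is the sense in which the connector made the network "tighter" than it was before.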
The reform is still going step by step, but the connections are better than they have ever been. These connections are proving vital to reform and development and are only possible because of the role of mobilizers in the change process.
written by Matt Andrews
As I reflect on how change happens in development, 5 themes come to mind. I have written about the importance of moments, muddling and the mundane. Today I will discuss the fourth one: multiple men and women matter. In my experience, development and governance reform is about people, not as targets of change, but as agents of change.
This is not a surprising observation but is an important one nonetheless, especially when one considers how little attention development initiatives commonly give to the men and women who have to risk and adapt and work to make change happen and ensure change is sustained. Development initiatives tend to emphasize ideas and money much more than people, even though it is the latter that actually come up with ideas, shape ideas to contexts, and use resources to foster change.
When people are considered in development initiatives, it is often with a narrow lens on ‘champions’ or ‘heroes’. That’s not the picture I see as relevant in the research and applied work I have been engaged in. This work shows me that development and change require multiple functions or roles: we need someone to identify problems, someone to identify solutions, someone to provide money, someone to authorize change activities, someone to motivate and inspire, someone to connect distributed agents, someone to convene smaller groups, someone to provide key resources other than money, and someone who can give an implementation perspective (of the implications of proposed change).
For a variety of theoretical reasons I don’t think we will often find these functions played by one person, or organization. My empirical research suggests that this is true in practice. What I see in my studies is that successful reform requires multiple people providing leadership in a coordinated and synergistic way, such that all the different functional roles are played in an orchestra of change (people who are familiar with Lee Kuan Yew’s view of leadership will relate to the idea of the orchestra):
- I wrote a paper on leadership in twelve interesting reforms, where I tested whether one person stood out as the major leader. I found that this was not the case at all. Many people were identified as leaders in the cases, all playing different roles in the change process.
- In a review of 30 cases of successful reform (from Princeton University’s Innovations for Successful Societies repository) I found that an average of 19 agents were mentioned as playing the roles noted. They all took risks and stood out for providing an important part of the change puzzle.
The research does find that ‘champions’ exist in many cases, however. But being a champion does not mean being multiple people (or wearing multiple hats, or playing multiple functional roles). Instead, my work showed that where a champion exists, he or she plays three specific functional roles: Authorizing, Convening, and Motivating. The champions do not typically play the other roles.
written by Matt Andrews
As I reflect on how change happens in development, 5 themes come to mind. I wrote about the importance of moments, which are vital to foster change in complex contexts, and muddling, which is important for finding and fitting reform and change content that fosters real development. Today I will discuss the third one: the mundane.
The mundane matters in development. What I mean is simply that everyday, boring, taken for granted events, pressures, relationships, activities and such have a huge influence on prospects for change and development. We think these things are ordinary, banal, and don’t matter. But actually they dominate time and activity, and are the key to ‘getting things done’ and to prospects for change and development.
If mundane processes and pressures do not foster efficient activity, organizations are likely to be inefficient–there will be loads of meetings and people answering emails and writing papers and filling in time sheets and doing due diligence activities but these mundane activities will not foster effective results. Similarly, if the mundane does not support change then change and development will not happen: people will attend meetings but won’t follow-up with new activities because their time is already spoken for by the mundane.
I have seen this more than ever before in some of my reform experiments in 2013. The trouble they ran into had little to do with a lack of ideas or money. Instead, the challenges were mundane: getting people to ‘do’ new things in already full calendars, and to sit in meetings and engage purposively without looking down at the three cell phones on the table in front of them, and more. In all the experiences I have been part of, change only started when these and other mundane influences were managed or even altered.
The problem is two-fold:
- First, development is full of mundanity. Governments and development organizations are the ones Andy Partridge (lead singer of the 80s band XTC) was talking about when he wrote: “We’re horrible mundane, aggressively mundane, individuals. We’re the ninjas of the mundane…”
- Second, the mundane is mundane. What I mean is that most development specialists think it does not matter. “Too boring. Too unimportant. So easy to overcome. Surely not as important as rigorous empirical analysis and fancy new ideas.” But they often find that the mundane crowds out the new activities and empirics–again and again–to limit and undermine development initiatives.
It would be great to see development experts taking the mundane seriously. I think a strategy to identify, manage and even alter the mundane could be more important than most fancy development strategies. And infinitely more valuable than a fancy ‘Science of Delivery’. We need to rethink the mundane, seeing it as the key to getting things done and the key to change; less ordinary and banal and boring and more central to development.