BSC video 35: Functional indicators as a way to build capacity

Data is an important part of state functionality. If states want to be capable, they need to know where people are, how many there are, and so on, in order to deliver basic services. They need to measure functionality and ends rather than forms.

In this video, Matt Andrews uses child registration to illustrate what a functional indicator is and how such indicators can help build a state's capacity to deliver services to its people. You can watch this video below or on YouTube.

If you are interested in learning more, read Getting Real about Governance and Governance Indicators and Measuring Success: Means or Ends?

BSC video 34: Measuring success – means or ends?

Do appearances matter as much as action does? We often forget that governments exist to do and not just to be. We thus focus on the means of being rather than the product of doing. This bias leads to governance indicators and reforms that emphasize perfection of means, often failing to connect to the ends or even to clarify which ends matter.
 
To address the fact that different ends might matter in different places at different times, and that different ends might warrant different means, Matt Andrews offers an ends-means approach, which begins by asking what governments do rather than how they do it. This approach to the multi-dimensional nature of governance could be very useful in current discussions about including a governance indicator in the post-2015 development goals. You can watch this video below or on YouTube.

Putting indicators to work in local governance reform

written by Michael Woolcock

“What gets measured is what gets done.” It’s perhaps the most over-cited cliché in management circles, but on a good day an array of thoughtfully crafted indicators can indeed usefully guide decision-making (whether to raise or lower interest rates), help track progress towards agreed-upon objectives (to improve child nutrition), and serve to hold key stakeholders accountable (via election outcomes, customer satisfaction surveys).

Sometimes successfully conducting everyday tasks requires multiple indicators: our cars, for example, provide us with a literal “dashboard” of real-time measures that help us navigate the safe passage of a precarious metal box from one place to another. Under these circumstances – where the problem is clear and the measures are well understood – indicators are an essential part of a solution. On a bad day, however, indicators can be part of the problem, for at least four reasons.

  1. Indicators are only as good as their underlying quality, and yet we can too readily impute to them a stature they don’t deserve, succumbing to their allure of sophistication and a presumption of robust provenance, with potentially disastrous consequences. Many factors can compromise quality, but chief among them are low administrative capability and lack of agreement on what underlying concepts mean. As Morten Jerven’s recent book Poor Numbers documents, most African countries’ economic growth data are in a parlous state, even though more than 60 years have passed since independence and the passage of a UN agreement providing a global community of practice – civil servants in finance and planning ministries – with detailed guidance on how to measure and maintain such data (the System of National Accounts). Generating and maintaining good indicators is itself a costly and challenging organizational capability, but unfortunately we rarely have companion indicators alerting us to the quality of the data on which we are conducting research and making policy. Even a seemingly basic demographic variable, age, is not straightforward to measure, as Akhil Gupta demonstrates in his book Red Tape. In some rural areas of India, Gupta notes, many people simply don’t equate the concept of ‘age’ with an indicator called ‘years since birth’: they have no formal administrative document recording their date and place of birth, they don’t celebrate birthdays, and when asked their ‘age’ they respond with the particular stage of life in which their community deems them to be. Thus, beyond an organization’s capacity to collect and collate data (which is often low), there has to be agreement between respondents, administrators and users on what it is we’re actually measuring; in many countries, neither aspect can be taken for granted even on ‘simple’ concepts (like age), let alone ‘complex’ ones (like justice, or empowerment).

  2. If accepted uncritically, indicators can become the proverbial tail wagging the dog, permitting only those courses of action that can be neatly measured and verified. So we pave roads, immunize babies and construct schools in faithful fulfillment of a “results agenda”, but become reluctant to engage with messy, time-consuming and unpredictable tasks like civil service reform or community participation, especially in uncertain places such as fragile states. In an age of declining budgets and heightened public skepticism about the effectiveness of development assistance, some powerful agencies have begun to insist that continued funding be contingent on “demonstrated success” and that priority be given to “proven interventions”. In one sense, of course, this seems eminently sensible; nobody wants to waste time and money, and making hard decisions about the allocation of finite resources to address inherently contentious issues on the basis of “the evidence” sounds like something any field calling itself a profession would routinely do (or at least aspire to). Even the highest-quality data, however, in and of itself, tells us very little; the implications of evidence are never self-evident. Changing deeply entrenched attitudes to race relations and gender equality, for example, can proceed along a decidedly non-linear path, with campaigners toiling in apparent obscurity and futility for decades before rapidly succeeding. Consider Nelson Mandela, who spent 27 years in jail before leading a triumphant end to apartheid in South Africa. Taken at face value, an indicator of the success of his “long walk to freedom” campaign at year 26 – ‘still incarcerated’ – would be interpreted as failure, yet perhaps it is in the nature of forging such wrenching political change that it proceeds (or not) along trajectories very different from those of education or health. The substantive and policy significance of change – or lack of change – in even the cleanest indicator generated by the most ‘rigorous’ methodology cannot be discerned in the absence of a dialogue with theory and experience. Moreover, responding effectively to the hardest challenges in development (such as those in fragile states) usually requires extensive local innovation and adaptation; when indicators of successful interventions elsewhere are, in and of themselves, invoked to provide warrant for importing such interventions into novel contexts, they can restrict and weaken, rather than expand and protect, the space wherein locally legitimate solutions can be found and fitted.

  3. As our work on state capability has repeatedly stressed, indicators become part of the problem when they are used to chart apparent progress on a development objective when in reality none may have been achieved at all (e.g., educational attainment as measured by school enrollment versus what students have actually learned). My colleague Matt Andrews shows in his book The Limits of Institutional Reform that the mismatch between what systems “look like” and what they “do” – a phenomenon known as isomorphic mimicry – is pervasive in development, enabling millions of dollars in assistance to be faithfully spent and accounted for by donors each year but often with little to show for it by way of improved performance. For example, Global Integrity (a Washington-based NGO) gives Uganda a score of 99 out of 100 for the quality of its anti-corruption laws as written, which sounds great. Yet it scores only 48 in terms of its demonstrated capacity to actually stem corruption, which is obviously not so great. In these circumstances, our indicators, if taken at face value, can exacerbate the gap between form and function if they naïvely measure only the former but mistake it for the latter. (A toy version of this form/function gap calculation appears in the first sketch following this list.)

  4. Finally, as important as they are for managing complex processes, indicators tend to be the exclusive domain of the powerful. For many poor and marginalized groups, however, the language of indicators and the formal calculations to which they give rise (e.g., cost-benefit analysis, internal rates of return) are alien to how they encounter, experience, assess, manage and interpret the world; as Victorian novelist George Eliot noted long ago, “[a]ppeals founded on generalizations and statistics require a sympathy ready-made, a moral sentiment already in activity.” Shared sympathies and sentiments are too often assumed rather than demonstrated. This is not an argument against indicators per se, but rather a plea to recognize that they can be – to paraphrase anthropologist James Scott – a weapon against the weak when they render complex local realities ‘legible’ to elites and elite interests at the expense of minority groups whose vernacular knowledge claims – e.g., about the ownership, status and uses of natural resources – are often much more fluid and oral. Indeed, the very process of measuring social categories such as caste, as Nicholas Dirks has shown in his work on the role of the census in colonial India (Castes of Mind), can solidify and consolidate social categories that were once quite permeable and loose, with serious long-term political consequences (e.g., making ‘caste’ salient as a political category for mobilization during elections and other contentious moments). One might call this social science’s version of the Heisenberg Uncertainty Principle: for certain issues, the very act of measuring messes with (perhaps fundamentally alters) the underlying reality administrators are trying to capture. If we should not abandon indicators, we can at least make an effort to democratize them by placing research tools in the hands of those most affected by social change, or most denied services to which they are fully entitled. SEWA, an Indian NGO, has been at the forefront of such ventures, helping slum residents demand better services from the state by training them in how to keep good indicators of the poor services they receive – how many hours a day they are denied electricity, how much money they have to pay in bribes to get clean water, how many days the teachers of their children are absent from school, and so on (the second sketch following this list gives a flavor of such record-keeping). Lacking records of its own on these matters, the state can find itself, unusually, at a disadvantage when challenged by data-wielding slum residents. Similarly, the World Bank’s Justice for the Poor program seeks to document how justice is experienced by the users (not just the ‘providers’) of the prevailing justice system: here too we find that the indicators used to define problems and assess solutions often vary considerably between these two parties. In such situations, greater alignment between them is best forged through an equitable political contest, a “good struggle”, one that imbues the outcome with heightened legitimacy and durability.
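
To make the form/function gap in point 3 concrete, here is a minimal sketch in Python. The Uganda numbers (99 and 48) come from the text; the other country and its scores are hypothetical placeholders, and this is an illustration of the arithmetic, not Global Integrity’s actual methodology.

```python
# Form/function gap: de jure score (laws as written) minus de facto score
# (demonstrated capacity in practice). Uganda's figures are from the text;
# "Examplestan" is a hypothetical placeholder for comparison.
scores = {
    # country: (laws_as_written, capacity_in_practice), both on a 0-100 scale
    "Uganda": (99, 48),
    "Examplestan": (70, 64),
}

for country, (form, function) in scores.items():
    gap = form - function
    print(f"{country}: form={form}, function={function}, gap={gap}")
```

A large gap flags exactly the isomorphic mimicry problem: impressive form, weak function.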

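A second, equally minimal sketch, with invented numbers, of the kind of community record-keeping the SEWA example in point 4 describes: residents log daily service failures and then summarize them into simple indicators they can put to the state.

```python
from datetime import date

# Each entry: (day, hours without electricity, bribe paid for clean water
# in rupees, was the teacher absent?). All values here are hypothetical.
log = [
    (date(2014, 3, 1), 6, 20, True),
    (date(2014, 3, 2), 4, 0, False),
    (date(2014, 3, 3), 8, 15, True),
]

days = len(log)
avg_outage = sum(hours for _, hours, _, _ in log) / days
total_bribes = sum(bribe for _, _, bribe, _ in log)
absences = sum(1 for _, _, _, absent in log if absent)

print(f"Average hours/day without electricity: {avg_outage:.1f}")
print(f"Bribes paid for clean water: Rs {total_bribes}")
print(f"Teacher absent on {absences} of {days} days")
```
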
In short, the search for more and better indicators is a noble but perilous one. For complex and contentious issues, the initial task is commencing an inclusive conversation rather than ‘getting to yes’ among technocrats as soon as possible. One way to begin this conversation might be to take two of the issues outlined above – the form/function gap, and the user/provider gap – as starting points. This leads to questions such as:

  • Do our current indicators assess what organizations look like, or what they do?
  • Do they reflect the perspective of those overseeing a particular service, or those seeking to receive it?
  • Similarly, what would a change in a given indicator signify to each group? When might the very attainment of one group’s objective come at the expense of another’s?
  • Over what time period is it reasonable to expect discernible progress?

At the end of the day, indicators are a means to an end, part of a solution to a broader set of problems, most notably those concerned with improved service delivery. Putting indicators to work requires attention not just to their internal technical coherence, but also to how well they are maintained and interpreted, and to how useful and usable they are to key stakeholders.

If you are interested in reading more on this topic, see Getting Real about Governance and Governance Indicators.

Getting Real about Governance and Governance Indicators

written by Matt Andrews

Many have asked me how I personally think about and assess governance when I visit countries. I have a new working paper that presents my thoughts on this; they take the form of what I call an ends-means approach to looking at governance.

I focus on ends as a starting point in looking at governance because these reflect the revealed functionality or capability of states—what they can do. I think that revealed capabilities and ends are ignored in much of the current governance discussion because of a bias towards questions about form and preferred means of governing. The bias manifests in reform programs that introduce commonly agreed-upon and apparently ‘good’ means of managing public finances, structuring regulatory frameworks, procuring goods, organizing service delivery, managing civil servants, and much more. The bias is even reflected in views that governments should be transparent and non-corrupt and have merit-based hiring procedures. I am sure we all want governments that look like this, but do appearances matter as much as action? And do these appearances always promote the action needed from governments, especially in developing countries?

In promoting a form-based governance agenda (of what we want states to look like), I think we (as a community of governance observers) often forget that governments exist to do and not just to be. We thus focus on the means of being rather than the product of doing. This bias leads to governance indicators and reforms that emphasize perfection of means, often failing to connect to the ends, to clarify which ends matter, or to allow for the idea that different ends might matter in different places at different times, and that different ends might justify and even warrant different means in different countries or even sectors within countries. This is particularly problematic in developing countries, where governments are only five or six decades old and are still defining and creating their ends and their means. Approaches to governance should help in this process of defining and re-defining, but this help should start by emphasizing ends—what governments need to do to promote development for citizens—and only then think about means—how governments could do such things.

[Figure: The tension between ends and means in the governance discourse]

The first section of this new paper makes the argument for focusing on ends and then means. The second and third provide details on the ends and means I typically look at, identifying seventy of these to get as full a picture of governance as possible. A fourth section then discusses why I do not use stand-alone, hold-all indicators of governance to present this picture. The next section introduces my own way of actually looking at and analyzing governance data: using comparative, benchmarked dashboards and narratives instead of stand-alone indicators. I build a dashboard example to show how it allows a view on the multi-dimensional nature of governance and fosters a conversation about strengths, weaknesses and opportunities in specific countries.

Here is what it looks like. For details you will have to read the paper, but the idea is that one picture—made up of 70 pieces of data—illustrates how a specific government compares with others on important ends and means. The variation in colors reflects the variation in governance characteristics and performance, which is commonly evident in countries. Hold-all governance indicators commonly fail to show this variation, averaging it out instead of revealing it as key to getting a full picture of the governance situation in any state.
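
To see why averaging obscures, consider this minimal sketch. The four dimensions and all scores below are invented stand-ins for the paper’s seventy pieces of data, not figures taken from the dashboard itself.

```python
# Two hypothetical countries with identical composite ("hold-all") scores
# but very different governance profiles.
country_a = {"budget execution": 80, "procurement": 20,
             "service delivery": 75, "anti-corruption": 25}
country_b = {"budget execution": 50, "procurement": 50,
             "service delivery": 50, "anti-corruption": 50}

for name, scores in [("Country A", country_a), ("Country B", country_b)]:
    composite = sum(scores.values()) / len(scores)  # the hold-all number
    print(f"{name}: composite = {composite:.0f}")
    # The dashboard view keeps the dimension-level variation that the
    # composite averages away:
    for dimension, score in scores.items():
        print(f"  {dimension:>16}: {score}")
```

Both composites come out at 50, yet Country A is strong in some areas and weak in others while Country B is uniformly middling; only the dashboard view reveals the difference.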

[Figures: Example of a dashboard – Slide 1 and Slide 2]

I don’t intend for this to be an academic treatise, but offer it as my personal viewpoint on an increasingly important topic. Think of it, perhaps, as an exercise in ‘thinking out loud’. As such, the paper is a cover-all piece on my views about this subject to date, as one will see in the references to my work, including articles, blog posts, formal figures and tables, and less formal cartoons. Hopefully the totality of this work provokes some thinking beyond my own. In particular, I aim to contribute to the discussion about including governance indicators in the post-2015 development indicator framework. The final section of this paper offers specific ideas in this regard, intended to build on already-important contributions.