Getting real about development: it is hard

written by Matt Andrews

I’m reminded regularly that development is about change. Done well, it is about change that sticks, and, even more, about countries becoming adaptive (able to change continuously, at the right pace and in the right way).

This requires learning, and building a specific type of DNA in people, organizations, and countries. And this learning is hard, often because learning is perceived as failure, and failure is feared.

The truth is that most key development breakthroughs grow out of the lessons of things gone wrong. But in the moment of going wrong it is hard to see how valuable the failure is; it seems like everything is falling apart, and critics come out of every window and door.

Keeping one’s head in these moments is crucial: it is what allows people to see failure as learning, and to see that learning itself is the key to success.

I wonder how often public policy schools teach students about these moments and how to manage themselves in the face of the turmoil such moments involve. I think this may be one of the most important lessons to learn if you want to work in development and not spend all your time writing safe reports no one uses, consulting from a distance, or doing things without bringing local folks along to learn how to do them themselves.

EEP/Shiree: Using adaptive programming to monitor change in Bangladesh

written by Salimah Samji

How do you effectively monitor an eight-year, £83.5 million (around US$135 million) challenge fund that partners with NGOs to improve the livelihoods of 1 million beneficiaries? A daunting task indeed.

The Economic Empowerment of the Poorest (EEP/Shiree) program is a partnership between the UK Department for International Development (DFID), the Swiss Agency for Development and Cooperation (SDC), and the Government of Bangladesh (GoB), whose objective is to lift 1 million people out of extreme poverty by 2015. Because it is a challenge fund with managing partners, consortium academic partners, and NGO partners, all with many moving pieces, agile decision-making tools were crucial for responding to the needs of its beneficiaries in real time. Traditional M&E methods of baseline, midterm, and endline surveys were deemed insufficient.

The need for real-time measures and iterative decision-making created the space for experimentation and innovation. Armed with authorization from their donors, the EEP/Shiree team set out to explore, experiment, and create Change Monitoring System 2 (CMS 2), one of a total of five CMS tools, which also include in-depth life histories. They crawled the design space to find and fit solutions that would work in their context (pilot, test, adapt, iterate). Here is a summary of the three pilot phases:

  • Phase 1: Optical reader technology: They first created a simple survey for the NGO partners to administer and fill out. The surveys were then digitally scanned. They quickly learned that this was too cumbersome a process: it took 2-3 weeks to receive the surveys, and the time lag was too long. They needed something more efficient.
  • Phase 2: Java-enabled phones: Since mobile penetration is high, they partnered with mPower to develop a ten-minute, monthly census survey on the phone. They equipped up to 20 field officers (the front-line personnel who work at the field level with beneficiary households) with simple mobile phone devices that displayed the survey in Bangla script. It was meant to be a six-month pilot but lasted 1.5 years, by which time they had scaled to 100 devices, with surveys and simple visualizations. Convincing the NGO partners, building the visualizations, and developing an in-house feedback-loop mechanism took much longer than had been anticipated.
  • Phase 3: Android smartphones: The dropping cost of smartphones (Android phones were $60-70) created an attractive option. The smartphone allowed greater flexibility (field staff just update the app on their phone) as well as more functionality and accountability (GPS locations of households, photos, and voice recordings verify that beneficiaries are being met regularly). mPower also built a dashboard that allowed comparisons and served as a litmus test to identify red flags requiring further investigation, ultimately allowing the NGO partners and EEP/Shiree to tailor recovery plans to beneficiaries’ needs and changing contexts (a toy sketch of this kind of red-flag rule appears after this list).

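To make this concrete, here is a minimal, hypothetical Python sketch of the kind of red-flag rule such a dashboard might apply. The record fields, thresholds, and household ids are invented for illustration; they are not taken from CMS 2 or from the mPower dashboard.

```python
# Illustrative sketch only: a toy "red flag" check over monthly household
# records, loosely inspired by the kind of rule a CMS 2-style dashboard
# might apply. All field names and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class MonthlyRecord:
    household_id: str
    month: str          # e.g. "2012-09"
    visited: bool       # was a verified (GPS/photo) visit recorded this month?
    income_bdt: float   # reported monthly income in taka (hypothetical field)
    meals_per_day: int  # reported meals per day (hypothetical field)


def red_flags(records: Iterable[MonthlyRecord],
              min_income_bdt: float = 1500.0,
              min_meals: int = 2) -> List[str]:
    """Return household ids whose monthly record suggests follow-up is needed."""
    flagged = set()
    for r in records:
        if not r.visited:
            flagged.add(r.household_id)   # no verified visit this month
        elif r.income_bdt < min_income_bdt or r.meals_per_day < min_meals:
            flagged.add(r.household_id)   # below a simple welfare threshold
    return sorted(flagged)


if __name__ == "__main__":
    sample = [
        MonthlyRecord("HH-001", "2012-09", True, 2100.0, 3),
        MonthlyRecord("HH-002", "2012-09", False, 0.0, 0),
        MonthlyRecord("HH-003", "2012-09", True, 900.0, 1),
    ]
    print(red_flags(sample))  # -> ['HH-002', 'HH-003']
```

The actual dashboard presumably works with far richer data and flags households for human follow-up by NGO partners rather than acting automatically; the point is simply that a small set of explicit rules, applied to data arriving monthly, is what turns a survey pipeline into a management tool.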

After trial and error and incremental adjustments over three pilot phases, EEP/Shiree deployed a full rollout of the system using smartphones towards the end of 2012 (three years later). EEP/Shiree project partners have over 700 smartphones equipped with the Android operating system, internet connectivity, and GPS capability, and have been monitoring over 100,000 households every month across Bangladesh, as well as accessing information through an online visualization dashboard that is updated in real time.

Here are some of the challenges they faced:

  • Bringing NGO partners on board: The NGO partners were reluctant and viewed the data collection as an imposition from above, asking “why do we have to do it?” and saying “we don’t have time.” They did not see that the data and the dashboard could also serve as a management tool for themselves. In response, NGO partners were involved in designing the questions and included in the process. It took approximately eight months for data collection to cross the 100,000-households-per-month mark, which has since been met consistently and covers most households.
  • Infrastructure constraints: Some areas still face connectivity issues when accessing the dashboard. De jure, every field officer is supposed to visit once a month, but de facto not all of them do. The sheer scale of the program makes it physically difficult to monitor. While changing the survey questions is easy (you just download the new form on your phone), changing the back-end dashboard is costly. Furthermore, by changing questions you lose the ability to compare across time (the sketch after this list illustrates why).
  • Effective use of existing data: While the data is used to respond to the needs of beneficiaries, very little predictive or trend analysis is done. The data is not used to challenge assumptions about what works or to continuously refine understanding of the dynamics of ascents out of, and descents into, extreme poverty. This is partly because no one is responsible for this task, and so it does not get done.
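
To illustrate the comparability point in the bullet on infrastructure constraints above, here is a small, hypothetical Python sketch. The form versions and field names are invented, not the actual CMS 2 schema; it only shows why a changed question silently breaks a time series unless an explicit mapping between old and new fields is maintained and the back end is reworked to use it.

```python
# Hypothetical illustration (not the actual CMS 2 schema): if a survey question
# is changed or renamed between form versions, the time series for that
# indicator breaks unless an explicit old-to-new field mapping is maintained.

# Which field carries the "income" answer in each (invented) form version.
FORM_FIELD_BY_VERSION = {"v1": "q_income", "v2": "q_income_cash"}

# Responses collected under different form versions.
responses = [
    {"month": "2012-01", "form": "v1", "q_income": 1800},       # old form
    {"month": "2013-01", "form": "v2", "q_income_cash": 2200},  # revised form
]

# A naive query against the old field silently drops the newer data point.
naive = [(r["month"], r.get("q_income")) for r in responses]
print(naive)   # [('2012-01', 1800), ('2013-01', None)]

# Mapping each response through its form version restores the series, but only
# if the old and new questions are judged comparable; that judgement, and the
# back-end rework, are where the real cost sits.
mapped = [(r["month"], r[FORM_FIELD_BY_VERSION[r["form"]]]) for r in responses]
print(mapped)  # [('2012-01', 1800), ('2013-01', 2200)]
```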

Complex problems do not have clear solutions. The fact that the donors were flexible and created the space for experimentation and innovation, allowing several pilots to be tested (each with good reasoning), is commendable. Throughout the process, EEP/Shiree and mPower co-designed CMS 2, and their continuous partnership led to a virtuous cycle of action. The leadership on both sides meets every 2-3 months to discuss what is working and what is not, which helps adapt process to technology and technology to process. Together they built a dynamic monitoring tool, proving that this can be done at scale. This is a far cry from the usual case in which a consultant comes, builds an MIS, and then leaves.

The PDIA Anthem

Need help decoding the acronym PDIA? Check out the PDIA anthem.

 

This anthem uses the instrumental from Mos Def’s “Mathematics.” It was made by a very talented student as part of an assignment for Matt Andrews’ course, Getting Things Done in Development. We had never imagined that a song could be written about PDIA, let alone a rap. Thank you.

Let me hear you say P. D. I. A.

World Bank uses PDIA in Sierra Leone

written by Salimah Samji

International development experts often tell us that they cannot do PDIA because the project processes within their organizations do not allow for flexibility. The truth, however, is that all development agencies have some sort of instrument that does allow for experimentation and flexibility. Here is an example of how a Pay and Performance project in Sierra Leone explicitly used PDIA principles.

Civil service reforms are complex in and of themselves. If you add a lack of capacity to implement programs, multiple reporting lines, demoralized civil servants, a lack of coordination amongst key agencies, and a low level of trust, the potential for success of such a reform decreases significantly. Recognizing this, the World Bank team decided to use the key principles of the PDIA framework, with support from the Leadership for Results (LforR) program, for their Pay and Performance Project in Sierra Leone. The rationale was to bring a broad range of stakeholders together and facilitate a process of collective problem and solution identification, as well as to introduce experimentation and adaptability during implementation.

They began with some short-term, results-focused Rapid Results Initiatives (RRIs) in Year 0 and Year 1. These pilots were instrumental in building the confidence of local civil servants by demonstrating that progress was possible in their context, and gave them a sense of ownership. In addition, the short feedback loops facilitated rapid experiential learning, for both government and World Bank staff, about what results were actually being achieved; in PDIA terminology, we call this strategically crawling the design space.

Specifically, they used a two-pronged, learning-by-doing process, which included:

  1. Structured team coaching throughout the implementation process: A locally based rapid results coach who had an in-depth understanding of government and public sector reform was hired to provide support to teams on a daily basis. The coach:
    • Facilitated problem solving at multiple levels in the system with team-level work,
    • Helped create action plans by breaking a huge, daunting task into smaller, easier-to-digest chunks,
    • Motivated the teams despite the challenges, and
    • Created an opportunity for the teams to learn from each other and to see how their work fit within the larger picture.
  2. Facilitated leadership fora for dialogue: One-day strategic leadership convenings between leaders and implementation teams were held at critical points. These retreats served to review progress and learning, problem-solve, facilitate reflection, make strategic decisions, and course-correct where needed. In PDIA terminology, we call this maintaining the authorizing environment.

After 20 months of implementation (February 2014), they had several hard results. More importantly, there was stronger inter- and intra-agency collaboration and increased trust and communication. The teams actually had the capacity to do things themselves. The flexibility at the design stage allowed more politically and technically feasible solutions to emerge.

So, large bureaucracies can do PDIA, and it doesn’t take forever. Bottom line: the mundane matters and cannot be ignored if a project is to succeed.

Roberto O. Panzardi and Kay Winning are in the process of publishing a paper with more details on this project. You can read about the preliminary results here.

 

What is Action Learning?

written by Matt Andrews

Action learning is a key part of PDIA. It is “a hybrid technique that allows participants to use what they learn to tackle priority problems within their companies under actual work conditions. Action learning is a social process for resolving the difficulties managers increasingly confront, where history offers no solution.

At its heart, action learning is a systematic process that increases participants’ organizational learning in order to help them respond more effectively to change. Originated by Reg Revans (1983), action learning is based on the underlying premise that there is no learning without action and no action without learning. Action learning is inextricably linked with action science. Action science (Argyris, Putnam, and Smith, 1985) provides a conceptual framework and a methodology for facilitating action learning, while Revans’s work establishes the actual form. The following processes of action science are implicit in action learning:

  • Critical reflection: bringing underlying assumptions to consciousness; testing those assumptions to determine if they are appropriate for attaining the desired goal
  • Reframing: altering assumptions that don’t accomplish desired goals
  • Unlearning and relearning: developing new sets of learned skills based on reframed assumptions; replacing old with new skills until new ones are automatic.

Action learning methodology has three main elements:

  1. Problems that people identify;
  2. People who accept responsibility for taking action on a particular issue; and
  3. Colleagues who support and challenge one another in the process of resolving the problems.

Using real tasks as the vehicle for learning, individuals, groups, or teams develop management and leadership skills while working on organizational problems and testing their assumptions against real consequences. By taking a real problem, analyzing it, and implementing solutions derived with colleagues, individuals monitor results and can be held accountable for their actions. Revans believes that if we are to cope with accelerating and turbulent change, then we must place our confidence in the lived experiences and insights of others in order to be successful.” (From Lewis, L.H. and Williams, C.J. (1994), Experiential Learning: Past and Present.)


Common Core Math: when the how undermines the what

written by Salimah Samji

Without the how, the what remains fiction — often compelling fiction. Development is littered with examples of projects/reforms that have failed because no one systematically thought through how the project/reform would actually be implemented given the local capacity and context. The common assumption is that if you design a technically sound project then implementation will magically happen by itself. Others believe that implementation happens by edict. The reality is that the mundane, while ordinary, banal and boring, can be the key to getting things done in development.

Elizabeth Green, in Sunday’s New York Times Magazine, makes a similar argument about the Common Core math standards: the new math, in the absence of new teaching, will lead to failure. The traditional approach to teaching math, which involves memorizing lists of rules, does not work. It turns out we already know this: attempts to find better ways to teach math can be traced back to the 1800s, with the most recent efforts in the 1960s and 1980s.

The key problem is innumeracy, the mathematical equivalent of not being able to read. Green’s research finds that America ranks in the bottom 5 of 20 countries in numeracy (a 2012 study comparing 16- to 65-year-olds), and that on national tests approximately 67% of 4th and 8th graders are not proficient in math. Clearly, all the past attempts to teach “new math” have failed. For the latest version of new math to be successful, teachers need to fully understand the new standards. They need training and support. In practice, however, “training is still weak and infrequent, and principals — who are no more skilled at math than their teachers — remain unprepared to offer support. Textbooks, once again, have received only surface adjustments.”

Japan has been very successful in implementing an approach similar to the Common Core. Green highlights that teachers there depend on jugyokenkyu, or lesson study, to perfect their teaching skills. This process involves planning a lesson, teaching it in front of an audience of students and other teachers, and then discussing what worked: experiential learning with very tight feedback loops. The best discussions the Japanese teachers had were the most microscopic, minute-by-minute recollections of what had occurred, with commentary … essentially, the mundane!

Changing standards alone is not enough to create or sustain change. There is a need to address the existing delivery infrastructure, to build capacity and to allow for local experimentation, learning, iteration and adaptation. This is a process which takes time and cannot be done overnight, but it has the greatest chance of success.

If you are interested in learning more, read Escaping Capability Traps through Problem Driven Iterative Adaptation (PDIA). You can also watch our BSC video series.

BSC video 31: Crawling together in Cambodia

Everyone agrees that building the rule of law is important. But building the capability of a justice system is a long and difficult process, often susceptible to isomorphic mimicry. In this video, Michael Woolcock uses the example of Cambodia’s legal system to illustrate how the Arbitration Council had to learn to negotiate together through a process that built collective capability and legitimacy. You can watch the video below or on YouTube.

You can learn more about the Cambodia Arbitration Council here, and read more about opening spaces for reform in Cambodia and Indonesia here.