Building capability: the true success of PDIA

written by Anisha Poobalan

The PDIA team has been working in Sri Lanka for the past six months with five talented and motivated government teams. This work is challenging and demands hard work from government officials, and yet through short, repeated iterations, real progress is achieved. The teams update a facilitator every two weeks while also preparing for their next two-week ‘sprint’. Once a month, the teams meet together at a ‘Launchpad’ session to update each other, evaluate their progress, adapt their action plans accordingly and set out for the next month of hard work. I have the privilege of sitting in on team meetings every week. This work takes time, perseverance and trust, and the task of attacking some of the most challenging areas in government is frustrating but absolutely worth it with each breakthrough. While impossible to articulate completely, this post attempts to reflect the ground reality of practicing PDIA in order to build state capability.

Emergence, in complexity theory, is the process by which lessons learned from new engagements and activities lead to a unique recombination of ideas and capabilities that results in unpredictable solutions. Emergence is evident in each PDIA team. For example, as one team made progress on their problem, they identified a constraint that needed to be addressed if they were to succeed. Another team had a similar realization, and eventually the idea for a potential solution cropped up and an entire team was formed around it. As one of the team members noted, ‘The more we engage, the more opportunities arise and connections are made, and we will get lucky soon!’

As the teams prepare for their lucky moment and produce tangible products, the individual capability built is the true success of this work. As one team leader said, ‘We haven’t done something like this in the 30 years I have been [working] here!’ At the first Launchpad session, a team member told me about experiences he had had at similar workshops: ‘We meet and discuss various topics and then leave. But I think this will be different, we must actually do something.’ Faced with a new challenge, undertaking a task he had no experience in, this member is now an expert who motivates the others along. From the outset, he has been determined to achieve his targets and has proven to the rest of the team that hard work and genuine interest can lead to unexpected, impressive learning and results.

Another team member, an experienced yet skeptical team leader, did not leave the first Launchpad session quite as confident. She didn’t believe this work would lead to real results and doubted they would have the necessary political backing. A few months later, she is now the most motivated, engaged, focused member of her team. ‘So many people come to collect information, then they put down their ideas in a document and give it to us to act on. This just ends up on a shelf. It’s better not to talk, but to do something – so we are happy! Especially the support from the higher-level authorizers has given us confidence to keep working.’ This team embarked on a journey from confusion to clarity. They had to trust this approach, take action and gradually fill the information gaps they did not even know existed a few months before. It has been frustrating, and yet they continue in good faith that with each piece of information gathered they are closer to a clear, achievable vision for their project. The capability to create project profiles like this has grown in this team and will be useful to their colleagues across government. These capabilities are the results of hard work, intentional engagement, and consistent expansion of authority.

Some people ask, ‘So what makes a good team? What departments should they come from or what expertise should they have?’ My answer is simple: a successful team comprises people who are willing to work; government officials willing to trust a completely new approach and work hard. Hard-working teams are essential to the success of PDIA, and while expertise, seniority, and experience may be considered necessary, without genuine hard work any team, no matter how talented, will fail. Here in Sri Lanka, each team is unique, with varying weaknesses and strengths they have learned to work around. Some teams lack strong leadership, which forces team members to take greater responsibility and ownership in decision making and motivation. Other teams have strong leadership, so some members have taken on less responsibility and at points have not contributed at all to achieving the team’s goals. Some teams have capable workers frustrated by their lack of authority, and others have the authority but lack capability. There are teams that perform well with organized deadlines and targets, while others struggle to set deadlines beyond the coming week. Each team’s composition has adapted as the work evolved, and each team has achieved great things through their diverse skill sets, past experience, commitment to real work and time-bound action.

I hope these field notes give a sense of what PDIA is like on the ground, and how this approach, although difficult and emotionally draining, can build new capabilities within government and draw out latent ones.

If you are interested in learning more about the Sri Lanka work, you can read the targeting team working paper.

Initiating action: The action-learning in PDIA

written by Matt Andrews

In a recent blog I began responding to a colleague’s questions about PDIA. One of those questions was how we move from the foundation or framing workshop in PDIA processes—where problems are constructed and deconstructed—into action and, beyond that, action learning. In this post I will offer some ideas on how we do that.

First, we push teams to action quickly: We ensure that the teams working in the framing workshops can identify clear next steps to act upon in the days that follow the workshop. These next steps need to be clear and actionable, and teams need to know that action is expected.

Second, we don’t aim for ‘perfect next steps’—just action to foster learning and engagement: The steps teams identify at the start often seem small and mundane, but our experience indicates that small and mundane steps are the way big and surprising products emerge. This is especially the case when each ‘next step’ yields learning (new information and experiential lessons) and expands engagement (with new agents, ideas, and more). The problems being addressed are either complicated or complex; expanding engagement and reach fosters the coordination needed to confront complicated problems and the interaction vital to tame complexity, while the learning is crucial in the face of the uncertainty and unknowns that typify complex problems.

Third, we create time-bound ‘push periods’ for the next-step action assignments: After the framing workshop, the PDIA process involves a set of action iterations where teams go away and take the ‘next step’ actions they identify, agreeing to meet again at a set date and time to ‘check in’ on progress. Each iteration is called a ‘push period’, in which team members push themselves and others to take action and make progress they otherwise would not.

Fourth, we convene teams for ‘check-ins’ after their push periods, and ask questions: The team reassembles after the push period, with PDIA facilitators, at the ‘check-in’ date—and reflects on four questions: ‘What was done? What was learned? What is next? What are your concerns?’ Note that the questions start by probing basic facts of action (partly to emphasize accountability for action, and also to start the reflection period off with a simple report—a basic discussion to precede deeper reflection, which often needs some context). We then ask about ‘what was learned’, where we focus on procedural and substantive lessons (about all their experiences—whether frustrating or inspiring), and learning about the context.

Fifth, facilitating learning requires nudging and pushing: We find that you often need to push participants to ask deep questions about their lessons.

  • For instance, someone may say “we tried to get Mr X to work with us, and he did not respond positively, so we learned that he does not want to work with us.”
  • We would follow up by asking, “why do you think Mr X did not respond?”
  • Often this leads to a new set of questions or observations about contexts in which work is being done (including, very importantly, the politics of engagement). In the example, for instance, the ‘why’ question raised discussion about how people engage in the government (and if the team reached out to Mr X in the right manner) and the politics of the context (the interests of Mr X and how these might be playing into his non-response).

This process facilitates learning by the teams and by our PDIA facilitators. Both the teams and the facilitators produce written documents (short, but written) about what was learned. Over time, we can keep coming back to these lessons to ensure everyone gains a better understanding of procedural, substantive, and context issues.

As a note: People often ask where we address ‘politics’ in PDIA. That requires another blog post, but hopefully you see, in the description here, the basic process of what we call Political Economy Engagement (PEE), which we prefer to Political Economy Analysis (PEA). The action steps in PDIA always involve pushes into—or engagements with—the context, and yield responses that allow one to learn about politics (who stands where, who has power, how it is exercised, etc.).

Finally, we push teams to the next steps quickly, again—which is where they ‘adapt’: You will notice that the last two questions we ask are about next steps and issues to address in taking these steps. We do not let teams get bogged down by tough lessons, but push them to think about what they can do next, adapting according to the lessons they have learned; we focus on what is important and what is possible ‘next’, given what has been learned; and we try to build and maintain momentum, given the belief that capability and results emerge after accumulated learning and engagement from multiple push periods.

In conclusion, when considered as one full iteration, the blend of programmed action with check-in questions and reflection is intended to foster action learning and promote adaptation and progress in solving the nominated problems. The combination of learning while producing results (through solving problems) is key to building new capability.

 


Some linkages to theory, literature and management practice

  1. Why we focus on learning and engagement in this process: In keeping with complexity theory, the principal idea is that action leading to novel learning, engagement and interaction fosters emergence, which is the key to finding and fitting solutions to complex problems. Further, in keeping with theory, the idea here is that any action can foster learning, and it is thus more important to get a team to act in small ways quickly than to hold them back from action until they can identify a big enough (or important enough) next step.
  2. Why we refer to ‘push periods’: The Scrum version of agile project management has similar time-bound iterations, called Sprints, which are described as ‘time-boxed’ efforts. We refer to ‘push periods’ instead of sprints partly to reflect the real challenges of doing this in governments (where CID focuses its PDIA work). Team members push themselves beyond their usual limits in these exercises, and the name recognizes this.
  3. How we draw on action learning research, and our past experiments: Our approach builds on PDIA experience in places like Mozambique, Albania and South Africa, which has attempted to operationalize the action learning ideas of Reg Revans (1980) and recent studies by Marquardt et al. (2009). These combined efforts identify learning as the product of programmed learning (which everyone is probably familiar with, and which is often provided through organized training), questioning, and reflection (L = P + Q + R), which the PDIA process attempts to foster in the structure of each iteration (with action to foster experience, a check-in with simple questions about that experience, and an opportunity for reflection facilitated by an external ‘coach’ figure). The questions asked in the PDIA check-in are much more abbreviated than those suggested by Revans and others, largely because experience with this work in busy governments suggests that there are major limits to the time and patience of officials, and asking more questions can be counter-productive (leading to non-participation in the reflection process). The questions posed to teams are thus used to open opportunities for additional questions, like ‘who needed to be engaged and was not?’, ‘why did you not do what you said you would?’ or ‘what is the main obstacle facing your team now?’ As the team progresses through iterations, they start to ask these more specified questions themselves, and come into the check-in reflection session with such questions in their own minds.

If you are interested in reading the Sri Lanka working paper, you can find the full version here.

Active and adaptive planning versus set plans in PDIA

written by Matt Andrews

A colleague asked me two questions in response to last week’s blog on initiating PDIA:

  1. “It does not sound like you develop a thorough plan for action. Is this correct?”
  2. “How do you move from the workshop to action, and particularly to action learning?”

I will reflect on these questions in future blog posts, but today I will only address the first one.

It is probably easiest to say that we emphasize planning instead of plans when doing PDIA, where the former is a process of engaging around a problem and the latter is simply a documented step-by-step strategy.

We believe that planning allows for learning and interaction by those who will do the work, and this is immensely useful.

We also see planning as an ongoing process that is active and adaptive, and does not happen in one moment or manifest in one document.

That is not to say we do not start with a defined planning exercise. We do.

The planning process is initiated in what we call the initial Launchpad event, which I described in the last blog posting. It takes about a day, during which we (the in-country PDIA facilitators) work with a number of internal teams addressing festering problems in their governments.

The internal teams do the work at these events, and our PDIA folks just facilitate the process, taking the teams through aggressive sessions of constructing and deconstructing problems, identifying entry points for action, and actual action steps (as described in my prior blog and in the early sections of our new working paper on one of the teams in Sri Lanka).

This initial planning activity does generate a one page action agenda (or plan of sorts), which is intentionally short and simple. It includes the following information:

  • a description of the problem, and why the problem matters,
  • the causes of the problem,
  • what the problem might look like solved (and especially what this kind of result would look like in the time period we are working with—usually 6 months),
  • what the ‘indicative’ results might involve at the 4-month and 2-month marks (working backwards, we ask ‘where would you need to be to get to the 6-month problem-solved result?’),
  • fully specified next steps (where the teams identify what they will do in the next two weeks and what they plan to do in the two weeks after that), and
  • what is assumed in terms of authorization of the next steps, acceptance of these next steps, and abilities to do the next steps (we want teams to specify their assumptions so we can track and learn where they are right and wrong, and adapt accordingly in future steps).

This kind of content will be familiar to those who know about our Searchframe concept; it is pretty much the basis of the Searchframe. We don’t often have teams build the Searchframe itself, but it is a heuristic that allows us to work with some type of ‘plan’, one that is not overly prescriptive and limiting.

This initial planning exercise is only the start of the work, however, and “you never end up where you start in PDIA.”

The exercise is the key to getting started, and its main goal is to empower a quick progression to action. Thus, the most important thing is the listing of key action steps to take next, and a date to come back and ask about how those action steps went, what was learned, and what will be next.

A ‘check-in’ event date is usually set about two or three weeks in the future, and we inform the teams that they will be involved in a ‘push period’ between events. (This is similar to a ‘sprint’ in agile processes, but we think the idea of ‘pushing yourself and your organization’ is more apt in the governments we work with, so we call it a ‘push period’.) The two to three week push period provides enough time for teams to act on their next steps but also creates a time boundary necessary to promote urgency and build momentum.

The team meets in-between the events, and works to take the steps they ‘planned’. Then they meet at the check in and answer a series of questions: What did you do? What did you learn? What are you struggling with? What is next?

The answers to these questions provide the basis of learning and adaptation, and allow the teams to adjust their assumptions, next steps, and even (sometimes) expectations. They do this iteratively every few weeks, often finding that their adaptations become smaller over time (as they learn more and engage more, they become more sure of what they are doing and more clear about how to do it).

As such, the initial planning exercise is not the main event, and the initial ‘plan’ is not the main document—or even the final document. The event is just the start of an iterative action-infused planning process, where a loose plan is constantly being adapted until one knows where to go.

From this description, hopefully it is clear that we do foster planning and even a plan, but in ways that are quite different to common approaches to doing development:

  • The work is done by the internal teams, not external partners (consultants or people in donor agencies). We as the external PDIA facilitators may nudge thinking in some directions during the process, but the work is not ours. This is because ownership is a real thing that comes by owners doing the thing they must own, and when outsiders take the work from the owners it undermines ownership.
  • The initial planning exercise and one-page plan is not perfect, and is often not as infused with evidence and data as many development practitioners would hope. I am an academic and I believe in data and evidence, but in the PDIA process we find that government teams often do not have the data or evidence at hand to do rapid planning, or do not have the capability to use it. Waiting on data to develop a perfect plan slows the progression to action and to learning by doing. We have thus learned that it is better not to create a huge ‘evidence hurdle’ at the initial stage. We find that, where evidence and data matter, the teams often steer themselves to action around data sourcing, analysis and the like once planning is underway. This allows them to find and analyze evidence as part of the process, and build lessons from it into their work.

[A note here is important, given that some recent commentaries have placed ‘using evidence’ and even ‘big data’ as central to the PDIA process and Doing Development Differently. I love data, and think that there is a huge role for using evidence and big data in development, but I do not see how either is a central part of PDIA. Hardcore evidence-based policymaking and big data analysis tend to depend on narrow groups of highly skilled insiders (or more commonly outsiders), the availability of significant amounts of data, and a fairly long process of analysis that yields—most commonly—a document as a result. These are not the hallmarks of effective PDIA, and I would caution against treating ‘access to evidence’ or ‘using Big Data’ as key signs that one is doing PDIA.]

I hope this post has been useful in helping explain our thoughts on plans versus planning. Next week I will describe how we do the iterations in a bit more detail. Remember to read our book on these topics (it is FREE through open access) and read the new working paper on PDIA in action in one of the teams in Sri Lanka. We will produce more of these active narratives soon.

 

Initiating PDIA: Start by running…and then run some more

written by Matt Andrews

“Once there is interest, how do you start a PDIA project?”

Many people have asked me this question. They are often in consulting firms or donor agencies thinking about working on PDIA with host governments, or in some central bureau in the government itself.

“We have an authorizer, know the itch that needs scratching (the problem), and have a team convened to address it,” they say. “But we don’t know what to do to get the work off the ground.”

I ask what they would think of doing, and they typically provide one of the following answers:

“We should do research on the problem (the itch)” or “We should hold a multi-day workshop where people get to analyze the problem and really get used to a problem-driven approach.”

I have tried starting PDIA with both strategies. Neither is effective in getting the process going.

  • When outsiders (donors, academics, or even central agencies responsible for making but not implementing policy) do the primary research on ‘the problem’, their product is usually a report that sits on shelves. If you start with such a product it is hard to reorient people to change their learned behavior and actually use the report.
  • When you hold an elaborate workshop, using design thinking, fancy analysis, or the like, it is very easy to get stuck in performance—or in a fun and exciting new activity. We find people in governments do attend such events and have fun in them, but often get lost in the discussion or analysis and stay stuck in that place.

Having tried these and other strategies to initiate PDIA interventions, we at Harvard BSC have learned (by doing, reflection, and trying again…) some basic principles about what does not work in getting started, and what does work. Here are a few of these findings:

  • It does not work when outsiders analyze the problem on behalf of those who will act to solve it. It works when those in the insider PDIA teams construct and deconstruct the problem (whether they do this ‘right’ or ‘wrong’). The insiders must own the process, and the outsiders must ‘give the work back’ to the rightful owners.
  • It does not work to stage long introductory workshops to launch PDIA processes, as participants either get frustrated with the time away from work or distracted by the workshop itself. Either way they get stuck and the workshop does not mobilize their action. It works if you convene teams for short ‘launchpad-type events’ where they engage rapidly and move as rapidly to action (beyond talk). We are always anxious to move internal PDIA teams to action. The meetings are simply staging events: they are not what ‘doing PDIA’ is actually about.

Acting on these principles, we now always start PDIA running.

We bring internal teams together, and in a day (or at most a day and a half) we ‘launch’ through a series of sessions that (i) introduce them to the PDIA method, (ii) have them construct their problems, (iii) deconstruct those problems, (iv) identify entry points for action, and (v) specify three or more initial practical steps they can take to start addressing these entry points. At the end of the session they go away with their problem analysis and their next-step action commitments, as well as a date when they will again meet a facilitator to discuss their action, and learn by reflection.

This is a lot to get done in a short period. This is intentional, as we are trying to model upfront the importance of acting quickly to create the basis of progress and learning. We use time limits on every activity to establish this kind of pressure, and push all team members to ‘do something’, then ‘stop and reflect’, and then do the next thing.

When we get to the end of each Launchpad event, the internal teams have their own ‘next step’ strategies, and a clear view that the PDIA process has now started: they are already running, and acting, and engaging in a new and difficult space. And they know what they need to do next, and what date in the near future they will account for their progress, be asked about their learning, and pushed to identify more ‘next steps’.

When I tell interested parties in donor agencies, consulting firms, etc. about our ‘start by running’ approach, they have a number of common responses:

“It does not sound like anyone is doing a proper diagnosis of the problem: what happens if the team gets it wrong?”

“What happens if the team identifies next steps that make no sense?”

“This strategy could be a disaster if you have the wrong people in the room—who don’t know what they are doing or who have a biased view on what they are doing…”

These concerns are real, but really don’t matter much in the PDIA process:

  • We don’t believe that initial problem diagnostics are commonly correct when one starts a program (no matter how smart the researchers doing the analysis).
  • We also don’t believe that you commonly identify the right ‘next steps’ from a study or a discussion.
  • And we also don’t believe that these kinds of processes are ever unbiased, or that you commonly get the right people in the room at the start of a process.

We don’t believe you address these concerns by doing great up-front research. Rather, we aim to get the teams into action as quickly as possible, where the action creates opportunity for reflection, and reflection informs constant experiential learning—about the problem, past and next steps, and who should be involved in the process. This learning resides in the actors involved in the doing, and prompts their adaptation, which leads to greater capability and constant improvement in how they see the problem, think of potential solutions, and engage others to make these solutions happen.

A final note:

When I discussed this strategy with a friend charged with ‘doing PDIA’ as part of a contract with a well-known bilateral donor, he lamented: “You are telling me the workshop is but a launching event for the real PDIA process of acting, reflecting, learning and adapting….but I was hired to do a workshop as if it was DOING PDIA. No one spoke of getting into action after the workshop.”

To this colleague—and the donors that hired him—I say simply, “PDIA is about getting people involved, and acting, and you always need to get to action fast. PDIA must start by running, and must keep teams running afterwards. Anything that happens one-off, or that promotes slow progress and limited repeated engagement is simply not PDIA.”

Learn more about initiating PDIA in practice in chapters 7 and 9 of our free book, Building State Capability: Evidence, Analysis, Action.

PDIA and Authorizers with an itch

written by Matt Andrews

“How do you decide where to work on a PDIA project?”  This is probably the most common question I have been asked with respect to PDIA.

After over 5 years of doing this work in a variety of countries and sectors, I have a simple answer: “When we find authorizers with an itch.”

“That sounds bizarre,” I hear you say. Or maybe you think I’m just being cute to fit in with a playful blogging technique.

No, authorizers with an itch are key to starting any PDIA initiative.

When I say we need counterparts with an itch, I mean that they are very aware of a problem they can’t solve. Like an itch you can’t scratch, or that you scratch again and again but to no avail. This is usually a policy problem that has come to the surface one too many times, usually where various prior reforms or policies or interventions have not provided effective solutions.

Stubborn itches create frustration and even desperation, which can create the space for doing things differently—and taking risks. PDIA needs this kind of space, and this motivating influence. Without it, we have found very little room to focus on the problem, and learn-by-doing towards a new solution.

There are downsides to working to scratch a stubborn itch. The fact that others have tried scratching it, to no avail, means that it is usually going to be ‘wicked hard’ to solve (so don’t expect an easy path to a solution). The fact that it seems to move around (sometimes itching here and sometimes there) reflects the many unseen and even dynamic factors that cause the itch itself (like nasty politics or bureaucratic dysfunction). Don’t expect these factors to go away just because you are tackling the problem with PDIA. You will hit the nastiness soon. Be ready.

When I say we need ‘authorizers’ to start, it is because the PDIA work we do is always in the public domain, where no real work (with action attached) is done without someone’s explicit authorization. The required authorizer is always, in my experience, someone inside the context undergoing change. This means the work cannot be ordered or organized or identified by an external agent (donor, consultant, or even academic).

My team at Harvard found this out the hard way. As you will read in a forthcoming working paper by Stuart Russell, Peter Harrington and me, we have experimented with PDIA initiatives where problems are identified in different ways. We have had limited success whenever anyone from Harvard or an external entity (like a donor) has been the main identifier of the problem. In contrast, we have almost always had some success when the problem was identified by a domestic authorizer in the place undergoing change.

This is simply because the internal authorizer needs to have internal authority: at the least, to convene a group of internal people to start engaging with the problem, and beyond this to protect the PDIA process from threats and distractions. No external party can do this.

Beyond convening authority, we find that the authorizers need to provide three types of authorization: shareable authorization (where they allow the engagement of other authorizers in the process of scratching the itch), flexible authorization (which allows for an experimental process), and patient (or grit) authorization (where one can expect some continued support in the search for an effective ‘scratch’ solution).

These are big authorization needs, and one does not know if they will be met at the start of the PDIA process. But they tend to come when authorizers face an itch (making them willing to share, adaptive in demands, and patient for a real solution).

We find, therefore, that there is enough space to initiate a PDIA initiative if we find an authorizer with an itch she cannot scratch. That’s where we start our work, buckling our seat belts and getting ready for a journey of, and to, the unexpected.

Are you in a situation where an authorizer is facing a stubborn itch? Maybe you have space to ask, “What’s the problem…and can we mobilize a team to try something different to solve it?”

Learn more about engaging authorizers around problems that matter in chapters 6 and 9 of our free book, Building State Capability: Evidence, Analysis, Action.

 

PDIA Notes 2: Learning to Learn

written by Peter Harrington

After over two years of working with the government of Albania, and as we embark on a new project to work with the government of Sri Lanka, we at the Building State Capability program (BSC) have been learning a lot about doing PDIA in practice.

Lessons have been big and small, practical and theoretical – an emerging body of observations and experience that is constantly informing our work. Amongst other things, we are finding that teams are proving an effective vehicle for tackling problems. We have found that a lot of structure and regular, tight loops of iteration are helping teams reflect and learn. We have found that it is vital to engage with several levels of the bureaucracy at the same time to ensure a stable authorising space for new things to happen. This all amounts to a sort of ‘thick engagement’, where little-and-often type interaction, woven in at many levels, bears more fruit than big set-piece interventions.

Each of these lessons is deserving of deeper exploration in its own right, and we will explore them in subsequent posts. For now, I want to draw out some reflections about the real goal of our work, and our theory of change.

In the capacity-building arena, the latest wisdom holds that the best learning comes from doing. We think this is right. Capacity building models that rely purely on workshop or classroom settings and interactions are less effective in creating new know-how than interventions that work alongside officials on real projects, allowing them to learn by working on the job. Many organisations working in the development space now explicitly incorporate this into their methodology, and in so doing promise to ensure delivery of something important alongside the capacity building (think of external organizations that offer assistance in delivery, often by placing advisers into government departments, and promise to ensure a certain goal is achieved and the government capacity to deliver is also enhanced).

It sounds like a win-win (building capabilities while achieving delivery). The problem is that, in practice, when the implementers in government inevitably wobble, get distracted, or are pulled off the project by an unsupportive boss (or whatever else happens to undermine the process, as has probably happened many times before), the external advisors end up intervening to get the thing done, because that’s what was promised, what the funder often cares more about, and what is measurable.

When that happens, the learning stops. The very idea of learning by doing stops too, because the rescue work by external actors signals that learning by doing (and failing, at least partially, in the process) was at best a secondary objective, and maybe not even a serious one. Think about anything you have ever learned in your life, whether at school or as an adult. If you knew someone was standing by to catch a dropped ball, or in practice was doing most of the legwork, would you have really learned anything? For the institutions where we work, although the deliverable may have been delivered, when the engagement expires nothing will have changed in the way the institution works in the long run. This applies equally, by the way, to any institution or learning process, anywhere in the world.

The riddle here is this: what really makes things change and improve in an institution, such that delivery is enhanced and capability to deliver is strengthened? The answer is complex, but it boils down to people in the context doing things differently – being empowered to find out what different is and actually pursue it themselves.

In pursuing this answer, we regularly deploy the concept of ‘positive deviance’ in our work: successful behaviors or strategies that enable some people to find better solutions to a problem than their peers, despite facing similar challenges and having no more resources or knowledge. Such people are everywhere, sometimes succeeding and, depending on the conditions, sometimes failing to change the way things work – either through their own force of will, or by modelling something different. Methods to find and empower positive deviants within a community have existed for many years. But what if, by cultivating a habit of self-directed work and problem solving, it were possible not just to discover positive deviants but to create new ones?

Doing things differently stems from thinking differently, and you only think differently when you learn – it’s more or less the definition of learning. Looked at this way, learning becomes the sine qua non of institutional change. It may not be sufficient on its own – structures, systems and processes still matter – but without a change in paradigm among a critical mass of deviants, those other things (which are the stuff of more traditional interventions) will always teeter on the brink of isomorphism.

We believe that positive deviance comes from learning, especially learning in a self-directed way, and learning about things that matter to the people doing them. If you can catalyse this kind of learning in individuals, you create a different kind of agency for change. If you can go beyond this and catalyse this kind of learning in groups of individuals within an institution or set of institutions, and create a sufficiently strong holding space for their positive deviance to fertilise and affect others, then gradually whole systems can change. In fact, I’d be surprised if there is any other way it happens. As Margaret Mead put it, “Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.”

This is our theory of change. The methods we use – particularly the structured 6-month intensive learning and action workshop we call Launchpad – are trying above all to accelerate this learning by creating a safe space in which to experiment, teach ideas and methods that disrupt the status quo, and create new team dynamics and work habits among bureaucrats. By working with senior and political leaders at the same time, we are trying to foster different management habits, to help prevent positive deviance being stamped out. In doing all this, the goal is to cultivate individuals, teams, departments and ultimately institutions that have a habit of learning – which is what equips them to adapt and solve their own problems.

This does not mean that the model is necessarily better at achieving project delivery than other methods out there, although so far it has been effective at that too. The difference is that we are willing to let individuals or even teams fail to deliver, because failure is critical for the learning, and without learning there is no change in the long term. Doing this is sometimes frustrating and costly, and certainly requires us to grit our teeth and not intervene, but what we see so often is agents and groups of agents working their way out of tricky situations with better ideas and performance than when they went in. They are more empowered and capable of providing the agency needed for their countries’ development. This is the goal, and it can be achieved.

 

 

PDIA: It doesn’t matter what you call it, it matters that you do it

written by Matt Andrews

It is nearly two years since we at the Building State Capability (BSC) program combined with various other groups across the developing world to create an umbrella movement called Doing Development Differently (DDD). The new acronym was meant to provide a convening body for all those entities and people trying to use new methods to achieve development’s goals. We were part of this group with our own approach, which many know as Problem Driven Iterative Adaptation (PDIA). 

Interestingly, a few of the DDD folks thought we should change this acronym and call PDIA something fresher, cooler, and more interesting; it was too clunky, they said, to ever really catch on, and needed to be called something like ‘agile’ or ‘lean’ (to borrow from approaches we see as influential cousins in the private domain).

The DDD movement has grown quite a bit in the last few years, with many donor organizations embracing the acronym in their work, and some even advocating for doing PDIA in their projects and interventions. A number of aid organizations and development consultancies have developed other, fresher terms to represent their approaches to DDD as well; the most common word we see now is ‘adaptive’, with various organizations creating ‘adaptive’ units or drawing up processes for doing ‘adaptive’ work.

‘Adaptive programming’ certainly rolls off the tongue easier than ‘Problem Driven Iterative Adaptation’!

Some have asked me why we don’t change our approach and call it ‘adaptive’ as well. Others have asked where we have been while all the discussions about names, titles, and acronyms have been going on, and while organizations in the aid world have been developing proposals for adaptive projects and the like (some of which have now turned into large tenders for consulting opportunities). My answer is simple: I’ve made peace with the fact that we are much more interested in trying to work out how to do this work in the places it is needed the most (in implementing entities within governments that struggle to get stuff done).

So, we have been working out how to do our PDIA work (where the acronym really reflects what we believe—that complex issues in development can only be addressed through problem driven, iterative, and adaptive processes). Our observation, from taking an action research approach to over twenty policy and reform engagements, a light-touch teaching intervention with over 40 government officials, an online teaching program, and more, is clear: the people we work with (and who actually do the work) in governments don’t really care for the catchy name or acronym, or if PDIA is clunky or novel or old and mainstream. The people we are working with are simply interested in finding help: to empower their organizations by building real capability through the successful achievement of results.

We thus seldom even talk about PDIA, or adaptive programming, or DDD, or agile or lean, or whatever else we talk about in our classrooms and seminar venues back at Harvard (and in many of our blog posts and tweets). Indeed, we find that a key dimension of this work, the thing that makes it work, is not being flashy and cool and cutting edge. It’s about being real and applied, and organic, and relational. And actually quite nondescript and mundane; even boringly focused on helping people do the everyday things that have eluded them.

So, PDIA’s lack of ‘flash’ and ‘coolness’ may be its greatest asset (and one we never knew about), because it does not matter what PDIA is called…what matters is whether it is being done.