How do you know when to use PDIA?

written by Salimah Samji

We often get asked the question “why do you need to use PDIA for a problem that we already know how to solve?” The answer is simple. You don’t. If people have already crawled the design space and figured out how to solve a type of problem, then by all means, you should just apply the known solution.

We have developed two ways to help you determine whether PDIA makes sense for your problem or activity.

  • The first one asks four questions in order to determine the typology of your problem and the kind of capability required to solve it. If, for example, your activity is “implementation intensive” or “wicked hard”, PDIA might be worthwhile. For more, watch this video by Lant Pritchett or read chapter 5 of the Building State Capability book.
  • The second one looks at what capabilities exist to tackle a specific problem in a given context. We use an exercise to illustrate this whereby one is challenged to construct a journey from St. Louis, Missouri to the West Coast of the United States in two different contexts. The first is the year 2015 and the second is the year 1804. The capabilities required in these two contexts are radically different, as will be the approach to solving the challenge. If your problem looks like the 1804 challenge (the lack of a map, etc.), then PDIA might be the right approach for you. For more, watch this video by Matt Andrews or read chapter 6 of the Building State Capability book.

We use both of these in our PDIA online course and we have found that the visual and experiential nature of the 1804 exercise really helps drive this point home.

So you can imagine my delight when I saw that Chris Blattman highlighted both of these frameworks in his lecture notes on building state capability for his political economy of development course this week. He also wrote “This week’s lecture draws heavily on one of the most important books on development I’ve ever read: Building State Capability by Harvard’s Matt Andrews, Lant Pritchett, and Michael Woolcock.”

 

Motivating teams to muddle through

written by Anisha Poobalan

In theory, PDIA seemed like the most logical, straightforward way to go about solving a problem. A team is formed, they deconstruct the identified problem and then attack each causal area, learning and adapting as they go. Being in the field, meeting with the teams weekly, hearing about the obstacles cropping up at each turn, I realize how frustrating and discouraging this work can be. The first challenge is to get the government officials working, but then comes the task of motivating them to keep at it. The temptation to just give up and revert to the status quo grows greater with each pushback they face.

Motivation is central to this work and motivation is difficult. Each team responds to different methods of motivation at different stages on their journey. Various strategies might boost a team for a week or two before they slow down again. In the past two months, the teams were motivated by presentations to high level authorities, responsibility sheets, healthy inter-team competition, inspiring stories from successful economies, brutally honest conversations, site visits, and more. A common factor in all these strategies is the accountability it creates. Creating a culture in which mid-level civil servants are inspired, empowered and then held accountable for delivering real outputs, is necessary if they are to remain motivated.

Throughout the project, teams voiced concerns about their lack of authorization. They doubted that superiors would support their work and proposals, and this demotivated them. One team worried that policymakers would not incorporate their proposals and that inputs from external consultants might outweigh the team’s findings. Another team questioned their authority to directly engage with investors, and yet another worried about their inability to influence change. Over the past two months, teams have presented to and received the support of several high-level policymakers, ministries and stakeholders. Much to the teams’ surprise, their superiors are keen to expedite approvals, empower the teams, and take ownership of the proposals made. Real work led to engagement, which led to authorization, and this high-level support and expectation has motivated the teams beyond belief.

Inter-team meetings and synergies motivate and create accountability as well. The teams eventually understood how dependent they were on each other: success for one team meant success for the whole group. If one team was slacking or faced a roadblock, the output from another team might not be demanded or used to its full potential. For example, when two inter-dependent teams met for the first time, they realized that although the output from the first team was theoretically world-class, real-world experience and engagements were necessary to inform these results. That was a gap the second team now had the capability to fill. This meeting helped link their new, or in some cases latent, capabilities, and this growing interdependence has created accountability for each team to deliver.

As one team continued to work, they identified a gap in the economy that would challenge their success in the future. They were overwhelmed by the severity of the problem and realized they did not have the bandwidth to address it themselves. Much to their relief, however, at the next launchpad session they found that another team was already addressing this issue, and they could assure external parties that the challenge was being handled. That team, in turn, worked harder at filling the gap once they knew another group was depending on them to succeed. These are big steps in a system that lacks synergy and suffers from severe coordination failure.

Navigating the local landscape in any context is difficult, but some of these officers have struggled with repeated coordination failure for almost 30 years. This leads to frustration, discouragement and cynicism about change. One of the teams experienced this when trying to share a summary document with another government agency. They had to share this document to get support from higher-level officials and expedite their work. What should have taken two days took over two weeks. It was a disheartening but useful lesson, and the team is learning to plan ahead, follow up and build such delays into their timeline. Another team is still waiting on the approval of a document submitted around six months ago. The time and energy spent on inter- and intra-agency coordination is frustrating, but the teams have made considerable progress despite the difficulties. Their persistence and continued efforts are inspiring, and we hope that these notes will encourage you to persevere in your own challenging contexts.

Building capability: the true success of PDIA

written by Anisha Poobalan

The PDIA team has been working in Sri Lanka for the past six months with five talented and motivated government teams. This work is challenging and demands hard work by government officials, and yet through short, repeated iterations, real progress is achieved. The teams update a facilitator every two weeks while also preparing for their next two-week ‘sprint’. Once a month, the teams meet together at a ‘Launchpad’ session to update each other, evaluate their progress, adapt their action plans accordingly and set out for the next month of hard work. I have the privilege of sitting in on team meetings every week. This work takes time, perseverance and trust, and the task of attacking some of the most challenging areas in government is frustrating but absolutely worth it with each breakthrough. While impossible to articulate completely, this post attempts to reflect the ground reality of practicing PDIA in order to build state capability.

Emergence, in complexity theory, is the process by which lessons learned from new engagements and activities lead to a unique recombination of ideas and capabilities that result in unpredictable solutions. Emergence is evident in each PDIA team. For example, as one team made progress on their problem, they identified a constraint that needed to be addressed if they were to succeed. Another team had a similar realization and eventually the idea for a potential solution cropped up and an entire team was formed around it. As one of the team members noted, the more we engage, the more opportunities arise and connections are made and we will get lucky soon!

As the teams prepare for their lucky moment and produce tangible products, the individual capability built is the true success of this work. As one team leader said, ‘We haven’t done something like this in the 30 years I have been [working] here!’ At the first launchpad session, a team member told me about experiences he had had at similar workshops. ‘We meet and discuss various topics and then leave. But I think this will be different, we must actually do something.’ Faced with a new challenge, undertaking a task he had no experience in, this member is now an expert who motivates the others along. From the outset, he has been determined to achieve his targets and has proven to the rest of the team that hard work and genuine interest can lead to unexpected, impressive learning and results.

Another team member, an experienced yet skeptical team leader, did not leave the first launchpad session quite as confident. She didn’t believe this work would lead to real results and doubted they would have the necessary political backing. A few months later, she is now the most motivated, engaged, focused member of her team. ‘So many people come to collect information, then they put down their ideas in a document and give it to us to act on. This just ends up on a shelf. It’s better not to talk, but to do something – so we are happy! Especially the support from the higher-level authorizers has given us confidence to keep working’. This team embarked on a journey from confusion to clarity. They had to trust this approach, take action and gradually fill information gaps they did not even know existed a few months before. It has been frustrating, and yet they continue in good faith that with each piece of information gathered they are closer to a clear, achievable vision for their project. The capability to create project profiles like this has grown in the team and will be useful to their colleagues across government. These capabilities are the result of hard work, intentional engagement, and consistent expansion of authority.

Some people ask, ‘So what makes a good team? What departments should they come from or what expertise should they have?’ My answer is simple: a successful team comprises people who are willing to work; government officials willing to trust a completely new approach and work hard. Hardworking teams are essential to the success of PDIA, and while expertise, seniority, and experience may be considered necessary, without genuine hard work any team, no matter how talented, will fail. Here in Sri Lanka, each team is unique, with varying weaknesses and strengths they have learned to work around. Some teams lack strong leadership, which forces team members to take greater responsibility and ownership in decision making and motivation. Other teams have strong leadership, so some members take on less responsibility and at points don’t contribute at all to achieving the team’s goals. Some teams have capable workers frustrated by their lack of authority, and others have the authority but lack capability. There are teams that perform well with organized deadlines and targets, while others struggle to set deadlines beyond the coming week. Each team’s composition has adapted as the work evolved, and each team has achieved great things through their diverse skill sets, past experience, commitment to real work and time-bound action.

I hope these field notes help give a sense of what PDIA is like on the ground and how this approach, although difficult and emotionally draining, can lead to new, or make use of latent, capabilities within government.

If you are interested in learning more about the Sri Lanka work, you can read the targeting team working paper.

Initiating action: The action-learning in PDIA

written by Matt Andrews

I recently wrote a blog in response to a question a colleague asked about how we move from the foundation or framing workshop in PDIA processes—where problems are constructed and deconstructed—into action and, beyond that, action learning. In this post I will offer some ideas on how we do that.

First, we push teams to action quickly: We ensure that the teams working in the framing workshops can identify clear next steps to act upon in the days that follow the workshop. These next steps need to be clear and actionable, and teams need to know that action is expected.

Second, we don’t aim for ‘perfect next steps’—just action to foster learning and engagement: The steps teams identify to start with often seem small and mundane, but our experience indicates that small and mundane steps are the way in which big and surprising products emerge. This is especially the case when each ‘next step’ yields learning (with new information, and experiential lessons) and expands engagement (with new agents, ideas, and more). This is because the problems being addressed are either complicated or complex, and are addressed by expanding engagement and reach (which fosters the coordination needed to confront complicated problems, and the interaction vital to tame complexity) and leads to learning (which is crucial in the face of the uncertainty and unknowns that typify complex problems).

Third, we create time-bound ‘push periods’ for the next step action assignments: After the framing workshop, the PDIA process involves a set of action iterations where teams go away and take the ‘next step’ actions they identify, agreeing to meet again at a set date and time to ‘check in’ on progress. Each iteration is called a ‘push period’, in which team members push themselves and others to take action and make progress they otherwise would not.

Fourth, we convene teams for ‘check-ins’ after their push periods, and ask questions: The team reassembles after the push period, with PDIA facilitators, at the ‘check-in’ date—and reflects on four questions: ‘What was done? What was learned? What is next? What are your concerns?’ Note that the questions start by probing basic facts of action (partly to emphasize accountability for action, and also to start the reflection period off with a simple report—a basic discussion to precede deeper reflection, which often needs some context). We then ask about ‘what was learned’, where we focus on procedural and substantive lessons (about all their experiences—whether frustrating or inspiring), and learning about the context.

Fifth, facilitating learning requires nudging and pushing: We find that you often need to push participants to ask deep questions about their lessons.

  • For instance, someone may say “we tried to get Mr X to work with us, and he did not respond positively, so we learned that he does not want to work with us.”
  • We would follow up by asking, “why do you think Mr X did not respond?”
  • Often this leads to a new set of questions or observations about the contexts in which work is being done (including, very importantly, the politics of engagement). In the example above, the ‘why’ question raised discussion about how people engage in the government (and whether the team reached out to Mr X in the right manner) and the politics of the context (the interests of Mr X and how these might be playing into his non-response).

This process facilitates learning by the teams and by our PDIA facilitators. Both the teams and the facilitators produce written documents (short, but written) about what was learned. Over time, we can keep coming back to these lessons to ensure everyone gains a better understanding of procedural, substantive, and context issues.

As a note: People often ask where we address ‘politics’ in PDIA. That requires another blog post, but hopefully you see, in the description here, the basic process of what we call Political Economy Engagement (PEE), which we prefer to Political Economy Analysis (PEA). The action steps in PDIA always involve pushes into, or engagements with, the context and yield responses that allow one to learn about politics (who stands where, who has power, how it is exercised, etc.).

Finally, we push teams to the next steps quickly, again—which is where they ‘adapt’: You will notice that the last two questions we ask are about next steps and issues to address in taking these steps. We do not let teams get bogged down by tough lessons, but push them to think about what they can do next, adapting according to the lessons they have learned; we focus on what is important and what is possible ‘next’, given what has been learned; and we try to build and maintain momentum, given the belief that capability and results emerge after accumulated learning and engagement from multiple push periods.
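The iteration described in these steps (plan next steps, act during a push period, then answer the four check-in questions and adapt) can be sketched as a small loop. This is purely an illustration under my own naming assumptions; the function and its arguments are not part of any PDIA tooling, only the four question strings come from the text above.

```python
# The four questions asked at the check-in after each push period.
CHECK_IN_QUESTIONS = [
    "What was done?",
    "What was learned?",
    "What is next?",
    "What are your concerns?",
]

def run_push_period(planned_steps, act, reflect):
    """One PDIA iteration: the team acts on its planned next steps,
    then reflects on the four questions at the check-in. The answer
    to 'What is next?' seeds the following push period."""
    outcomes = [act(step) for step in planned_steps]                 # the push period
    answers = {q: reflect(q, outcomes) for q in CHECK_IN_QUESTIONS}  # the check-in
    return answers["What is next?"]
```

For instance, a push period in which an outreach step draws no response might return a next step of probing why it failed, mirroring the ‘why’ questioning described above.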

In conclusion, when considered as one full iteration, the blend of programmed action with check-in questions and reflection is intended to foster action learning and promote adaptation and progress in solving the nominated problems. The combination of learning while producing results (through solving problems) is key to building new capability.

 


Some linkages to theory, literature and management practice

  1. Why we focus on learning and engagement in this process: In keeping with complexity theory, the principal idea is that action leading to novel learning and engagement and interaction fosters emergence, which is the key to finding and fitting solutions to complex problems. Further, in keeping with theory, the idea here is that any action can foster learning, and it is thus more important to get a team to act in small ways quickly than to hold them away from action until they can identify a big enough (or important enough) next step.
  2. Why we refer to ‘push periods’: The Scrum version of agile project management has similar time-bound iterations, called Sprints, which are described as ‘time-boxed’ efforts. We refer to ‘push periods’ instead of sprints, partly to reflect the real challenges of doing this in governments (where CID focuses its PDIA work). Team members are pushing themselves to go beyond themselves in these exercises, and the name recognizes such.
  3. How we draw on action learning research, and our past experiments: Our approach builds on PDIA experience in places like Mozambique, Albania and South Africa, which has attempted to operationalize action learning ideas of Reg Revans (1980) and recent studies by Marquardt et al. (2009). These combined efforts identify learning as the product of programmed learning (which everyone is probably familiar with, and is often provided through organized training), questioning, and reflection (L=P+Q+R), which the PDIA process attempts to foster in the structure of each iteration (with action to foster experience, a check-in with simple questions about such experience, and an opportunity for reflection—facilitated by an external ‘coach’ figure). The questions asked in the PDIA check-in are much more abbreviated than those suggested by Revans and others, largely because experience with this work in busy governments suggests that there are major limits to the time and patience of officials, and asking more questions can be counter-productive (and lead to non-participation in the reflection process). The questions posed to teams are thus used to open opportunities for additional questions: like ‘who needed to be engaged and was not?’ or ‘why did you not do what you said you would?’ or ‘what is the main obstacle facing your team now?’ As the team progresses through iterations, they start to ask these more specified questions themselves, and come into the check-in reflection session with such questions in their own minds.

If you are interested in reading the Sri Lanka working paper, you can find the full version here.

Building State Capability: Review of an important (and practical) new book

Guest blog by Duncan Green

Jetlag is a book reviewer’s best friend. In the bleary small hours in NZ and now Australia, I have been catching up on my reading. The latest was ‘Building State Capability’, by Matt Andrews, Lant Pritchett and Michael Woolcock, which builds brilliantly on Matt’s 2013 book and the subsequent work of all 3 authors in trying to find practical ways to help reform state systems in dozens of developing countries (see the BSC website for more). Building State Capability is published by OUP, who agreed to make it available as an Open Access pdf, in part because of the good results with How Change Happens (so you all owe me….).

But jetlag was also poor preparation for the first half of this book, which, after a promising start, rapidly gets bogged down in some extraordinarily dense academese. I nearly gave up during the particularly impenetrable chapter 4: sample ‘We are defining capability relative to normative objectives. This is not a reprisal of the “functionalist” approach, in which an organization’s capability would be defined relative to the function it actually served in the overall system.’ Try reading that on two hours’ sleep.

Luckily I stuck with it, because the second half of the book is an excellent (and much more accessible) manual on how to do Problem Driven Iterative Adaptation – the approach to institutional reform that lies at the heart of the BSC programme.

Part II starts with an analogy that then runs through the rest of the book. Imagine you want to go from St Louis to Los Angeles. How would you plan your journey? In modern America, it’s easy – car, map, driver and away you go. Now imagine it is 1804: no roads, and the West has not even been fully explored. The task is the same (travel from East to West), but the plan would have to be totally different – parties of explorers going out seeking routes, periodic time outs to decide on the next stage, doing deals with Native American leaders along the way, and constantly needing to send back for more money and equipment. Welcome to institutional reform processes in the real world. The trouble is, say the authors, too many would-be reformers are applying 2015 approaches to the 1804 world – in lieu of a map, they grab some best practice from one country and try to ‘roll it out’ in another. Not surprisingly, it seldom works – many country political systems look more like 1804 than 2015.

The chapter that really got me excited was the one on the importance of problems. ‘Focussing relentlessly on solving a specific, attention-grabbing problem’ has numerous advantages over ‘best practice’, solution-driven cookie cutters:

  • Problems are often context specific and require you to pay sustained attention to real life, rather than toolkits
  • You can acknowledge the problem without pretending you have the solution – that comes through experimentation and will be different in each context
  • Exploring and winning recognition of the problem helps build the coalition of players you need to make change happen
  • Problems often become clear during a shock or critical juncture – just when windows of opportunity for change are likely to open up

The book offers great tips on how to dig into a problem and get to its most useful core – often people start off with a problem that is really just the absence of a solution (eg ‘we don’t have an anti-corruption commission’). The trick is to keep saying ‘why does this matter’ until you get to something specific that is a ‘real performance deficiency’. Then you can start to rally support for doing something about it.

The next stage is to break down the big problem into lots of small, more soluble ones. For each of these, the book recommends establishing the state of the ‘change space’ for reform, born of a set of factors they label the ‘triple A’: Authority (do the right people want things to change?), Acceptance (will those affected accept the reform?) and Ability (are the time, money and skills in place?). Where the 3 As are present, then the book recommends going for it, trying to get some quick wins to build momentum. Where they are not, then reformers face a long game to build the change space, before jumping into reform efforts.
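The triple-A decision rule above can be sketched as a toy function. The 0-to-1 scores and the 0.5 threshold are my own simplification for illustration; the book treats the three As qualitatively, not numerically.

```python
def change_space(authority, acceptance, ability):
    """Toy triple-A check for one small, broken-down problem.
    Each argument is a rough 0-to-1 judgement. Where all three As
    are present, go for quick wins; otherwise, name what must be
    built before jumping into reform efforts."""
    scores = {"authority": authority, "acceptance": acceptance, "ability": ability}
    missing = [name for name, score in scores.items() if score < 0.5]
    if not missing:
        return "go: pursue quick wins to build momentum"
    return "build change space first: " + ", ".join(missing)
```

The point of the sketch is only that the decision is made per sub-problem, and that a weak A changes the strategy rather than stopping the work.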

In all this what is special is that the advice and ideas are born of actually trying to do this stuff in dozens of countries. The authorial combination of Harvard and the World Bank means governments are regularly beating a path to their door, as are students (BSC runs a popular – and free – online learning course on PDIA).

Another attractive feature is the effort to avoid this becoming some kind of kumbaya, let a hundred flowers bloom justification for people doing anything they fancy. To give comfort to bosses and funders, they propose a ‘searchframe’ to replace the much-denounced logframe. This establishes a firm and rapid timetable of ‘iteration check-ins’ where progress is assessed and new ideas or tweaks to the existing ones are introduced.

Finally a chapter on ‘Managing your Authorizing Environment’ is a great effort at showing reformers how to do an internal power analysis within their organizations, and come up with an internal theory of change on how to build and maintain support for reforms.

That chapter got me thinking about the book’s relevance to INGOs. It is explicitly aimed elsewhere – at reforming state systems, but people in NGOs, who often work at a smaller scale than the big reform processes discussed in the book, could learn a lot, particularly from the chapters on problem definition and the authorizing environment. Oxfam has been going through a painful and drawn out process to integrate the work of 20 different Oxfam affiliates, known as ‘Oxfam 2020’. I wonder what would have happened if we had signed up the 3 PDIA kings to advise on how to run it?

This blog first appeared on the Oxfam blog

Active and adaptive planning versus set plans in PDIA

written by Matt Andrews

A colleague asked me two questions in response to last week’s blog on initiating PDIA:

  1. “It does not sound like you develop a thorough plan for action. Is this correct?”
  2. “How do you move from the workshop to action, and particularly to action learning?”

I will reflect on these questions in future blog posts, but today I will only address the first one.

It is probably easiest to say that we emphasize planning instead of plans when doing PDIA, where the former is about a process of engaging around a problem and the latter is simply about developing a documented step-by-step strategy.

We believe that planning allows for learning and interaction, by those who will do the work, and this is immensely useful.

We also see planning as an ongoing process that is active and adaptive and does not happen in one moment or manifest in one document.

That is not to say we do not start with a defined planning exercise. We do.

The planning process is initiated in what we call the initial Launchpad event, which I described in the last blog posting. It takes about a day, in which we (the in-country PDIA facilitators) work with a number of internal teams addressing festering problems in their governments.

The internal teams do the work at these events, and our PDIA folks just facilitate the process, taking the teams through aggressive sessions of constructing and deconstructing problems, identifying entry points for action, and actual action steps (as described in my prior blog and in the early sections of our new working paper on one of the teams in Sri Lanka).

This initial planning activity does generate a one page action agenda (or plan of sorts), which is intentionally short and simple. It includes the following information:

  • a description of the problem, and why the problem matters,
  • the causes of the problem,
  • what the problem might look like solved (and especially what this kind of result would look like in the time period we are working with—usually 6 months),
  • what the ‘indicative’ results might involve at the 4-month and 2-month periods (working backwards, we ask ‘where would you need to be to get to the 6-month “problem solved” result?’),
  • fully specified next steps (where the teams identify what they will do in the next two weeks and what they plan to do in the two weeks after that), and
  • what is assumed in terms of authorization of the next steps, acceptance of these next steps, and abilities to do the next steps (we want teams to specify their assumptions so we can track and learn where they are right and wrong and adapt accordingly in future steps).

This kind of content will be familiar to those who know about our Searchframe concept. This content is pretty much the basis of the Searchframe. We don’t often have teams build the Searchframe itself, but it is a heuristic that allows us to work with some type of ‘plan’ but one that is not overly prescriptive and limiting.
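As a sketch, the contents of that one-page agenda could be captured in a simple record like the following. The field names and sample values are illustrative only; the real agenda is a short written document, not a data structure.

```python
from dataclasses import dataclass

@dataclass
class ActionAgenda:
    """One-page action agenda produced at the initial Launchpad event."""
    problem: str                # the problem, and why it matters
    causes: list                # the deconstructed causes of the problem
    solved_looks_like: str      # the ~6-month 'problem solved' picture
    indicative_results: dict    # interim markers, e.g. keyed by month (2, 4)
    next_steps: list            # the next two weeks, and the two weeks after
    assumptions: dict           # authorization / acceptance / ability
```

A usage example might fill it in for a hypothetical team:

```python
agenda = ActionAgenda(
    problem="slow customs clearance is deterring exporters",
    causes=["manual paperwork", "unclear lines of authority"],
    solved_looks_like="median clearance time under two days",
    indicative_results={2: "process mapped", 4: "pilot fast lane running"},
    next_steps=["map the current process", "meet the customs director"],
    assumptions={"authorization": "the director supports a pilot"},
)
```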

This initial planning exercise is only the start of the work, however, and “you never end up where you start in PDIA.”

The exercise is the key to getting started, and its main goal is to empower a quick progression to action. Thus, the most important thing is the listing of key action steps to take next, and a date to come back and ask about how those action steps went, what was learned, and what will be next.

A ‘check-in’ event date is usually set about two or three weeks in the future, and we inform the teams that they will be involved in a ‘push period’ between events. (This is similar to a ‘sprint’ in agile processes, but we think the idea of ‘pushing yourself and your organization’ is more apt in the governments we work with, so we call it a ‘push period’). The two to three week push period length provides enough time for teams to act on their next steps but also creates a time boundary necessary to promote urgency and build momentum.

The team meets in-between the events, and works to take the steps they ‘planned’. Then they meet at the check in and answer a series of questions: What did you do? What did you learn? What are you struggling with? What is next?

The answers to these questions provide the basis of learning and adaptation, and allow the teams to adjust their assumptions, next steps, and even (sometimes) expectations. They do this iteratively every few weeks, often finding that their adaptations become smaller over time (as they learn more and engage more, they become more sure of what they are doing and more clear about how to do it).

As such, the initial planning exercise is not the main event, and the initial ‘plan’ is not the main document—or even the final document. The event is just the start of an iterative action-infused planning process, where a loose plan is constantly being adapted until one knows where to go.

From this description, hopefully it is clear that we do foster planning and even a plan, but in ways that are quite different to common approaches to doing development:

  • The work is done by the internal teams, not external partners (consultants or people in donor agencies). We as the external PDIA facilitators may nudge thinking in some directions during the process, but the work is not ours. This is because ownership is a real thing that comes by owners doing the thing they must own, and when outsiders take the work from the owners it undermines ownership.
  • The initial planning exercise and one page plan is not perfect, and is often not as infused with evidence and data as many development practitioners hope it to be. I am an academic and I believe in data and evidence, but in the PDIA process we find that government teams often do not have the data or evidence at hand to do rapid planning, or they do not have the capability to use such. Waiting on data to develop a perfect plan slows the process of progressing to action and getting to the learning by doing. We have thus learned that it is better to not create a huge ‘evidence hurdle’ at the initial stage. We find that, where evidence and data matters, the teams often steer themselves to action around data sourcing, analysis etc. beyond planning. This allows them to find and analyze evidence as part of the process, and build lessons from such into their process.

[A note here is important, given that some recent commentaries have placed ‘using evidence’ and even using ‘big data’ as central to the PDIA process and Doing Development Differently. I love data, and think that there is a huge role for using evidence and big data in development, but I do not see how either is a central part of PDIA. Hardcore evidence-based policymaking and big data analysis tend to depend on narrow groups of highly skilled insiders (or, more commonly, outsiders), the availability of significant amounts of data, and a fairly long process of analysis that yields—most commonly—a document as a result. These are not the hallmarks of effective PDIA, and I would caution against treating ‘access to evidence’ or ‘using Big Data’ as key signs that one is doing PDIA.]

I hope this post has been useful in helping explain our thoughts on plans versus planning. Next week I will describe how we do the iterations in a bit more detail. Remember to read our book on these topics (it is FREE through open access) and read the new working paper on PDIA in action in one of the teams in Sri Lanka. We will produce more of these active narratives soon.


Initiating PDIA: Start by running…and then run some more

written by Matt Andrews

“Once there is interest, how do you start a PDIA project?”

Many people have asked me this question. They are often in consulting firms or donor agencies thinking about working on PDIA with host governments, or in some central bureau in the government itself.

“We have an authorizer, know the itch that needs scratching (the problem), and have a team convened to address it,” they say. “But we don’t know what to do to get the work off the ground.”

I ask what they would think of doing, and they typically provide one of the following answers:

“We should do research on the problem (the itch)” or “We should hold a multi-day workshop where people get to analyze the problem and really get used to a problem-driven approach.”

I have tried starting PDIA with both strategies. Neither is effective in getting the process going.

  • When outsiders (donors, academics, or even central agencies responsible for making but not implementing policy) do the primary research on ‘the problem’, their product is usually a report that sits on shelves. If you start with such a product, it is hard to get people to change their learned behavior and actually use the report.
  • When you hold an elaborate workshop, using design thinking, fancy analysis, or the like, it is very easy to get stuck in performance—or in a fun and exciting new activity. We find people in governments do attend such events and have fun in them, but often get lost in the discussion or analysis and stay stuck in that place.

Having tried these and other strategies to initiate PDIA interventions, we at Harvard BSC have learned (by doing, reflection, and trying again…) some basic principles about what does not work in getting started, and what does work. Here are a few of these findings:

  • It does not work when outsiders analyze the problem on behalf of those who will act to solve it. It works when those in the insider PDIA teams construct and deconstruct the problem (whether they do this ‘right’ or ‘wrong’). The insiders must own the process, and the outsiders must ‘give the work back’ to the rightful owners.
  • It does not work to stage long introductory workshops to launch PDIA processes, as participants either get frustrated with the time away from work or distracted by the workshop itself. Either way they get stuck and the workshop does not mobilize their action. It works if you convene teams for short ‘launchpad-type events’ where they engage rapidly and move as rapidly to action (beyond talk). We are always anxious to move internal PDIA teams to action. The meetings are simply staging events: they are not what ‘doing PDIA’ is actually about.

Acting on these principles, we now always start PDIA running.

We bring internal teams together, and in a day (or at most a day and a half) we ‘launch’ through a series of sessions that (i) introduce them to the PDIA method, (ii) have them construct their problems, (iii) deconstruct those problems, (iv) identify entry points for action, and (v) specify three or more initial practical steps they can take to start addressing those entry points. At the end of the session they go away with their problem analysis and their next-step action commitments, as well as a date when they will again meet a facilitator to discuss their action, and learn by reflection.

This is a lot to get done in a short period. This is intentional, as we are trying to model upfront the importance of acting quickly to create the basis of progress and learning. We use time limits on every activity to establish this kind of pressure, and push all team members to ‘do something’, then ‘stop and reflect’, and then do the next thing.

When we get to the end of each Launchpad event, the internal teams have their own ‘next step’ strategies, and a clear view that the PDIA process has now started: they are already running, and acting, and engaging in a new and difficult space. And they know what they need to do next, and what date in the near future they will account for their progress, be asked about their learning, and pushed to identify more ‘next steps’.

When I tell interested parties in donor agencies, consulting firms, etc. about our ‘start by running’ approach, they have a number of common responses:

“It does not sound like anyone is doing a proper diagnosis of the problem: what happens if the team gets it wrong?”

“What happens if the team identifies next steps that make no sense?”

“This strategy could be a disaster if you have the wrong people in the room—who don’t know what they are doing or who have a biased view on what they are doing…”

These concerns are real, but they do not matter much in the PDIA process:

  • We don’t believe that initial problem diagnostics are commonly correct when one starts a program (no matter how smart the researchers doing the analysis).
  • We also don’t believe that you commonly identify the right ‘next steps’ from a study or a discussion.
  • And we also don’t believe that these kinds of processes are ever unbiased, or that you commonly get the right people in the room at the start of a process.

We don’t believe you address these concerns by doing great up-front research. Rather, we aim to get the teams into action as quickly as possible, where the action creates opportunity for reflection, and reflection informs constant experiential learning—about the problem, past and next steps, and who should be involved in the process. This learning resides in the actors involved in the doing, and prompts their adaptation, which leads to greater capability and constant improvement in how they see the problem, think of potential solutions, and engage others to make those solutions happen.

A final note:

When I discussed this strategy with a friend charged with ‘doing PDIA’ as part of a contract with a well-known bilateral donor, he lamented: “You are telling me the workshop is but a launching event for the real PDIA process of acting, reflecting, learning and adapting….but I was hired to do a workshop as if it was DOING PDIA. No one spoke of getting into action after the workshop.”

To this colleague—and the donors that hired him—I say simply, “PDIA is about getting people involved, and acting, and you always need to get to action fast. PDIA must start by running, and must keep teams running afterwards. Anything that happens one-off, or that promotes slow progress and limited repeated engagement is simply not PDIA.”

Learn more about initiating PDIA in practice in chapters 7 and 9 of our free book, Building State Capability: Evidence, Analysis, Action.