Active and adaptive planning versus set plans in PDIA

written by Matt Andrews

A colleague asked me two questions in response to last week’s blog on initiating PDIA:

  1. “It does not sound like you develop a thorough plan for action. Is this correct?”
  2. “How do you move from the workshop to action, and particularly to action learning?”

I will reflect on these questions in future blog posts, but today I will only address the first one.

It is probably easiest to say that we emphasize planning instead of plans when doing PDIA, where the former is about a process of engaging around a problem and the latter is simply about developing a documented step-by-step strategy.

We believe that planning allows for learning and interaction, by those who will do the work, and this is immensely useful.

We also see planning as an ongoing process that is active and adaptive, and that does not happen in one moment or manifest in one document.

That is not to say we do not start with a defined planning exercise. We do.

The planning process is initiated in what we call the initial Launchpad event, which I described in the last blog posting. It takes about a day, during which we (the in-country PDIA facilitators) work with a number of internal teams addressing festering problems in their governments.

The internal teams do the work at these events, and our PDIA folks just facilitate the process, taking the teams through aggressive sessions of constructing and deconstructing problems, identifying entry points for action, and actual action steps (as described in my prior blog and in the early sections of our new working paper on one of the teams in Sri Lanka).

This initial planning activity does generate a one page action agenda (or plan of sorts), which is intentionally short and simple. It includes the following information:

  • a description of the problem, and why the problem matters,
  • the causes of the problem,
  • what the problem might look like solved (and especially what this kind of result would look like in the time period we are working with—usually 6 months),
  • what the ‘indicative’ results might involve at the 4 month and 2 month periods (working backwards, we ask, ‘Where would you need to be to get to the 6 month problem-solved result?’),
  • fully specified next steps (where the teams identify what they will do in the next two weeks and what they plan to do in the two weeks after that), and
  • what is assumed in terms of authorization of the next steps, acceptance of these next steps, and abilities to do the next steps (we want teams to specify their assumptions so we can track and learn where they are right and wrong, and adapt accordingly in future steps).

This kind of content will be familiar to those who know about our Searchframe concept; it is pretty much the basis of the Searchframe. We don’t often have teams build the Searchframe itself, but it is a heuristic that allows us to work with some type of ‘plan’, albeit one that is not overly prescriptive and limiting.

This initial planning exercise is only the start of the work, however, and “you never end up where you start in PDIA.”

The exercise is the key to getting started, and its main goal is to empower a quick progression to action. Thus, the most important thing is the listing of key action steps to take next, and a date to come back and ask about how those action steps went, what was learned, and what will be next.

A ‘check-in’ event date is usually set about two or three weeks in the future, and we inform the teams that they will be involved in a ‘push period’ between events. (This is similar to a ‘sprint’ in agile processes, but we think the idea of ‘pushing yourself and your organization’ is more apt in the governments we work with, so we call it a ‘push period’.) The two to three week push period provides enough time for teams to act on their next steps but also creates the time boundary necessary to promote urgency and build momentum.

The team meets in-between the events, and works to take the steps they ‘planned’. Then they meet at the check-in and answer a series of questions: What did you do? What did you learn? What are you struggling with? What is next?

The answers to these questions provide the basis of learning and adaptation, and allow the teams to adjust their assumptions, next steps, and even (sometimes) expectations. They do this iteratively every few weeks, often finding that their adaptations become smaller over time (as they learn more and engage more, they become more sure of what they are doing and more clear about how to do it).

As such, the initial planning exercise is not the main event, and the initial ‘plan’ is not the main document—or even the final document. The event is just the start of an iterative action-infused planning process, where a loose plan is constantly being adapted until one knows where to go.

From this description, hopefully it is clear that we do foster planning and even a plan, but in ways that are quite different from common approaches to doing development:

  • The work is done by the internal teams, not external partners (consultants or people in donor agencies). We as the external PDIA facilitators may nudge thinking in some directions during the process, but the work is not ours. This is because ownership is a real thing that comes by owners doing the thing they must own, and when outsiders take the work from the owners it undermines ownership.
  • The initial planning exercise and one page plan are not perfect, and are often not as infused with evidence and data as many development practitioners would hope. I am an academic and I believe in data and evidence, but in the PDIA process we find that government teams often do not have the data or evidence at hand to do rapid planning, or do not have the capability to use them. Waiting on data to develop a perfect plan slows the progression to action and to learning by doing. We have thus learned that it is better not to create a huge ‘evidence hurdle’ at the initial stage. We find that, where evidence and data matter, the teams often steer themselves to action around data sourcing, analysis, and the like as the work moves beyond planning. This allows them to find and analyze evidence as part of the process, and to build the resulting lessons into it.

[A note here is important, given that some recent commentaries have placed ‘using evidence’ and even using ‘big data’ as central to the PDIA process and Doing Development Differently. I love data, and think that there is a huge role for using evidence and big data in development, but I do not see how either is a central part of PDIA. Hardcore evidence based policymaking and big data analysis tend to depend on narrow groups of highly skilled insiders (or more commonly outsiders), the availability of significant amounts of data, and a fairly long process of analysis that yields—most commonly—a document as a result. These are not the hallmarks of effective PDIA, and I would caution against making ‘access to evidence’ or ‘using Big Data’ a key sign that one is doing PDIA.]

I hope this post has been useful in helping explain our thoughts on plans versus planning. Next week I will describe how we do the iterations in a bit more detail. Remember to read our book on these topics (it is FREE through open access) and read the new working paper on PDIA in action in one of the teams in Sri Lanka. We will produce more of these active narratives soon.

 

Initiating PDIA: Start by running…and then run some more

written by Matt Andrews

“Once there is interest, how do you start a PDIA project?”

Many people have asked me this question. They are often in consulting firms or donor agencies thinking about working on PDIA with host governments, or in some central bureau in the government itself.

“We have an authorizer, know the itch that needs scratching (the problem), and have a team convened to address it,” they say. “But we don’t know what to do to get the work off the ground.”

I ask what they would think of doing, and they typically provide one of the following answers:

“We should do research on the problem (the itch)” or “We should hold a multi-day workshop where people get to analyze the problem and really get used to a problem driven approach.”

I have tried starting PDIA with both strategies. Neither is effective in getting the process going.

  • When outsiders (donors, academics, or even central agencies responsible for making but not implementing policy) do the primary research on ‘the problem’, their product is usually a report that sits on shelves. If you start with such a product it is hard to reorient people to change their learned behavior and actually use the report.
  • When you hold an elaborate workshop, using design thinking, fancy analysis, or the like, it is very easy to get stuck in performance—or in a fun and exciting new activity. We find people in governments do attend such events and have fun in them, but often get lost in the discussion or analysis and stay stuck in that place.

Having tried these and other strategies to initiate PDIA interventions, we at Harvard BSC have learned (by doing, reflection, and trying again…) some basic principles about what does not work in getting started, and what does work. Here are a few of these findings:

  • It does not work when outsiders analyze the problem on behalf of those who will act to solve it. It works when those in the insider PDIA teams construct and deconstruct the problem (whether they do this ‘right’ or ‘wrong’). The insiders must own the process, and the outsiders must ‘give the work back’ to the rightful owners.
  • It does not work to stage long introductory workshops to launch PDIA processes, as participants either get frustrated with the time away from work or distracted by the workshop itself. Either way they get stuck and the workshop does not mobilize their action. It works if you convene teams for short ‘launchpad-type events’ where they engage rapidly and move as rapidly to action (beyond talk). We are always anxious to move internal PDIA teams to action. The meetings are simply staging events: they are not what ‘doing PDIA’ is actually about.

Acting on these principles, we now always start PDIA running.

We bring internal teams together, and in a day (or at most a day and a half) we ‘launch’ through a series of sessions that (i) introduce them to the PDIA method, (ii) have them construct their problems, (iii) deconstruct those problems, (iv) identify entry points for action, and (v) specify three or more initial practical steps they can take to start addressing these entry points. At the end of the session they go away with their problem analysis and their next step action commitments, as well as a date when they will again meet a facilitator to discuss their action, and learn by reflection.

This is a lot to get done in a short period. This is intentional, as we are trying to model upfront the importance of acting quickly to create the basis of progress and learning. We use time limits on every activity to establish this kind of pressure, and push all team members to ‘do something’, then ‘stop and reflect’, and then do the next thing.

When we get to the end of each Launchpad event, the internal teams have their own ‘next step’ strategies, and a clear view that the PDIA process has now started: they are already running, and acting, and engaging in a new and difficult space. And they know what they need to do next, and on what date in the near future they will account for their progress, be asked about their learning, and be pushed to identify more ‘next steps’.

When I tell interested parties in donor agencies, consulting firms, etc. about our ‘start by running’ approach, they have a number of common responses:

“It does not sound like anyone is doing a proper diagnosis of the problem: what happens if the team gets it wrong?”

“What happens if the team identifies next steps that make no sense?”

“This strategy could be a disaster if you have the wrong people in the room—who don’t know what they are doing or who have a biased view on what they are doing…”

These concerns are real, but they really don’t matter much in the PDIA process:

  • We don’t believe that initial problem diagnostics are commonly correct when one starts a program (no matter how smart the researchers doing the analysis).
  • We also don’t believe that you commonly identify the right ‘next steps’ from a study or a discussion.
  • And we also don’t believe that these kinds of processes are ever unbiased, or that you commonly get the right people in the room at the start of a process.

We don’t believe you address these concerns by doing great up front research. Rather, we aim to get the teams into action as quickly as possible, where the action creates opportunity for reflection, and reflection informs constant experiential learning—about the problem, past and next steps, and who should be involved in the process. This learning resides in the actors involved in the doing, and prompts their adaptation, which leads to greater capability and constant improvement in how they see the problem, think of potential solutions, and engage others to make these solutions happen.

A final note:

When I discussed this strategy with a friend charged with ‘doing PDIA’ as part of a contract with a well-known bilateral donor, he lamented: “You are telling me the workshop is but a launching event for the real PDIA process of acting, reflecting, learning and adapting….but I was hired to do a workshop as if it was DOING PDIA. No one spoke of getting into action after the workshop.”

To this colleague—and the donors that hired him—I say simply, “PDIA is about getting people involved, and acting, and you always need to get to action fast. PDIA must start by running, and must keep teams running afterwards. Anything that happens one-off, or that promotes slow progress and limited repeated engagement is simply not PDIA.”

Learn more about initiating PDIA in practice in chapters 7 and 9 of our free book, Building State Capability: Evidence, Analysis, Action.

PDIA and Authorizers with an itch

written by Matt Andrews

“How do you decide where to work on a PDIA project?” This is probably the most common question I have been asked with respect to PDIA.

After over 5 years of doing this work in a variety of countries and sectors, I have a simple answer: “When we find authorizers with an itch.”

“That sounds bizarre,” I hear you say. Or maybe you think I’m just being cute to fit in with a playful blogging technique.

No, authorizers with an itch are key to starting any PDIA initiative.

When I say we need counterparts with an itch, I mean that they are very aware of a problem they can’t solve. Like an itch you can’t scratch, or that you scratch again and again but to no avail. This is usually a policy problem that has come to the surface one too many times, usually where various prior reforms or policies or interventions have not provided effective solutions.

Stubborn itches create frustration and even desperation, which can create the space for doing things differently—and taking risks. PDIA needs this kind of space, and this motivating influence. Without it, we have found very little room to focus on the problem, and learn-by-doing towards a new solution.

There are downsides to working to scratch a stubborn itch. The fact that others have tried scratching it, to no avail, means that it is usually going to be ‘wicked hard’ to solve (so don’t expect an easy path to a solution). The fact that it seems to move around (sometimes itching here and sometimes there) reflects the many unseen and even dynamic factors that cause the itch itself (like nasty politics or bureaucratic dysfunction). Don’t expect these factors to go away just because you are tackling the problem with PDIA. You will hit the nastiness soon. Be ready.

When I say we need ‘authorizers’ to start, it is because the PDIA work we do is always in the public domain, where no real work (with action attached) is done without someone’s explicit authorization. The required authorizer is always, in my experience, someone inside the context undergoing change. This means the work cannot be ordered or organized or identified by an external agent (donor, consultant, or even academic).

My team at Harvard found this out the hard way. As you will read in a forthcoming working paper by Stuart Russell, Peter Harrington, and me, we have experimented with PDIA initiatives where problems are identified in different ways. We have had limited success whenever anyone from Harvard or an external entity (like a donor) has been a main identifier of the problem. In contrast, we have almost always had some success when the problem was identified by a domestic authorizer in the place undergoing change.

This is simply because the internal authorizer needs to have internal authority: at the least, to convene a group of internal people to start engaging with the problem, and beyond this to protect the PDIA process from threats and distractions. No external party can do this.

Beyond convening authority, we find that the authorizers need to provide three types of authorization: shareable authorization (where they allow the engagement of other authorizers in the process of scratching the itch), flexible authorization (which allows for an experimental process), and patient (or grit) authorization (where one can expect some continued support in the search for an effective ‘scratch’ solution).

These are big authorization needs, and one does not know if they will be met at the start of the PDIA process. But they tend to come when authorizers face an itch (making them willing to share, adaptive in demands, and patient for a real solution).

We find, therefore, that there is enough space to initiate a PDIA initiative if we find an authorizer with an itch she cannot scratch. That’s where we start our work, buckling our seat belts and getting ready for a journey of, and to, the unexpected.

Are you in a situation where an authorizer is facing a stubborn itch? Maybe you have space to ask, “What’s the problem…and can we mobilize a team to try something different to solve it?”

Learn more about engaging authorizers around problems that matter in chapters 6 and 9 of our free book, Building State Capability: Evidence, Analysis, Action.

 

PDIA Notes 2: Learning to Learn

written by Peter Harrington

After over two years of working with the government of Albania, and as we embark on a new project to work with the government of Sri Lanka, we at the Building State Capability program (BSC) have been learning a lot about doing PDIA in practice.

Lessons have been big and small, practical and theoretical – an emerging body of observations and experience that is constantly informing our work. Amongst other things, we are finding that teams are proving an effective vehicle for tackling problems. We have found that a lot of structure and regular, tight loops of iteration are helping teams reflect and learn. We have found that it is vital to engage with several levels of the bureaucracy at the same time to ensure a stable authorising space for new things to happen. This all amounts to a sort of ‘thick engagement’, where little-and-often type interaction, woven in at many levels, bears more fruit than big set-piece interventions.

Each of these lessons is deserving of deeper exploration in its own right, and we will explore them in subsequent posts. For now, I want to draw out some reflections about the real goal of our work, and our theory of change.

In the capacity-building arena, the latest wisdom holds that the best learning comes from doing. We think this is right. Capacity building models that rely purely on workshop or classroom settings and interactions are less effective in creating new know-how than interventions that work alongside officials on real projects, allowing them to learn by working on the job. Many organisations working in the development space now explicitly incorporate this into their methodology, and in so doing promise to ensure delivery of something important alongside the capacity building (think of external organizations that offer assistance in delivery, often by placing advisers into government departments, and promise to ensure a certain goal is achieved and the government capacity to deliver is also enhanced).

It sounds like a win-win (building capabilities while achieving delivery). The problem is that, in practice, when the implementers in the governments inevitably wobble, or get distracted, or are pulled off the project by an unsupportive boss (or whatever else happens to undermine the process, as has probably happened many times before), the external advisors end up intervening to get the thing done, because that’s what was promised, what the funder often cares more about, and what is measurable.

When that happens, the learning stops. The very idea of learning by doing stops too, because the rescue work by external actors signalled that learning by doing—and failing, at least partially, in the process—was at best a secondary objective (and maybe not even a serious one). Think about anything you have ever learned in your life – whether at school or as an adult. If you knew someone was standing by to catch a dropped ball, or was in practice doing most of the legwork, would you have really learned anything? For the institutions where we work, although the deliverable may have been delivered, when the engagement expires nothing will have changed in the way the institution works in the long run. This applies equally, by the way, to any institution or learning process, anywhere in the world.

The riddle here is this: what really makes things change and improve in an institution, such that delivery is enhanced and capability to deliver is strengthened? The answer is complex, but it boils down to people in the context doing things differently – being empowered to find out what different is and actually pursue it themselves.

In pursuing this answer, we regularly deploy the concept of ‘positive deviance’ in our work: successful behaviors or strategies that enable people to find better solutions to a problem than their peers, despite facing similar challenges and having no more resources or knowledge than they do. Such people are everywhere, sometimes succeeding and, depending on the conditions, sometimes failing to change the way things work – either through their own force of will, or by modelling something different. Methods to find and empower positive deviants within a community have existed for many years. But what if, by cultivating a habit of self-directed work and problem solving, it was possible not just to discover positive deviants but to create new ones?

Doing things differently stems from thinking differently, and you only think differently when you learn – it’s more or less the definition of learning. Looked at this way, learning becomes the sine qua non of institutional change. It may not be sufficient on its own – structures, systems and processes still matter – but without a change in paradigm among a critical mass of deviants, those other things (which are the stuff of more traditional interventions) will always teeter on the brink of isomorphism.

We believe that positive deviance comes from learning, especially learning in a self-directed way, and learning about things that matter to the people doing them. If you can catalyse this kind of learning in individuals, you create a different kind of agency for change. If you can go beyond this and catalyse this kind of learning in groups of individuals within an institution or set of institutions, and create a sufficiently strong holding space for their positive deviance to fertilise and affect others, then gradually whole systems can change. In fact, I’d be surprised if there’s any other way that it happens. As Margaret Mead put it, “Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.”

This is our theory of change. The methods we use – particularly the structured 6-month intensive learning and action workshop we call Launchpad – are trying above all to accelerate this learning by creating a safe space in which to experiment, teach ideas and methods that disrupt the status quo, and create new team dynamics and work habits among bureaucrats. By working with senior and political leaders at the same time, we are trying to foster different management habits, to help prevent positive deviance being stamped out. In doing all this, the goal is to cultivate individuals, teams, departments and ultimately institutions that have a habit of learning – which is what equips them to adapt and solve their own problems.

This does not mean that the model is necessarily better at achieving project delivery than other methods out there, although so far it has been effective at that too. The difference is that we are willing to let individuals or even teams fail to deliver, because that failure is critical for the learning, and without learning there is no change in the long term. Doing this is sometimes frustrating and costly, and certainly requires us to grit our teeth and not intervene, but what we see so often is agents and groups of agents working their way out of tricky situations with better ideas and performance than when they went in. They emerge more empowered and more capable of providing the agency needed for their countries’ development. This is the goal, and it can be achieved.

 

 

PDIA: It doesn’t matter what you call it, it matters that you do it

written by Matt Andrews

It is nearly two years since we at the Building State Capability (BSC) program combined with various other groups across the developing world to create an umbrella movement called Doing Development Differently (DDD). The new acronym was meant to provide a convening body for all those entities and people trying to use new methods to achieve development’s goals. We were part of this group with our own approach, which many know as Problem Driven Iterative Adaptation (PDIA). 

Interestingly, a few of the DDD folks thought we should change this acronym and call PDIA something fresher, cooler, and more interesting; it was too clunky, they said, to ever really catch on, and needed to be called something like ‘agile’ or ‘lean’ (to borrow from approaches we see as influential cousins in the private domain).

The DDD movement has grown quite a bit in the last few years, with many donor organizations embracing the acronym in their work, and some even advocating for doing PDIA in their projects and interventions. A number of aid organizations and development consultancies have developed other, fresher terms to represent their approaches to DDD as well; the most common word we see now is ‘adaptive’, with various organizations creating ‘adaptive’ units or drawing up processes for doing ‘adaptive’ work.

‘Adaptive programming’ certainly rolls off the tongue easier than ‘Problem Driven Iterative Adaptation’!

Some have asked me why we don’t rename our approach Adaptive as well; others have asked where we have been while all the discussions about names and titles and acronyms have been going on, and while organizations in the aid world have been developing proposals for adaptive projects and the like (some of which have now turned into large tenders for consulting opportunities). My answer is simple: I’ve made peace with the fact that we are much more interested in trying to work out how to do this work in the places it is needed the most (in implementing entities within governments that struggle to get stuff done).

So, we have been working out how to do our PDIA work (where the acronym really reflects what we believe—that complex issues in development can only be addressed through problem driven, iterative, and adaptive processes). Our observation, from taking an action research approach to over twenty policy and reform engagements, a light-touch teaching intervention with over 40 government officials, an online teaching program, and more, is clear: the people we work with (and who actually do the work) in governments don’t really care about the catchy name or acronym, or whether PDIA is clunky or novel or old and mainstream. They are simply interested in finding help: to empower their organizations by building real capability through the successful achievement of results.

We thus seldom even talk about PDIA, or adaptive programming, or DDD, or agile or lean, or whatever else we discuss in our classrooms and seminar venues back at Harvard (and in many of our blog posts and tweets). Indeed, we find that a key dimension of this work—what makes it work—is not being flashy and cool and cutting edge. It’s about being real and applied, and organic, and relational. And actually quite nondescript and mundane; even boringly focused on helping people do the everyday things that have eluded them.

So, PDIA’s lack of ‘flash’ and ‘coolness’ may be its greatest asset (and one we never knew about), because it does not matter what PDIA is called…what matters is whether it is being done.

PDIA Notes 1: How we have PDIA’d PDIA in the last five years

written by Matt Andrews, co-Founding Faculty of the Building State Capability Program

We at the Building State Capability (BSC) program have been working on PDIA experiments for five years now. These experiments have been designed to help us learn how to facilitate problem driven, iterative and adaptive work. We have learned a lot from them, and will be sharing our lessons—some happy, some frustrating, some still so nuanced and ambiguous that we need more learning, and some clear— through a series of blog posts.

Before we share, however, I wanted to clarify some basic information about who we are and what we do, and especially what our work involves. Let me do this by describing what our experiments look like, starting with listing the characteristics that each experiment shares:

  • We have used the PDIA principles in all cases (engaging local authorizers to nominate their own problems for attention, and their own teams, and then working on solving the problems through tight iterations and with lots of feedback).
  • We work with and through teams of individuals who reside in the context and who are responsible for addressing the problems being targeted. These people are the ones who do the hard work, and who do the learning, and who get the credit for whatever comes out of the process.
  • We work with government teams only, given our focus on building capable states. (We do not believe that one can always replace failed or failing administrative and political bodies with private or nonprofit contractors or operators. Rather, one should address the cause of failure and build capability where it does not exist.)
  • We believe in building capability through experiential learning and the failure and success it brings (choosing to institutionalize solutions only after lessons have been learned about what works and why, instead of institutionalizing solutions that imply ex ante knowledge of what works in places where such knowledge does not exist).
  • We work with real problems and focus on real results (defined as ‘problem solved’, not ‘solution introduced’) in order to focus the work and motivate the process (for authorizers and for teams involved in doing the work).
  • We—the BSC team affiliated with Harvard—see ourselves as external facilitators of a process, and do not do the substantive work of delivery—even if the results look like they won’t come. Our primary focus is on fostering learning and coaching teams to do things differently and more effectively; we have seen too many external consultants rescue a delivery failure and, in doing so, undermine local ownership of the process and the emphasis on building the local capability to succeed.

This set of principles has underpinned our experimental work in a variety of countries and sectors, where governments have been struggling to get things done. We have worked in places like Mozambique, South Africa, Liberia, Albania, Jamaica, Oman, and now Sri Lanka. We have worked with teams focused on justice reform, health reform, agriculture policy, industrial policy, export promotion, investor engagement, low-income housing, tourism promotion, municipal management, oil and electricity sector issues, and much more.

These engagements have taken different shapes—as we vary approaches to learn internally about how to do this kind of work most effectively, and how to adapt mechanisms to different contexts and opportunities:

  • In some instances, we have been the direct conveners of teams of individuals, whereas we have relied on authorizers in countries to act as conveners in other contexts, and in some interactions we have worked with individuals only—and relied on these individuals to act as conveners in their own contexts.
  • Some of our work has involved extremely regular and tangible interaction from our side—with our facilitators engaging at least every two or three weeks with teams—and other work has seen a much less regular, more light touch interaction (not meeting every two weeks, engaging only by phone every two weeks, or structuring interactions between peers involved in the work rather than having ourselves as the touch point).
  • We have used classroom structures in some engagements, where teams are convened in a neutral space and work as if in a classroom setting for key points of the process (the initial framing of the work and meetings at major milestones every six weeks or so), but in other contexts we work strictly in the environments of the teams, and in a more ‘workplace-driven’ structure. In other instances, we have relied almost completely on remote correspondence (through online course engagements, for instance).

There are other variations in the experiments, all intended to help us learn from experience about what works and why. The experiments have yielded many lessons, and humbled us as well: some have become multi-year interactions where we see people being empowered to do things differently, while others have not even gotten out of the starting blocks. Both experiences humble us, for different reasons.

This work is truly the most exciting and time consuming thing I have ever done, but it is also—I feel deeply—the most important work I could be doing in development. It has made my sense of what we need in development clearer and clearer. I hope you also benefit in this way as we share our experiences in coming blog posts.

 

The “PDIA: Notes from the Real World” blog series

written by Salimah Samji

We are delighted to announce our new PDIA: Notes from the Real World blog series. In this series we will share lessons from our PDIA experiments over the past five years, on how to facilitate problem driven, iterative and adaptive work. We will also feature some guest blog posts from others who are experimenting and learning from PDIA. We hope you will join us on this learning adventure!

Read the first blog post written by Matt Andrews here.