Examining public administration and budget in Peru

Guest blog by Nohelia Navarrete Flores

This is a blog series written by the alumni of the Implementing Public Policy Executive Education Program at the Harvard Kennedy School. Participants successfully completed this 6-month online learning course in December 2020. These are their learning journey stories.

There is a phrase that reads "A chain is only as strong as its weakest link". After 10 years of managing public health projects, I realized that it was more than just a phrase; it was a fact, and I started reflecting on how to make that weak link stronger.
I first thought of joining the Implementing Public Policy course to complement what I had learnt in Leading Economic Growth. I was looking forward to experiencing a longer and adaptive process to help me develop the policy challenge I had identified in the previous course.


Initiating action: The action-learning in PDIA

written by Matt Andrews

I recently wrote a blog in response to a question I was asked by a colleague about how we move from the foundation or framing workshop in PDIA processes—where problems are constructed and deconstructed—into action, and beyond that, action learning. In this post I will offer some ideas on how we do that.

First, we push teams to action quickly: We ensure that the teams working in the framing workshops can identify clear next steps to act upon in the days that follow the workshop. These next steps need to be clear and actionable, and teams need to know that action is expected.

Second, we don’t aim for ‘perfect next steps’—just action to foster learning and engagement: The steps teams identify to start with often seem small and mundane, but our experience indicates that small and mundane steps are the way in which big and surprising products emerge. This is especially the case when each ‘next step’ yields learning (with new information, and experiential lessons) and expands engagement (with new agents, ideas, and more). This is because the problems being addressed are either complicated or complex, and are addressed by expanding engagement and reach (which fosters the coordination needed to confront complicated problems, and the interaction vital to tame complexity) and leads to learning (which is crucial in the face of the uncertainty and unknowns that typify complex problems).

Third, we create time-bound ‘push periods’ for the next step action assignments: After the framing workshop, the PDIA process involves a set of action iterations where teams go away and take the ‘next step’ actions they identify, agreeing to meet again at a set date and time to ‘check-in’ on progress. Each iteration is called a ‘push period’, in which team members push themselves and others to take action and make progress they otherwise would not.

Fourth, we convene teams for ‘check-ins’ after their push periods, and ask questions: The team reassembles after the push period, with PDIA facilitators, at the ‘check-in’ date—and reflects on four questions: ‘What was done? What was learned? What is next? What are your concerns?’ Note that the questions start by probing basic facts of action (partly to emphasize accountability for action, and also to start the reflection period off with a simple report—a basic discussion to precede deeper reflection, which often needs some context). We then ask about ‘what was learned’, where we focus on procedural and substantive lessons (about all their experiences—whether frustrating or inspiring), and learning about the context.

Fifth, facilitating learning requires nudging and pushing: We find that you often need to push participants to ask deep questions about their lessons.

  • For instance, someone may say “we tried to get Mr X to work with us, and he did not respond positively, so we learned that he does not want to work with us.”
  • We would follow up by asking, “why do you think Mr X did not respond?”
  • Often this leads to a new set of questions or observations about contexts in which work is being done (including, very importantly, the politics of engagement). In the example, for instance, the ‘why’ question raised discussion about how people engage in the government (and if the team reached out to Mr X in the right manner) and the politics of the context (the interests of Mr X and how these might be playing into his non-response).

This process facilitates learning by the teams and by our PDIA facilitators. Both the teams and our facilitators produce written documents (short, but written) about what was learned. Over time, we can keep coming back to these lessons to ensure everyone gains a better understanding of procedural, substantive, and context issues.

As a note: People often ask where we address ‘politics’ in PDIA. That requires another blog post, but hopefully you see, in the description here, the basic process of what we call Political Economy Engagement (PEE), which we prefer to Political Economy Analysis (PEA). The action steps in PDIA always involve pushes into—or engagements with—the context, and yield responses that allow one to learn about politics (who stands where, who has power, how it is exercised, etc.).

Finally, we push teams to the next steps quickly, again—which is where they ‘adapt’: You will notice that the last two questions we ask are about next steps and issues to address in taking these steps. We do not let teams get bogged down by tough lessons, but push them to think about what they can do next, adapting according to the lessons they have learned; we focus on what is important and what is possible ‘next’, given what has been learned; and we try to build and maintain momentum, given the belief that capability and results emerge after accumulated learning and engagement from multiple push periods.

In conclusion, when considered as one full iteration, the blend of programmed action with check-in questions and reflection is intended to foster action learning and promote adaptation and progress in solving the nominated problems. The combination of learning while producing results (through solving problems) is key to building new capability.

 


Some linkages to theory, literature and management practice

  1. Why we focus on learning and engagement in this process: In keeping with complexity theory, the principal idea is that action leading to novel learning, engagement, and interaction fosters emergence, which is the key to finding and fitting solutions to complex problems. Further, in keeping with theory, the idea here is that any action can foster learning, and it is thus more important to get a team to act in small ways quickly than to hold them away from action until they can identify a big enough (or important enough) next step.
  2. Why we refer to ‘push periods’: The Scrum version of agile project management has similar time-bound iterations, called Sprints, which are described as ‘time-boxed’ efforts. We refer to ‘push periods’ instead of sprints, partly to reflect the real challenges of doing this in governments (where CID focuses its PDIA work). Team members are pushing themselves to go beyond themselves in these exercises, and the name recognizes this.
  3. How we draw on action learning research, and our past experiments: Our approach builds on PDIA experience in places like Mozambique, Albania and South Africa, which has attempted to operationalize action learning ideas of Reg Revans (1980) and recent studies by Marquardt et al. (2009). These combined efforts identify learning as the product of programmed learning (which everyone is probably familiar with, and is often provided through organized training), questioning, and reflection (L=P+Q+R), which the PDIA process attempts to foster in the structure of each iteration (with action to foster experience, a check-in with simple questions about such experience, and an opportunity for reflection—facilitated by an external ‘coach’ figure). The questions asked in the PDIA check-in are much more abbreviated than those suggested by Revans and others, largely because experience with this work in busy governments suggests that there are major limits to the time and patience of officials, and asking more questions can be counter-productive (and lead to non-participation in the reflection process). The questions posed to teams are thus used to open opportunities for additional questions: like ‘who needed to be engaged and was not?’ or ‘why did you not do what you said you would?’ or ‘what is the main obstacle facing your team now?’ As the team progresses through iterations, they start to ask these more specified questions themselves, and come into the check-in reflection session with such questions in their own minds.

If you are interested in reading the Sri Lanka working paper, you can find the full version here

Active and adaptive planning versus set plans in PDIA

written by Matt Andrews

A colleague asked me two questions in response to last week’s blog on initiating PDIA:

  1. “It does not sound like you develop a thorough plan for action. Is this correct?”
  2. “How do you move from the workshop to action, and particularly to action learning?”

I will reflect on these questions in future blog posts, but today I will only address the first one.

It is probably easiest to say that we emphasize planning instead of plans when doing PDIA, where the former is about a process of engaging around a problem and the latter is simply about developing a documented step-by-step strategy.

We believe that planning allows for learning and interaction, by those who will do the work, and this is immensely useful.

We also see planning as an ongoing process that is active and adaptive, and does not happen in one moment or manifest in one document.

That is not to say we do not start with a defined planning exercise. We do.

The planning process is initiated in what we call the initial Launchpad event, which I described in the last blog posting. It takes about a day, during which we (the in-country PDIA facilitators) work with a number of internal teams addressing festering problems in their governments.

The internal teams do the work at these events, and our PDIA folks just facilitate the process, taking the teams through aggressive sessions of constructing and deconstructing problems, identifying entry points for action, and actual action steps (as described in my prior blog and in the early sections of our new working paper on one of the teams in Sri Lanka).

This initial planning activity does generate a one page action agenda (or plan of sorts), which is intentionally short and simple. It includes the following information:

  • a description of the problem, and why the problem matters,
  • the causes of the problem,
  • what the problem might look like solved (and especially what this kind of result would look like in the time period we are working with—usually 6 months),
  • what the ‘indicative’ results might involve at the 4 month and 2 month periods (working backwards, we ask, ‘where would you need to be to get to the 6 month “problem solved” result?’),
  • fully specified next steps (where the teams identify what they will do in the next two weeks and what they plan to do in the two weeks after that), and
  • what is assumed in terms of authorization of the next steps, acceptance of these next steps, and abilities to do the next steps (we want teams to specify their assumptions so we can track and learn where they are right and wrong, and adapt accordingly in future steps).

This kind of content will be familiar to those who know about our Searchframe concept. This content is pretty much the basis of the Searchframe. We don’t often have teams build the Searchframe itself, but it is a heuristic that allows us to work with some type of ‘plan’ but one that is not overly prescriptive and limiting.

This initial planning exercise is only the start of the work, however, and “you never end up where you start in PDIA.”

The exercise is the key to getting started, and its main goal is to empower a quick progression to action. Thus, the most important thing is the listing of key action steps to take next, and a date to come back and ask about how those action steps went, what was learned, and what will be next.

A ‘check-in’ event date is usually set about two or three weeks in the future, and we inform the teams that they will be involved in a ‘push period’ between events. (This is similar to a ‘sprint’ in agile processes, but we think the idea of ‘pushing yourself and your organization’ is more apt in the governments we work with, so we call it a ‘push period’). The two to three week push period length provides enough time for teams to act on their next steps but also creates a time boundary necessary to promote urgency and build momentum.

The team meets in-between the events, and works to take the steps they ‘planned’. Then they meet at the check in and answer a series of questions: What did you do? What did you learn? What are you struggling with? What is next?

The answers to these questions provide the basis of learning and adaptation, and allow the teams to adjust their assumptions, next steps, and even (sometimes) expectations. They do this iteratively every few weeks, often finding that their adaptations become smaller over time (as they learn more and engage more, they become more sure of what they are doing and more clear about how to do it).

As such, the initial planning exercise is not the main event, and the initial ‘plan’ is not the main document—or even the final document. The event is just the start of an iterative action-infused planning process, where a loose plan is constantly being adapted until one knows where to go.

From this description, hopefully it is clear that we do foster planning and even a plan, but in ways that are quite different to common approaches to doing development:

  • The work is done by the internal teams, not external partners (consultants or people in donor agencies). We as the external PDIA facilitators may nudge thinking in some directions during the process, but the work is not ours. This is because ownership comes from owners doing the thing they must own, and when outsiders take the work from the owners it undermines ownership.
  • The initial planning exercise and one page plan is not perfect, and is often not as infused with evidence and data as many development practitioners hope it to be. I am an academic and I believe in data and evidence, but in the PDIA process we find that government teams often do not have the data or evidence at hand to do rapid planning, or they do not have the capability to use them. Waiting on data to develop a perfect plan slows the process of progressing to action and getting to the learning by doing. We have thus learned that it is better not to create a huge ‘evidence hurdle’ at the initial stage. We find that, where evidence and data matter, the teams often steer themselves to action around data sourcing, analysis, etc. beyond planning. This allows them to find and analyze evidence as part of the process, and build lessons from such into their process.

[A note here is important, given that some recent commentaries have placed ‘using evidence’ and even using ‘big data’ as central to the PDIA process and Doing Development Differently. I love data, and think that there is a huge role for using evidence and big data in development, but I do not see how it is a central part of PDIA. Hardcore evidence-based policymaking and big data analysis tends to depend on narrow groups of highly skilled insiders (or more commonly outsiders), the availability of significant amounts of data, and a fairly long process of analysis that yields—most commonly—a document as a result. These are not the hallmarks of effective PDIA, and I would caution against treating ‘access to evidence’ or ‘using Big Data’ as key signs that one is doing PDIA].

I hope this post has been useful in helping explain our thoughts on plans versus planning. Next week I will describe how we do the iterations in a bit more detail. Remember to read our book on these topics (it is FREE through open access) and read the new working paper on PDIA in action in one of the teams in Sri Lanka. We will produce more of these active narratives soon.

 

The “PDIA: Notes from the Real World” blog series

written by Salimah Samji

We are delighted to announce our new PDIA: Notes from the Real World blog series. In this series we will share lessons from our PDIA experiments over the past five years, on how to facilitate problem driven, iterative and adaptive work. We will also feature some guest blog posts from others who are experimenting and learning from PDIA. We hope you will join us on this learning adventure!

Read the first blog post written by Matt Andrews here.

SearchFrames for Adaptive Work (More Logical than Logframes)

written by Matt Andrews

Although the benefits of experimental iteration in a PDIA process seem very apparent to most people we work with, we often hear that many development organizations make it difficult for staff to pursue such approaches, given the rigidity of logframe and other linear planning methods. We often hear that funding organizations demand the structured, perceived certainty of a logframe-type device and will not allow projects to be too adaptive.

In response to this concern, we propose a new logframe-type mechanism that embeds experimental iteration into a structured approach to make policy or reform decisions in the face of complex challenges. Called the SearchFrame, it is shown in the Figure below (and discussed in the following working paper, which also offers ideas on using the tool).

SearchFrame

The SearchFrame facilitates a transition from problem analysis (core to PDIA) into a structured process of finding and fitting solutions (read more about ‘Doing Problem Driven Work’). An aspirational goal is included as the end point of the intervention, where one would record details of ‘what the problem looks like solved’. Beyond this, key intervening focal points are also included, based on the deconstruction and sequencing analyses of the problem. These focal points reflect what the reform or policy intervention aims to achieve at different points along the path towards solving the overall problem. More detail will be provided for the early focal points, given that we know with some certainty what we need and how we expect to get there. These are the focal points driving the action steps in early iterations, and they need to be set in a defined and meaningful manner (as they shape accountability for action). The other focal points (2 and 3 in the figure) will reflect what we assume or expect or hope will follow. These will not be rigid, given that there are many more underlying assumptions, but they will provide a directionality in the policymaking and reform process that gives funders and authorizers a clear view of the intentional direction of the work.

The SearchFrame does not specify every action step that will be taken, as a typical logframe would. Instead, it schedules a prospective number of iterations between focal points (which one could also relate to a certain period of time). Funders and authorizers are thus informed that the work will involve a minimum number of iterations in a specific period. Only the first iteration is detailed, with specific action steps and a specific check-in date.

Funders and authorizers will be told to expect reports on all of these check-in dates, which will detail what was achieved and learned and what will be happening in the next iteration (given the SearchFrame reflections shown in the figure). Part of the learning will be about the problem analysis and assumptions underpinning the nature of each focal point and the timing of the initiative. These lessons will feed into proposals to adjust the SearchFrame, which will be provided to funders and authorizers after every iteration. This fosters joint learning about the realities of doing change, and constant adaptation of assumptions and expectations.

Readers should note that this reflection, learning and adaptation make the SearchFrame a dynamic tool. It is not something to use in the project proposal and then to revisit during the evaluation. It is a tool to use on the journey, as one makes the map from origin to destination. It allows structured reflections on that journey, and report-backs, where all involved get to grow their know-how as they progress, and turn the unknowns into knowns.

We believe this kind of tool fosters a structured iterative process that is both well suited to addressing complex problems and able to meet the structural needs of formal project processes. As presented, it is extremely information and learning intensive, requiring constant feedback as well as mechanisms to digest feedback and foster adaptation on the basis of such. This is partly because we believe that active discourse and engagement are vital in complex change processes, and must therefore be facilitated through the iterations.

 

Helping REAL Capacity Emerge in Rwanda using PDIA

written by Matt Andrews

What do you do if your government has been pursuing reforms for years, with apparent success, but your economy is still not growing? What do you do if the constraint seems to be the limited capacity of government organizations? What do you do if this capacity remains stubbornly low even after years of public sector reforms sponsored by outside partners and based on promising best practice ideas of fly-in-fly-out specialists?

A recent case study of work supported by the Africa Governance Initiative suggests an approach to just such a situation, faced by Rwanda in 2010. The approach is simple.

  1. Force your own people to look at festering problems in an up-close-and-personal manner, focusing on ‘why’ the problems persist instead of ‘what’ easy solutions outsiders might propose for the problems.
  2. Swarm the problems with new ideas emerging from those inside your government and from trusted outsiders committed to spending time adapting and translating their ideas to your context (instead of one-size-fits-all solutions coming from short-stay outsiders).
  3. Experiment in a step-by-step manner, actively, with the ideas, trying them out and seeing what works, why and with what kinds of nips and tucks.
  4. Learn. Yes, simply learn. Let people reflect on what they have done and absorb what made the difference and what they take from the experiences to carry to other tasks.

This is what I understand the Strategic Capacity Building Initiative (SCBI) was and is about. It is an approach to doing development that seems to have yielded some dynamic results in a relatively short period. The results are substantive, procedural, organizational, and personal. Farmer incomes are up after some of the experiments in the ‘pilot sites initiative’, for instance. The Energy Investment Unit has emerged as the focal point of a new process to increase energy generation and drive energy costs down. Perhaps more important, to me at least, is the fact that talented civil servants have done things they probably never dreamed they could. In my own language, they have become more adaptive—realizing that you can make a difference if you purposefully address real problems you face in an active, experiential and iterative manner. These young policy entrepreneurs and implementers will be in Rwanda for years to come, and hopefully long after SCBI ceases to exist as a program. They are the real success and legacy of the program.

I find this story line appealing. It tells of an approach to development that reflects the principles of problem driven iterative adaptation (PDIA). This SCBI approach is full of common sense but is also oddly revolutionary because it is such a contrast to the way development is commonly done (and, it seems, was done in some of these areas in Rwanda previously). The case shows that a locally problem driven, adaptive process works in complex developing country reform contexts and for this reason should be of interest to anyone working in development.

Not all is rosy and sweet in the story, however, which is true in all narratives on change and real functional reform—including all development narratives I think reflect the general principles of PDIA. There are a number of reflections on how hard it is to develop a real problem driven approach and allow flexibility in finding and fitting new solutions. The case notes that high level leaders demanded results particularly rapidly in some instances, for example, and this led to hurried action, mistakes and tension. The case suggests that this can be overcome with some common-sense ideas, like getting political authorizers to prioritize and ask policy people how long something will take and then agree on realistic time frames. In this respect, it also comments on the importance of focusing attention on a few key issues for deep dive attention rather than a slew of issues that ultimately only get a shallow look. As one Permanent Secretary notes, “Trying to do everything at the same time doesn’t work.” (Seems basic, doesn’t it, but this kind of mundane observation is one I see overlooked across the development agenda).

There are also hints at the importance of getting collaborative relationships right—with high-level authorizers and development partners engaging patiently and yet also expectantly with those in the policy and reform trenches. It seems there are real rewards when those at the top give those in the middle and even lower down the organization some structured space to prove their value. (Again, this seems basic, noting that ‘people’ really matter; but it is a vital observation in development, where I think ‘policy’ is often seen as more important than people.) The importance of political coalitions and teams, incorporating outsiders and insiders, is also implicit and explicit, and a vital take-away for those designing reforms and interventions. These coalitions and teams allowed natural coordination across boundaries (without having to change rigid organizational rules) and the cross fertilization that is vital for the emergence of creative new policy ideas.

Beyond these ideas I was perhaps most struck by the lesson that this work is only achieved if people stick to it. It proved challenging and even uncomfortable to force politicians to prioritize, for instance, but this did not lead to the program falling apart. Instead, as the case notes, the SCBI team exerted “push back” on the system and made sure the prioritization was done. It proved hard to get the right people as well, but the SCBI team pivoted around this issue to get at least some of the right people and build on what they had. It proved tough to get disparate distributed agencies to work together and to even understand the importance of linkages, but the challenge was met with more determination. It was hard to take people through the uncomfortable process of problem analysis, where they interrogated existing processes to look for gaps (without jumping to quick but unlikely to work solutions). But the gap analyses went on nonetheless.

The literature on organizational change has a really academic term to describe the quality I think helped the SCBI reformers to stick to their guns when things got tough. It is ‘grit’ and it is vital for effective reform and change. It is the intangible thing that I think the SCBI story is really all about. It helped to keep the reforms going in the starting months that seemed slow and difficult, and it was what kept the growing SCBI team motivated when the actions they were taking were hard and constantly questioned as time consuming, demanding, politically uncomfortable, and (maybe even) downright impossible. Grit is what helps reformers turn setbacks into lessons, and what keeps reformers looking for the ‘right’ people needed to make something happen. It keeps people engaged in capacity building experiences that take time, personal sacrifice, and political capital. It is the magic ingredient behind real capacity building and change and is the one thing I hope other readers see in this case study, even with the other great practical ideas and embedded advice. The SCBI design and strategy was great, but it was the gritty commitment to make it work that really seems to have made the difference.

The lesson: Cultivate grit, don’t overlook it, as it is the key to capacity building success.

BSC video 35: Functional indicators as a way to build capacity

Data is an important part of state functionality. If states want to be capable, they need to know where people are, how many people there are, etc., in order to deliver basic services. They need to measure functionality and ends rather than forms.

In this video, Matt Andrews uses child registration to illustrate what a functional indicator means and how it has the potential to build a state’s capacity to deliver services to its people. You can watch this video below or on YouTube.

If you are interested in learning more, read Getting real about governance and governance indicators and measuring success: means or ends?

BSC video 34: Measuring success – means or ends?

Do appearances matter as much as action does? We often forget that governments exist to do and not just to be. We thus focus on the means of being rather than the product of doing. This bias leads to governance indicators and reforms that emphasize perfection of means, often failing to make a connection to the ends or even to clarify which ends matter.
 
To address the fact that different ends might matter in different places at different times, and different ends might warrant different means, Matt Andrews offers an ends-means approach, which begins by asking what governments do rather than how they do it. This approach of looking at the multi-dimensional nature of governance could be very useful in the current discussions about including a governance indicator in the post-2015 development goals. You can watch this video below or on YouTube.
 
 

BSC video 33: Multi-agent leadership in action

Who leads? How do they lead? When do they lead? Why do they lead? Answers to these questions are very important in understanding how change happens. Change is about people and people need to be led.

In this video, Matt Andrews uses the story of Singapore to illustrate how true champions are the ones who convene, motivate, authorize and unleash multi-agent leadership. Leadership is about getting the right people to do the right things at the right time. You can watch this video below or on YouTube.

If you are interested in learning more about multi-agent leadership read going beyond heroic leaders in development, and watch who is the leader?

BSC video 32: Who is the leader?

In a study of successful change, we interviewed 150 people in 12 different places and asked them “who is the leader?” Instead of listing the 12 champions, they responded with 107 names.

In this video, Matt Andrews highlights that successful change requires someone to authorize the change, motivate, provide money, empower the people, define problems, suggest ideas for solutions, convene people, connect to others, and implement. All these things cannot be (and usually are not) done by one person. Change does not happen as implementation by edict; it happens when groups of people take risks and make change happen. You can watch the video below or on YouTube.

If you are interested in learning more about multi-agent leadership read who really leads development? and going beyond heroic leaders in development.