During one of our first sessions, I remember Professor Andrews speaking about complex problems and the need to address them with a new approach typically not used by public policy professionals and government agencies. As he described the problems he had witnessed with the traditional “plan and control” implementation method, I thought, “Oh no. If there is anything I’m good at, it’s planning. And I’m a control freak.” [Insert wide-eyed emoji and head-exploding emoji here].
Professor Andrews and the team invited us into the PDIA world and encouraged us to give it a try with open minds. Boy, am I glad I did.
As the course started, I was working with an internal team to implement a financial assistance program for small businesses that had been economically impacted by the COVID-19 pandemic. Given the many uncertainties involved with the virus and the response to the public health emergency, I decided to use this situation as my implementation challenge.
We are delighted to announce our new PDIA: Notes from the Real World blog series. In this series, we will share lessons from our PDIA experiments over the past five years on how to facilitate problem-driven, iterative, and adaptive work. We will also feature some guest blog posts from others who are experimenting and learning from PDIA. We hope you will join us on this learning adventure!
Read the first blog post written by Matt Andrews here.
Although the benefits of experimental iteration in a PDIA process seem apparent to most people we work with, we often hear that many development organizations make it difficult for staff to pursue such approaches, given the rigidity of logframes and other linear planning methods. Funding organizations, we are told, demand the structured, perceived certainty of a logframe-type device and will not allow projects to be too adaptive.
In response to this concern, we propose a new logframe-type mechanism that embeds experimental iteration into a structured approach to make policy or reform decisions in the face of complex challenges. Called the SearchFrame, it is shown in the Figure below (and discussed in the following working paper, which also offers ideas on using the tool).
The SearchFrame facilitates a transition from problem analysis (core to PDIA) into a structured process of finding and fitting solutions (read more about ‘Doing Problem Driven Work’). An aspirational goal is included as the end point of the intervention, where one would record details of ‘what the problem looks like solved’. Beyond this, key intervening focal points are included, based on the deconstruction and sequencing analyses of the problem. These focal points reflect what the reform or policy intervention aims to achieve at different points along the path towards solving the overall problem.

More detail is provided for the early focal points, given that we know with some certainty what we need and how we expect to get there. These are the focal points driving the action steps in early iterations, and they need to be set in a defined and meaningful manner (as they shape accountability for action). The later focal points (2 and 3 in the figure) reflect what we assume, expect or hope will follow. These will not be rigid, given the many underlying assumptions involved, but they provide a directionality that gives funders and authorizers a clear view of the intended direction of the policymaking and reform process.
The SearchFrame does not specify every action step that will be taken, as a typical logframe would. Instead, it schedules a prospective number of iterations between focal points (which one could also relate to a certain period of time). Funders and authorizers are thus informed that the work will involve a minimum number of iterations in a specific period. Only the first iteration is detailed, with specific action steps and a specific check-in date.
Funders and authorizers will be told to expect reports on all of these check-in dates, which will detail what was achieved and learned and what will be happening in the next iteration (given the SearchFrame reflections shown in the figure). Part of the learning will be about the problem analysis and assumptions underpinning the nature of each focal point and the timing of the initiative. These lessons will feed into proposals to adjust the SearchFrame, which will be provided to funders and authorizers after every iteration. This fosters joint learning about the realities of doing change, and constant adaptation of assumptions and expectations.
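To make the mechanics concrete, the structure described above can be sketched as a small data model. This is purely our own illustration, not a tool from the working paper, and every class and field name here is a hypothetical choice: an aspirational goal, an ordered list of revisable focal points, and an iteration log in which only the upcoming iteration carries detailed action steps and a check-in date.

```python
from dataclasses import dataclass, field

@dataclass
class Iteration:
    """One experimental iteration: action steps plus a scheduled check-in."""
    action_steps: list
    check_in: str                      # e.g. an ISO date for the report-back
    lessons: list = field(default_factory=list)

@dataclass
class SearchFrame:
    """Aspirational goal, intermediate focal points, and a rolling iteration log."""
    goal: str                          # 'what the problem looks like solved'
    focal_points: list                 # ordered intermediate achievements
    iterations: list = field(default_factory=list)

    def start_iteration(self, action_steps, check_in):
        # Only the upcoming iteration is detailed; later ones stay open.
        self.iterations.append(Iteration(action_steps, check_in))

    def check_in_report(self, lessons, revised_focal_points=None):
        # After each check-in, record what was learned and, if the lessons
        # warrant it, adjust the remaining focal points - the frame is dynamic.
        self.iterations[-1].lessons = lessons
        if revised_focal_points is not None:
            self.focal_points = revised_focal_points

# Hypothetical example: one detailed first iteration, then a check-in
# that revises the middle focal point in light of what was learned.
sf = SearchFrame(
    goal="Small firms can register in under 2 days",
    focal_points=["Map current process", "Pilot one-stop desk", "Scale nationally"],
)
sf.start_iteration(["Interview registrars", "Time 20 registrations"], "2024-03-01")
sf.check_in_report(
    lessons=["Bottleneck is tax clearance, not the registry"],
    revised_focal_points=["Map current process", "Pilot tax pre-clearance", "Scale nationally"],
)
```

The point of the sketch is the shape of the commitment: funders see the goal, the directionality, and a minimum number of scheduled check-ins, while the detailed action steps exist only for the iteration currently underway.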
Readers should note that this reflection, learning and adaptation make the SearchFrame a dynamic tool. It is not something to use in the project proposal and then to revisit during the evaluation. It is a tool to use on the journey, as one makes the map from origin to destination. It allows structured reflections on that journey, and report-backs, where all involved get to grow their know-how as they progress, and turn the unknowns into knowns.
We believe this kind of tool fosters a structured iterative process that is well suited both to addressing complex problems and to meeting the structural needs of formal project processes. As presented, it is extremely information- and learning-intensive, requiring constant feedback as well as mechanisms to digest that feedback and foster adaptation on the basis of it. This is partly because we believe that active discourse and engagement are vital in complex change processes and must therefore be facilitated through the iterations.
There’s a character in a Moliere play who is surprised and delighted to learn that he has been speaking prose all his life without knowing it. I thought of him a couple of weeks into my new role as a part-time Professor in Practice in LSE’s International Development Department, when I realized I had been using ‘iterative adaptation’ to work out how best to keep 100+ Masters students awake and engaged for two hours last thing on a Friday afternoon.
The module is called ‘Research Themes in International Development’, a pretty vague topic which appears to be designed to allow lecturers to bang on about their research interests. I kicked off with a discussion on the nature and dilemmas of international NGOs, as I’m just writing a paper on that, then moved on to introduce some of the big themes of a forthcoming book on ‘How Change Happens’.
As an NGO type, I am committed to all things participatory, so I ended lecture one by getting the students to vote for their preferred guest speakers (no, I’m not publishing the results). In order to find out how the lectures were going, I also introduced a weekly feedback form on the LSE intranet (thanks to LSE’s Lucy Pickles for sorting that out), and asked students to fill it in at the end of the session. The only incentive I could think of was to promise a satirical video (example below) if they stayed long enough to fill it in before rushing out the door – it seemed to work. The students were asked to rank presentation and content separately on a scale from ‘awful’ to ‘brilliant’, and then offer suggestions for improvements.
It’s anonymous, and not rigorous of course (self-selecting sample, disgruntled students likely to drop out in subsequent weeks etc), but it has been incredibly useful, especially the open-ended box for suggestions, which has been crammed full of useful content. The first week’s comments broadly asked for more participation, so week two included lots of breakout group discussions. The feedback then said, ‘we like the discussion, but all the unpacking after the groups where you ask what people were talking about eats up time, and anyway, we couldn’t hear half of it’, and asked for more rigour, so week three had more references to the literature, and three short discussion groups with minimal feedback – it felt odd, but seemed to work.
At this point, the penny dropped – I was putting into practice some of the messages of my week two lecture on how to work in complex systems, namely fast feedback loops that enable you to experiment, fail, tweak and try again in a repeat cycle until something reasonably successful emerges through trial and error. One example of failing faster: I tried out LSE’s online polling system, but found it both too slow (getting everyone to go online on their mobiles and then vote on a series of multiple-choice questions) and less energising than getting people to vote analogue style (i.e. raising their hands). The important thing is getting weekly feedback and responding to it, rather than waiting until the end of term (by which time it will be too late).
The form is not the only feedback system of course. As any teacher knows, talking to a roomful of people inevitably involves pretty intense real-time feedback too – you feel the energy rise and fall, see people glazing over or getting interested etc. What’s interesting is being able to triangulate between what I thought was happening in the room/students’ heads, and what they subsequently said. Broad agreement, but the feedback suggested their engagement was reassuringly consistent (see bar chart on content), whereas my perceptions seemed to amplify it all into big peaks and troughs – what I thought was a disastrous second half of lecture two appears to have just been a bit below par for a small number of students.
The feedback also helps crystallize half-formed thoughts of your own. For example, several complained about the disruption of students leaving in the middle of the lecture, something I also had found rather unnerving. So I suggested that if people did need to leave early (it’s last thing on Friday after all), they should do so during the group discussions – much better.
What’s been striking is the look of mild alarm in the eyes of some of my LSE faculty colleagues, who warned against too much populist kowtowing to student demands. That’s certainly not how it’s felt so far. Here’s a typical comment: ‘I think that this lecture on the role of the state tried to take on too much. This is an area that we have discussed extensively. I think it would have been more useful to focus on a particular aspect, perhaps failed and conflict-affected states since you argue that those are the future of aid’. Not a plea for more funny videos (though there have been a few of those), but a reminder to check for duplication with other modules, and a useful guide to improving next year’s lectures.
What is also emerging, again in a pleasingly unintended way, is a sense that we are designing this course together and the students seem to appreciate that (I refuse to use the awful word co-create. Doh.) Matt Andrews calls this process ‘Problem Driven Iterative Adaptation’ – I would love to hear from other lecturers on more ways to use fast feedback to sharpen up teaching practices.