Disruptive innovation is the key to long-term excellence, even though its early results are less glamorous than those of approaches that seek to be at the “cutting edge” of innovation.
The term “disruptive innovation” is thrown around a lot in policy discussions about education system reform. However, it is important to recognise that, when this term was introduced by the late Clay Christensen (The Innovator’s Dilemma, 1997), it had a specific meaning that is diametrically opposed to the meaning many people now ascribe to it. This is not a purely semantic point; the original idea of “disruptive innovation” is a key concept that can help solve the problems facing low-performing education systems.
I briefly recap the way in which Professor Christensen used the term and discuss how this concept relates to current debates in the education sector.
There’s a character in a Molière play who is surprised and delighted to learn that he has been speaking prose all his life without knowing it. I thought of him a couple of weeks into my new role as a part-time Professor in Practice in LSE’s International Development Department, when I realized I had been using ‘iterative adaptation’ to work out how best to keep 100+ Masters students awake and engaged for two hours last thing on a Friday afternoon.
The module is called ‘Research Themes in International Development’, a pretty vague topic which appears to be designed to allow lecturers to bang on about their research interests. I kicked off with a discussion on the nature and dilemmas of international NGOs, as I’m just writing a paper on that, then moved on to introduce some of the big themes of a forthcoming book on ‘How Change Happens’.
As an NGO type, I am committed to all things participatory, so I ended lecture one by getting the students to vote for their preferred guest speakers (no, I’m not publishing the results). To find out how the lectures were going, I also introduced a weekly feedback form on the LSE intranet (thanks to LSE’s Lucy Pickles for sorting that out) and asked students to fill it in at the end of the session. The only incentive I could think of was to promise a satirical video (example below) if they stayed long enough to fill it in before rushing out the door – it seemed to work. The students were asked to rank presentation and content separately on a scale from ‘awful’ to ‘brilliant’, and then offer suggestions for improvements.
It’s anonymous, and not rigorous of course (self-selecting sample, disgruntled students likely to drop out in subsequent weeks, etc.), but it has been incredibly useful – especially the open-ended suggestions box, which has been crammed full of useful content. The first week’s comments broadly asked for more participation, so week two included lots of breakout group discussions. The feedback then said, ‘we like the discussion, but all the unpacking after the groups, where you ask what people were talking about, eats up time – and anyway, we couldn’t hear half of it’, and asked for more rigour. So week three had more references to the literature and three short discussion groups with minimal feedback – it felt odd, but seemed to work.
At this point, the penny dropped – I was putting into practice some of the messages of my week two lecture on how to work in complex systems, namely fast feedback loops that enable you to experiment, fail, tweak and try again in a repeat cycle until something reasonably successful emerges through trial and error. One example of failing faster: I tried out LSE’s online polling system, but found it too slow (getting everyone to go online on their mobiles and then vote on a series of multiple-choice questions) and not as energising as getting people to vote analogue style (i.e. by raising their hands). The important thing is getting weekly feedback and responding to it, rather than waiting until the end of term (by which time it will be too late).
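For the programmatically minded, the cycle described above – deliver, collect fast feedback, tweak, repeat – can be sketched in a few lines. Everything here (the function names, the canned weekly feedback) is a hypothetical illustration, not a real system.

```python
# A minimal sketch of a fast-feedback loop: run a session, collect ratings
# and suggestions, fold the suggestions into next week's plan, and repeat.
# All names and the canned feedback below are illustrative assumptions.

def iterate_with_feedback(plan, deliver, collect_feedback, revise, rounds):
    """Run deliver -> feedback -> revise cycles, keeping a history of ratings."""
    history = []
    for week in range(1, rounds + 1):
        deliver(plan, week)
        rating, suggestions = collect_feedback(week)
        history.append(rating)
        if suggestions:  # only tweak when the feedback asks for change
            plan = revise(plan, suggestions)
    return plan, history

# Toy run, with a dict standing in for the weekly feedback form:
feedback = {1: (3, ["more participation"]),
            2: (4, ["shorter plenary feedback", "more rigour"]),
            3: (4, [])}

plan, history = iterate_with_feedback(
    plan=["lecture"],
    deliver=lambda p, wk: None,           # stand-in for giving the lecture
    collect_feedback=lambda wk: feedback[wk],
    revise=lambda p, s: p + s,            # fold each suggestion into the plan
    rounds=3,
)
```

The point of the sketch is the shape of the loop, not the details: the revision step runs every week, not once at the end of term.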
The form is not the only feedback system, of course. As any teacher knows, talking to a roomful of people inevitably involves pretty intense real-time feedback too – you feel the energy rise and fall, see people glazing over or getting interested, etc. What’s interesting is being able to triangulate between what I thought was happening in the room/students’ heads and what they subsequently said. There was broad agreement, but the feedback suggested their engagement was reassuringly consistent (see bar chart on content), whereas my perceptions seem to amplify it all into big peaks and troughs – what I thought was a disastrous second half of lecture two appears to have just been a bit below par for a small number of students.
The feedback also helps crystallize half-formed thoughts of your own. For example, several complained about the disruption of students leaving in the middle of the lecture, something I also had found rather unnerving. So I suggested that if people did need to leave early (it’s last thing on Friday after all), they should do so during the group discussions – much better.
What’s been striking is the look of mild alarm in the eyes of some of my LSE faculty colleagues, who warned against too much populist kowtowing to student demands. That’s certainly not how it’s felt so far. Here’s a typical comment ‘I think that this lecture on the role of the state tried to take on too much. This is an area that we have discussed extensively. I think it would have been more useful to focus on a particular aspect, perhaps failed and conflict-affected states since you argue that those are the future of aid’. Not a plea for more funny videos (though there have been a few of those), but a reminder to check for duplication with other modules, and a useful guide to improving next year’s lectures.
What is also emerging, again in a pleasingly unintended way, is a sense that we are designing this course together and the students seem to appreciate that (I refuse to use the awful word co-create. Doh.) Matt Andrews calls this process ‘Problem Driven Iterative Adaptation’ – I would love to hear from other lecturers on more ways to use fast feedback to sharpen up teaching practices.
Since we published The DDD Manifesto on November 21, it has been viewed over 5,000 times all around the world (in 100+ countries). It currently has over 400 signatories from 60 countries. It is an eclectic community with people from bilateral organizations, multilaterals, governments, academia, NGOs, private sector, as well as independent development practitioners. These are the founding members of The DDD Manifesto Community.
Today, we are delighted to launch the online platform of the DDD Manifesto Community, which is the new home of the manifesto. We hope that this will be a place where you can come to share ideas, have conversations, question your assumptions, learn from others, offer support and be inspired. It includes a discussion forum, blog posts written by community members, and video presentations from the recent DDD workshop (#differentdev).
To sign the manifesto and to participate in the forum, you can register here. Please contribute actively – this is a community website and you are the community.
Arnaldo Pellini recently wrote an interesting personal blog post about the Doing Development Differently workshop and manifesto. He concludes with, “I agree with these ideas and I can share and discuss these ideas with the team with whom I work but what difference can it make if the systems around us due to organizational culture, history, circumstances, and traditions struggle to embrace flexibility, uncertainty, untested experimentation, and slow incremental changes?”
This is an honest reflection from a practitioner in the field, and one that I hear often – from folks working in multilateral and bilateral agencies, as contractors, and beyond. It captures a concern that the development machinery (organizations, monitoring and reporting devices, professional alliances, government counterparts, etc.) is structurally opposed to doing the kind of work one might call DDD or PDIA.
It’s like this cartoon…where our organizations say “let’s innovate but stay the same.”
I have been thinking about this a lot in the last few years, ever since I wrote chapter ten of my book…which asked whether the development community was capable of changing. In that chapter I was not especially confident but (I hope) I was still hopeful.
Since then, I think I’m more hopeful. Partly because we have found many folks in the multilaterals, bilaterals, contractors, etc. who are doing development in this more flexible way. We invited a range of them to the DDD Workshop, and over 330 signed on to the DDD Manifesto. One of the goals of our work in the next while is to learn from these folks about HOW they do development differently even with the constraints they face. How do they get funders to embrace uncertainty? How do they get ministers in-country to buy into flexibility and give up on straight isomorphism?
I am also working on research projects that tackle this question: doing PDIA in real time, in places where development is predominantly done through the incumbent mechanisms. It is hard work, but I am finding various strategies to get buy-in to a new approach (including showing how problematic the old approach is, by working in the hardest areas where one has a counterfactual of failed past attempts, and more). I am also finding strategies to keep the process alive and buy more and more space for flexibility (by iterating tightly at first, for instance, showing quick wins, and telling the story of learning and of increased engagement and empowerment). So far, I have not experienced complete success with what I have done, but I have certainly not struggled to get support from the practitioners and authorisers we work with. (In my world it is harder to get support from academics, who think action research on implementation is a hobby and consultancy work… indeed, anything that does not say ‘RCT’ is considered less than academic. Sigh.)
All this is to say that I think Arnaldo is emphasizing a really important constraint on those working in development agencies. But a constraint that we should work through if we really do agree that these more problem driven, flexible approaches are what is needed. To Arnaldo and others I would suggest the following:
Separate the conversation about which way we should do development from the conversation about how much our organizational realities ALLOW us to do it. The first conversation is: “Should we do DDD/PDIA?” The second conversation is: “How do we DDD/PDIA?” If we conflate the conversations we never move ahead. If we separate them then we can develop strategies to gradually introduce PDIA/DDD into what we do (in essence, I’m suggesting doing PDIA ourselves, to help change the way we do development…see an earlier blog).
I also constantly remind myself that we (external folks in development organizations) are not the only ones facing the challenge of doing new stuff in existing contexts, with all the constraints that implies. This is what we are asking of our counterparts and colleagues in the developing countries where we work. Dramatic, uncomfortable and seemingly impossible change is in the air every time we introduce, facilitate, support and sponsor work in developing countries. I always tell myself: “If we can’t work it out in our own organizations – when we think that our own organizational missions depend on such change – then we have no place asking folks in developing countries to work it out.”
So, it’s a challenge. But a worthy one. And if we care about doing development with impact, I think it behooves us to face up to this challenge.
Good luck, Arnaldo, thanks for your honesty and for the obvious commitment that causes you to share your reality. It is really appreciated!
Need help decoding the acronym PDIA? Check out the PDIA anthem.
This anthem uses the instrumental from Mos Def’s ‘Mathematics’. It was made by a very talented student as part of an assignment for Matt Andrews’ course, Getting Things Done in Development. We had never imagined that we could write a song about PDIA, let alone a rap. Thank you.
In late October, a group of about 40 development professionals, implementers and funders from around the world attended the DDD workshop, to share examples where real change has been achieved. These examples employ different tools but generally hold to some of the same core principles: being problem driven, iterative with lots of learning, and engaging teams and coalitions, often producing hybrid solutions that are ‘fit to context’ and politically smart.
The two-day workshop was an opportunity to share practical lessons, insights and country experience, and to experiment first-hand with selected methodologies and design thinking. To maximize the opportunity to hear from as many people as possible, all presenters were asked to prepare a 7:30-minute talk – with no PowerPoints or other visual accompaniments. The workshop alone generated a rich set of cases and examples of what doing development differently looks like, available on both the Harvard and ODI websites (where you can watch individual talks, see the posters or link to related reports).
The aim of the event was to build a shared community of practice, and to crystallize what we are learning about doing development differently from practical experience. The workshop ended with a strong call for developing a manifesto reflecting the common principles that cut across the cases that were presented. Watch the closing remarks here.
These common principles have been synthesized into The DDD Manifesto. We recognize that many of these principles are not new, but we do feel the need to clearly identify principles and to state that we believe that development initiatives will have more impact if these are followed.
As an emerging community of practice, we welcome you to join us by adding your name in the comment box of the manifesto.
I recently blogged about what matters about the context. Here’s a video of a class I taught on the topic at the University of Cape Town over the summer (their winter). It is a short clip where I try to flesh out the 4 factors that I look at when thinking about new policy: 1. Disruption; 2. Strength of incumbents; 3. Legitimacy of alternatives; and 4. Agent alignment (who is behind change and who is not).
Most development practitioners think that they are working on problems. However, what they often mean by the word ‘problem’ is the ‘lack of a solution’. This leads to designing typical, business-as-usual interventions without addressing the actual problem. Essentially, they sell the solutions they have already identified and prioritized instead of solving real and distinct problems.
If the problem identification is flawed, then it does not matter whether you do a gold-standard RCT or not; you will neither solve the problem nor learn about what works. Here’s a great example. A recent paper, ‘The permanent input hypothesis: the case of textbooks and (no) student learning in Sierra Leone’, found that a public program providing textbooks to primary schools had no impact on student performance because the majority of books were stored rather than distributed.
Could they not have learned that the textbooks were being locked up – cheaper and faster – through some routine monitoring or audit process (which could have led to understanding why they were locked up, and then perhaps trying to find other ways to improve access to the textbooks, assuming that was their goal)? Was an RCT really necessary? More importantly, what was the problem they were trying to solve? What was their causal model or theory of change? If you provide textbooks to children, then learning outcomes will improve?
Interestingly, the context section of the paper mentions that “the civil war severely impacted the country’s education system leading to large-scale devastation of school infrastructure, severe shortages of teachers and teaching materials, overcrowding in many classrooms in safer areas, displacement of teachers, frequent disruptions of schooling, psychological trauma among children, poor learning outcomes, weakened institutional capacity to manage the system, and a serious lack of information and data to plan service provision.” In addition, they also found variance between regions; in one remote council, “less than 50 percent of all schools were considered to be in good condition, with almost 20 percent falling under the category ‘no roof, walls are heavily damaged, needs complete rehabilitation’.”
Honestly, in a complex context like this, it isn’t clear or obvious that providing textbooks would make much difference even if they were handed out to the children, especially since they are written in English. Apparently, the teachers teach in Krio in the early years and then switch to English in Grade 4 and 5. Based on the context above, that sounds more like fiction than fact.
In environments like these, real problems are complex and scary, and it is easier to ignore them than to address them. A possible way forward could be to break the problem down into smaller, more manageable pieces using tools like problem trees, the Ishikawa diagram and the ‘5 whys’. Then design an intervention, try, learn, iterate and adapt.
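The ‘5 whys’ drill-down mentioned above is simple enough to sketch in code. The problem statement and the causal chain below are hypothetical illustrations loosely inspired by the textbook example, not findings from the paper.

```python
# A minimal sketch of the '5 whys' technique: repeatedly ask 'why?' to
# decompose a stated problem into a causal chain. The problem and causes
# here are hypothetical illustrations, not claims about the actual case.

def five_whys(problem, ask_why, depth=5):
    """Build a causal chain by asking 'why?' up to `depth` times."""
    chain = [problem]
    for _ in range(depth):
        cause = ask_why(chain[-1])
        if cause is None:  # no deeper cause identified; stop early
            break
        chain.append(cause)
    return chain

# Hypothetical causal chain, illustrative only:
causes = {
    "Students show no learning gains": "Textbooks never reach students",
    "Textbooks never reach students": "Head teachers keep books locked in storage",
    "Head teachers keep books locked in storage": "They are held personally liable for losses",
}

chain = five_whys("Students show no learning gains", causes.get)
for i, step in enumerate(chain):
    print("  " * i + ("Why? " if i else "Problem: ") + step)
```

The value of the exercise is that the intervention then targets the deepest recoverable cause (here, the liability rule) rather than the surface symptom.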
As you may have noticed, our website was antiquated, to say the least. The task of giving it a makeover has been on the back burner for a while now. We are proud to finally announce that our new website is live!
Yesterday I blogged about Hirschman’s Hiding Hand. As I interpret it, a central part of his idea is that many development projects:
focus on solving complex problems, and
only once they have started does a ‘hiding hand’ lift to show how hard the problem is to solve,
but because policy-makers and reformers are already en route to solving the problem they don’t turn away from the challenges, and
so they start getting creative and finding ways to really solve the problem. Initial plans and designs are shelved in favor of experiments with new ideas, and after much muddling the problem is solved (albeit with unforeseen or hybrid end products).
I like the argument. But why do I see so many development projects that don’t look like this?
I see projects where solutions are introduced and don’t have much impact, but then they are tried again and again – with processes that don’t allow one to recognize the unforeseen challenges, and rigid designs that don’t allow one to change, experiment, or pivot around constraints and limits. Instead of adjusting when the going gets tough, many development projects carry on with the proposed solution and produce whatever limited form is possible.
I think this is because many reforms are not focused on solving problems; they are rather focused on gaining short-run legitimacy (money and support) which comes through simple promises of quick solutions. This is the most rank form of isomorphism one can imagine; where one mimics purely for show… so you get a ‘fake’ that lacks the functionality of the real thing…
Let me use Public Financial Management (PFM) reforms as an example.
What problems do these reforms try to solve? Quite a few, potentially. They could try to solve problems of governments overspending, or problems of governments not using money in the most efficient and effective manner (and ensuring services are delivered), or of governments using money in ways that erode trust between the state and citizens (and more).
Now, let me ask: how many reforms actually examine whether they solve these problems? Very few, actually. Mostly, reforms ask whether a government has introduced a new multi-year budget or an integrated financial management system. Or a new law on fiscal rules, or a new procurement system.
Sometimes the reforms will ask whether fiscal discipline has improved (largely because this is something outsiders like the IMF focus on), but I seldom see any reforms – or any PFM assessments (like PEFA, or even the assessments of transparency) – asking whether services are better delivered after reforms, or whether reforms enhance trust between citizens and the state. I don’t even see efforts to systematically capture information about intermediate products that might lead to these ‘solved problems’. For instance:
Do we have evidence that goods are procured and delivered more efficiently (time and money-wise) after reform?
Do we have any systematic data to show that our new human resource management systems are helping ensure that civil servants are present and working well, and that our new payment systems pay them on time (and do a better job of limiting payments to ghost workers)?
Do we have any consistent evidence to show that suppliers are paid more promptly after reforms?
Is there any effort to see if IT systems are used as we assume they will be used, after reforms?
Does anyone look to see if infrastructure projects are more likely to start on time and reach completion after costly project management interventions?
Do we have records to show that infrastructure receives proper maintenance after reform?
Is there any effort to see if taxpayers trust government more with their money?
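One hedged sketch of what ‘actually asking’ could look like: recording questions like those above as explicit indicators, each with a baseline, a latest reading, and a direction of improvement. Every metric and number here is an illustrative assumption, not real reform data.

```python
# Hypothetical sketch: tracking 'problem solved' questions as explicit
# indicators. All metrics, baselines and readings are illustrative only.

from dataclasses import dataclass

@dataclass
class Indicator:
    question: str          # the monitoring question, in plain language
    metric: str            # what is actually measured
    baseline: float        # pre-reform reading
    latest: float          # most recent reading
    higher_is_better: bool = True

    def improved(self) -> bool:
        """Has the latest reading moved in the right direction?"""
        if self.higher_is_better:
            return self.latest > self.baseline
        return self.latest < self.baseline

indicators = [
    Indicator("Are suppliers paid more promptly after reforms?",
              "median days to pay an invoice", 90, 60, higher_is_better=False),
    Indicator("Do taxpayers trust government more with their money?",
              "survey trust score (0-100)", 41, 39),
]

for ind in indicators:
    print(f"{ind.question} -> improved: {ind.improved()}")
```

Even a crude checklist like this forces the two questions most reforms skip: what does ‘problem solved’ look like, and has the needle actually moved?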
This is a long list of questions (and there are many more), and I am sure that some reforms do try to capture data on some of them (if you’ve measured these in a reform, please comment as such… it would be interesting and important to know). Most reforms I have observed don’t try to do it at all, however, which was the focus of a recent discussion on the role of PFM in service delivery, ‘Time to Care About Service Delivery?’. Specialists from around the world were asked whether PFM reforms improve service delivery, and the answer was “we think so… we expect so… we hope so… BUT WE CAN’T TELL YOU BECAUSE WE DON’T ACTUALLY ASK EXPLICIT QUESTIONS ABOUT THIS.”
My concern with this is manifold: (i) Does the failure to ask if we are solving the problems suggest that we as a community of reformers don’t really care about the problems in the first place? (ii) Does it mean that we will not be sensitive to the situations Hirschman speaks about when he discusses unforeseen challenges that undermine our ability to address problems (simply because we don’t focus on the problems)? (iii) Does this also mean that we will not have any moments where we explore alternatives and experiment with real solutions that help to overcome hurdles en route to solving problems?
Unfortunately, I think the observations of gaps after reforms speak to all of these interpretations. And this is why many reforms and interventions do not end up solving problems. In these cases, we get the half-baked versions of the pre-planned solution… with no adjustment and no ‘solved problem’. PFM systems look better but still don’t function – so payments remain late, wages are unpaid to some and overpaid to many, services are not delivered better, and trust actually declines. Most worrying: we have spent years doing the reforms, now need to pretend they work… and have no learning about why the problems still fester.
The solution (maybe): In my mind this can be rectified–and we can move towards producing more projects like those Hirschman observed–by
focusing reforms on problems, explicitly, aggressively, from the start;
measuring progress by looking at indicators of ‘problem solved’ (like improved levels of trust after PFM reforms) and intermediate indicators we think will get us there (better payment of contracts, more efficient procurement, etc.);
regularly monitoring this progress;
being on the lookout for expected unexpecteds (things that we didn’t know about that make our initial solutions less impactful); and
being willing to adjust what we started with to ensure we produce real solutions to real problems–functional improvements and not just changes in form.
For more, read This is PFM which advocates a functional approach to thinking about and doing PFM reform.