Big Data and RCTs: Do we have the evidence we need?
Updated: Sep 7, 2018
Big Data is one of the buzzwords of our time. Technology has made it easy to analyze huge data sets. More and more data are generated, and a lack of data is often said to limit our ability to craft policy that fits reality and tackles core issues.
At the same time, Randomized Controlled Trials (RCTs) have gained ground in the social sciences. Not least thanks to the Abdul Latif Jameel Poverty Action Lab (J-PAL), this method has enormous influence. Often seen as the gold standard and, finally, a truly objective way of measuring reality and the impact of social programs, it has recently earned some harsh criticism. Indeed, some reputable studies suggest that RCTs, too, can produce bias.
So who is right? As usual, this question is not easy to answer. But certainly, neither Big Data nor RCTs make critical thinking redundant. We cannot switch on an autopilot and let 'hard', data-driven evidence drive our reforms. The driver's seat remains reserved for us, the thinking human being.
Big Data is also an issue because the data we work with are often not valid. The recent National Achievement Survey is highly doubtful as an instrument for measuring learning. At this year's RISE Annual Conference, a paper by Abhijeet Singh and Karthik Muralidharan showed that it also matters who generates data on learning outcomes. In their sample, children answered less than 30% of questions correctly in an independent assessment, but about 70% in the official one. We all know about the rampant cheating and often perverse incentives resulting from performance-oriented (thin) accountability reforms. Somehow, all these measures have not prevented the governmental school system from imploding. We would even argue that they have accelerated the decline.
Why so? In the rush toward Big Data and RCTs, economics and New Public Management have gained prominence. This has come at the expense of seemingly 'softer' disciplines like psychology, anthropology, ethnography, sociology and social work. We think that thick narratives and interdisciplinary understanding are crucial for finding entry points for honest reforms. Further, we think experimentation and judgment should take precedence over blueprints and RCT-based policies. This idea has gained some popularity recently, also thanks to the Center for International Development at Harvard, which has pushed such non-blueprint, experimental iterations into the focus of the debate. Dan Honig's recent publication “Navigation by Judgment” might strengthen this trend.
In India, the Accountability Initiative has done some groundbreaking work in examining the administration from an ethnographic point of view. Through interviews and interactions with frontline bureaucrats, its thick narratives of the Post Office Paradox have enriched our understanding of the inner workings of the bureaucracy.
You can listen to an interview with Yamini Aiyar, President and Chief Executive of the Centre for Policy Research (of which the Accountability Initiative is a part), here:
We think this is the right way to go. Thick narratives, critical understanding, and a thoughtful, research-informed approach to reform can be the key to real change. Much too often, reforms are not really reforms but add-ons that never get integrated into the mainstream functioning of the system.
There are difficult questions ahead:
Why are so many teachers who are supposed to teach not teaching?
Why is the administration so unresponsive to citizen demands?
Why are parents not actively participating in many schools?
What are the perceptions, feelings, fears, wishes and motivations of people in the system?
How can these things be positively influenced?
We think that, much too often, people in the system have been reduced to rational, self-seeking, egoistic objects of reform processes. Performance-based incentives, biometric attendance systems, and Massive Open Online Courses (MOOCs) for teacher education have not led to the hoped-for improvements. The standard toolbox of the behavioral economist has been used for long enough. We should welcome other disciplines into a more deliberate search for fitting solutions.
Nature is complex. So are people and systems made of people.
Similar to the Washington Consensus in economic development, the standard recipes of what Pasi Sahlberg, a Finnish researcher now with the Harvard Graduate School of Education, calls the Global Education Reform Movement (GERM) have failed. They did not improve learning in our schools. Worse, they may also have damaged the intrinsic motivation and public commitment of many people in government, administration and the teaching community.
Sahlberg has a few lessons to offer:
It is time to ask the big and difficult questions. Yet we think big questions often need small data. Getting thick narratives from the people who constitute our education system is a good starting point. Diverse perspectives can help uncover the complex relationships at work. We should all be a little more humble if we want to honestly reform a system as complex as our schools.