This blog post is the third in a series hosted by ProPEL Matters examining how big and small data are implicated in changes to professional practices and in the increasing datafication of work and professional learning.
In his seminal work on the history of the arcades of Paris, Walter Benjamin wrote that his aim for this project was ‘to discover in the analysis of the small individual moment the crystal of the total event’ (1999, p.461).
That what is important can be found in the examination of the very small (while conversely, a myopic focus on what is big may yield only the illusion of certainty) is a lesson we seem to have forgotten, such is our current enchantment with big data.
This is certainly the case in my own discipline, education research, which is showing a resurgence of interest in the ‘randomised controlled trial’ (RCT). Though long considered the ‘gold standard’ in medical research, RCTs have not been common in education but are now finding their niche in the ‘What works’ agenda of evidence-based policy.
Education research has long been subject to the criticism that its focus on small-scale, primarily qualitative research has failed to deliver a cumulative body of proven knowledge of use to policy-makers and practitioners, so perhaps we only have ourselves to blame. But RCTs in education do not have an unimpeachable record either, often yielding contradictory findings and so leaving politicians a free hand in the construction of ‘policy-based evidence’.
RCTs, of course, do not look for the ‘small individual moment’ and so are blind to the theoretical insights to be gained from such fine-grained analysis (it is annoying, though, that proponents of RCTs are so blinkered that they do not even acknowledge this as a limitation of their methodology).
What is particularly interesting in small-scale research is the value of the outlier – the single case that goes against the grain. This emerged in our study of a group of teachers undertaking an online professional masters programme explicitly designed to foster the attribute of criticality (the ‘oPEN’ project; Watson et al., 2016).
One of our students failed both the summative assignment and the one allowable resubmission, and so failed the course overall. This case became the focus of our study. We published our findings in a paper entitled ‘Small data, online learning and assessment practices in higher education: A case study of failure?’ (Watson et al., 2016). In framing the paper we were at pains to point out that the object of the case study was practices of assessment in higher education. We therefore did not seek to address the question, ‘Why did this particular student fail?’ but rather, ‘How does an examination of this case contribute to knowledge around practices of assessment?’
Although this was only a single case, we drew on a number of data sets: the learning analytics gathered during the course, which indicated how our student had engaged with the required resources and activities (Wilson et al., 2017) and which at no point flagged up this particular student as at risk; the summative assignment itself; and a group interview conducted following completion of the course.
The failure we referred to in the title of the paper was, of course, ours rather than our student’s. Or rather, it was not ours personally – we impeccably applied the lessons of best practice in assessment, notably in supplying formative feedback that students are demonstrably able to act on – but the failure of higher education to assess what it claims to value.
More than this, our analysis revealed a dialectical tension between masters learning as practice and masters learning for (professional) practice. Paradoxically (like Gilbert and Sullivan, I love a good paradox), the practices that mark out the successful masters student may actually undermine the development of the kinds of practices that make for the skilful practitioner.
This finding goes beyond learning in the online environment (and indeed beyond teacher professional learning) and addresses issues concerning higher education conceived more broadly as an ethical project. The issues are complex and are discussed more fully in the paper. The point is that such a finding could not have emerged from an RCT.
Support for case study research has come from some unlikely quarters. Flyvbjerg (2006) quotes the noted psychologist Hans Eysenck, who underwent an astonishing U-turn, from a dismissive view of case study as little more than a collection of anecdotes to the realisation that ‘sometimes we simply have to keep our eyes open and look carefully at individual cases – not in the hope of proving anything, but rather in the hope of learning something’.
Benjamin, W. (1999). The arcades project. Cambridge, MA: Belknap Press.
Flyvbjerg, B. (2006). Five misunderstandings about case-study research. Qualitative Inquiry, 12(2), 219–245.
Watson, C., Wilson, A., Drew, V., & Thompson, T. L. (2016). Small data, online learning and assessment practices in higher education: A case study of failure? Assessment & Evaluation in Higher Education, 1–16. doi:10.1080/02602938.2016.1223834
Wilson, A., Watson, C., Thompson, T. L., Drew, V., & Doyle, S. (2017). Learning analytics: Challenges and limitations. Teaching in Higher Education. doi:10.1080/13562517.2017.1332026