The problem with Cognitive Load Theory
This article is my own summary and interpretation of the paper 'Cognitive Load Theory: what does it mean for learning designers?' by Walkergrove (2014).
Cognitive Load Theory (CLT) is a well-researched and widely accepted theory of instruction that demonstrates a strong and lasting influence on learners in many educational situations, particularly when complex tasks or large amounts of information need to be processed. But it is far from being a universal model for all teaching subjects, and there is considerable debate over many of its principles.
One of the most significant criticisms of CLT is that researchers have been unable to standardise a method of measuring what constitutes cognitive load. Participants in the studies are either asked to rate their load on a scale, or physiological measurements (physical reactions to load) are used. Both methods are highly subjective and vary from participant to participant. What constitutes cognitive overload or underload for me might not apply to you. In a class full of learners of differing abilities, how do you tell? What is stressful for me is a thrill for someone else. One effective solution might be for each learner to control the cognitive load themselves (metacognition and independent learning?).
Cognitive load describes the burden placed upon working memory by a task or piece of information. The idea dates back to George Miller's 1956 paper 'The Magical Number Seven'. He proposed that our short-term memory can only process about seven items before retention decreases, though recent research has lowered this to about four for most learners. Learners can experience cognitive overload or cognitive underload. In the 1980s the educational psychologist John Sweller used empirical studies of information processing to identify a set of principles that formed his theory of cognitive load, or CLT. His aim was to identify more effective ways of teaching maths and science, and he proposed that a lack of learning occurs when the total load induced by the learning environment exceeds the capacity of the learner. Other researchers quickly applied this to other areas where instructional learning takes precedence.
CLT assumes that working memory has a limited processing capacity, that long-term memory is responsible for holding large amounts of information over longer periods of time, and that people organise, understand and categorise information into constructs called schemas.
Sweller identified three types of cognitive load: intrinsic (the inherent difficulty of the task), extraneous (avoidable load imposed by how the information is presented) and germane (the load used to construct schemas). In CLT, the sum total of these working loads must not exceed the capacity of working memory.
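That central constraint can be written as a simple inequality (a sketch in my own shorthand, not notation taken from the paper):

```latex
L_{\text{intrinsic}} + L_{\text{extraneous}} + L_{\text{germane}} \le C_{\text{WM}}
```

where \(C_{\text{WM}}\) stands for the learner's working memory capacity. The instructional design lesson follows directly: since intrinsic load is fixed by the task, the designer's levers are reducing extraneous load and making room for germane load.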
Proponents of CLT quickly considered it a universal theory of learning across all domains, media and learners. However, further review of the evidence presented by CLT researchers Clark, Nguyen and Sweller (2006) found that every study they cited related to science, maths or complex processes, and that many of the lessons examined were very short. The experimental data in those studies is also questionable, since the tests were all carried out immediately after the lesson and so do not measure the long-term effects of CLT. It may be, for example, that CLT is most effective for cramming before a test.
The principal studies that led to the development of CLT concerned only the teaching of complex mathematical and scientific problems. Yet it has been universally applied to all instruction without significant further research into its application to non-scientific or non-process-driven problems.
Similarly, there is a lack of research into long-term gains from applying CLT principles.
Much of CLT has been adopted uncritically by eager enthusiasts, yet many questions remain. The technique of chunking information is widely used, for example, yet there is little guidance on what a manageably sized chunk is, and in any case this is surely likely to vary from student to student. Of course, as a teacher I would want to break down complex information into smaller parts, but how small? Are the sizes the same for everybody? Another open question is whether intrinsic cognitive load can be altered through instructional elements.
What is also relevant is the phenomenon of expertise reversal: pitching your lessons at low levels actually depresses learning for those with more expertise. (Common sense?) This means that creating materials with a low difficulty level could cause problems similar to cognitive overload. So it might seem that differentiation of learning is actually counter-productive to the whole class's learning. This might be true, but again, these studies compared the whole class's learning outcomes with lower-level instruction. Targeted differentiation for isolated groups wasn't measured, and again we're left with the feeling that the researchers are generalising from whole-group trends.
The conclusion could be drawn that instructional materials cannot be the same for all learners: there must be varying degrees of difficulty. Pitch your lessons too low and people won't learn enough; pitch them too high and they will be overloaded. (Isn't this just what every teacher knows anyway?)
To summarise: CLT has been shown to be very effective for teaching complex information. By organising complex information into smaller schemas you can avoid cognitive overload and increase the amount learned. By breaking information into smaller manageable sections and ensuring that these sections are the right size for learners (the Goldilocks syndrome), you should increase the effect of your teaching. However, CLT isn't proven to work for other aspects of learning, and in any case cognitive load is difficult to measure and varies from person to person.