In our last two parts, we looked at how the basics of research and experimental design can impact the strength of a study’s results—and how much we can trust those results when deciding which programs and strategies to implement in our schools.
Nothing might be trendier today than programs and strategies that profess to be “brain-based.” Seems like a cut-and-dried reason to implement something in your school, right? You can’t argue with fMRI results.
Well…actually, you can.
Don’t misunderstand: brain research is fascinating, and yes, it yields much information that we never had in the past. There’s definitely a place for it. Like all research, however, it has its limitations. Check out this recent TED article revealing major new fMRI concerns (and since this is a summary of multiple studies on fMRI, you’ll remember to try to read the actual studies for yourself, right?).
Let’s look at the basics of how fMRI works. The scanner is essentially a giant tube that a participant lies inside while the researcher asks the participant to think about a particular subject, answer a question, and so on. The machine then measures blood flow in the brain, which serves as a proxy for neural activity. Depending on which areas “light up”—that is, receive more blood flow—researchers conclude that the participant is using those areas of the brain more than others.
Of course, the fMRI machine can’t actually get down to the level of showing what individual neurons are doing. Each area of the brain is made up of millions of neurons, so even though a certain section of the brain might “light up” while the participant thinks about the researcher’s prompt, we don’t know which neurons are firing or what their individual roles are.
In other words, the fMRI results can see the forest, but not the individual trees.
Meanwhile, fMRI results leave a lot of room open to interpretation. For instance, a study by Marco Iacoboni out of UCLA found that middle-of-the-road voters’ amygdalae lit up when they were shown the words “Democrat,” “Republican,” and “independent.” Normally, this would indicate feelings of anxiety and disgust.
The problem? Other areas of the brain—ones associated with reward, desire, and connectedness—also lit up when participants saw those words.
Find that confusing? That’s an inherent problem with a lot of today’s brain research. An fMRI can tell us what the brain is doing, but it can’t tell us why, and it can’t always make much meaning out of the actions.
Moreover, if our goal is raising student achievement, brain research can’t tell us much about that at all. Yes, it can tell us that when students are asked intriguing questions—ones they report made them curious—the reward centers of their brains tend to light up. That doesn’t necessarily mean the strategy will increase student achievement scores, however.
Therefore, it’s always best to couple any brain research with actual educational studies that examine the impact of the program or strategy on student achievement—if that’s your goal. For instance, in the case of questioning (a strategy examined on some of the eObservation forms), we know that in his 2009 book Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, John Hattie found that questioning, on average, tends to yield an effect size of .46 (which can be translated into an 18 percentile point gain, on average, over students who are not engaged in the strategy).
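If you’re curious how an effect size becomes a percentile-point gain, the standard conversion assumes normally distributed scores: an average student in the treatment group lands at the percentile given by the normal CDF evaluated at the effect size. Here’s a minimal sketch of that arithmetic (the function name is ours, not Hattie’s):

```python
from math import erf, sqrt

def percentile_gain(effect_size: float) -> float:
    """Convert a Cohen's-d-style effect size into an average
    percentile-point gain, assuming normally distributed scores.

    The average treated student sits at the standard normal CDF of
    the effect size; subtracting the 50th percentile gives the gain.
    """
    cdf = 0.5 * (1 + erf(effect_size / sqrt(2)))  # standard normal CDF
    return 100 * (cdf - 0.5)

# An effect size of .46 works out to roughly an 18-point gain:
print(round(percentile_gain(0.46)))  # -> 18
```

This is why two strategies with effect sizes of .20 and .40 are further apart than they might look: the gain grows with the effect size, and the conversion makes the difference concrete in terms of where the average student ends up.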
What, then, is our takeaway? It seems like almost all educational research has limitations, whether it’s a small sample size, confusion over correlation versus causation, or the fallibility of even “harder science,” like brain scans.
What that means, then, is that it’s the duty of school leaders not to rely on one or two sources of information when determining what programs or strategies to implement in their schools. School budgets are not limitless, and before any programs that promise to raise student achievement are purchased, school leaders should review multiple studies to determine whether the program will likely work with the students in that particular school or division.
Of course, not every instructional program costs money; sometimes we ask our teachers to implement specific strategies in their classrooms. Time is a valuable resource, however, and if we ask our teachers to implement less effective strategies, we’ve lost both the time we took for professional development and the class time in which teachers and students could have been engaged in a more effective strategy.
Thankfully, the problem is seldom a lack of research. There are plenty of studies out there that examine what actually works to raise student achievement, and with a careful, discerning eye, leaders CAN choose the strategies that are right for their schools and make a difference for their students. For a starting point, consider exploring the What Works Clearinghouse from the Institute for Education Sciences. It’s a great way to begin thinking about educational studies from an unbiased, data-based standpoint.
Interested in this topic? Read more here:
Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. New York: Routledge.
Watson, S. (2008). “How fMRI Works.” HowStuffWorks.com. Retrieved from http://science.howstuffworks.com/fmri4.htm