
What Every School Leader Needs to Know about Educational Research – Part 1

So, your colleagues in the next school district over are raving about this great new program they have for helping “catch up” struggling students. They say that you should really look into it for your school district. The catch? It’s pretty expensive. But never fear, they say: it’s research-based.

There’s no doubt that the number of “research-based” programs that promise to raise student achievement grows every day in education, yet the amount of time that we have as educators remains ever-fixed. Just as importantly, the current economic situation limits our financial resources. For both of these reasons, it is more important than ever that we choose not just research-based programs and strategies, but the best research-based programs and strategies.

And it turns out…not all educational research is of the same quality. It’s been nearly two decades since the chief of the Child Development and Behavior department at the National Institutes of Health told the U.S. Congress that educational research has a trustworthiness issue, and it’s still true today. As you’re checking out programs that pledge success for your students, there are some important tips to keep in mind.

First, let’s look at the concept of generalizability.

Dust off your old research methods textbook: you probably learned about this concept as “external validity,” which describes the extent to which the results of a study can be generalized to other populations.

It works like this: Suppose some researchers want to study an educational program or strategy, such as homework. In a perfect world, we want to know how homework impacts the achievement of all students, regardless of age, gender, socioeconomic status, culture, and so on. Obviously, given the constraints of reality, running an experiment on the entire school-age population of the world can’t happen.

Instead, the researchers pick a certain population: let’s say, middle school social studies students in America. They still can’t study every middle schooler taking social studies in the entire country, so they choose a sample of students from the population. And many times, that sample is chosen out of convenience. Most researchers don’t have the funding to fly across the country at a whim, so they tend to use populations near them.

Of course, in the purest experimental studies, researchers will choose a population, pull a random sample of those students, and then generalize the results to the entire population. The bigger the sample, the less likely it is that the researchers will happen to randomly choose mostly or all struggling students, mostly or all honors students, mostly or all boys, and so on.
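If you want an intuition for why bigger random samples are safer, here’s a quick illustration (our own sketch, using an invented population of test scores rather than data from any real study). It repeatedly draws random samples of different sizes and measures how far each sample’s average drifts from the true population average:

```python
# A minimal sketch (invented data, not from any cited study) showing why
# sample size matters for generalizability: larger random samples tend to
# land closer to the true population average.
import random
import statistics

random.seed(42)  # make the results reproducible

# Hypothetical population: 10,000 student test scores centered near 70
population = [random.gauss(70, 12) for _ in range(10_000)]
true_mean = statistics.mean(population)

for n in (10, 100, 1_000):
    # Average the sampling error over 500 repeated random samples of size n
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(500)]
    print(f"n = {n:>5}: average error = {statistics.mean(errors):.2f} points")
```

Run it and you’ll see the average error shrink as the sample size grows, which is exactly why a study of 1,000 randomly chosen students tells you more than a study of 10.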

Unfortunately, because educational studies aren’t always well-funded, and because it can be hard to convince parents to sign off on allowing their students to participate, many studies end up with small sample sizes or with whoever is available, also known as convenience samples.

That’s why it’s always important to check how many students were in any given study, and how they were chosen. The smaller the n-count, or the number of participants in the study, the less likely the results are to be generalizable to a larger population. Moreover, check to see which students were chosen, and if the researchers really worked to control for variables like socioeconomic status, age, parental level of education, IQ, or any other factors that can impact research results.

This is one reason why some researchers, like John Hattie, use meta-analyses. His 2009 book, Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement, compiles hundreds of these studies, each of which pools the results of dozens, hundreds, or even more individual studies in order to determine which programs and strategies actually produce significant student achievement gains across several different content areas, grade levels, and demographics.
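To make that concrete, here’s a simplified sketch of the arithmetic at the heart of a typical meta-analysis (the study numbers below are invented purely for illustration): each study’s effect size is weighted by its precision, so larger, more precise studies pull harder on the pooled estimate.

```python
# A simplified, illustrative sketch of inverse-variance pooling, the core
# calculation in many meta-analyses. The study data below are invented.
studies = [
    # (effect size d, variance of that estimate)
    (0.20, 0.04),  # small study, noisy estimate
    (0.45, 0.01),  # larger study, more precise estimate
    (0.35, 0.02),
]

weights = [1 / var for _, var in studies]   # more precise studies weigh more
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
print(f"Pooled effect size: {pooled:.2f}")  # 0.39 for these numbers
```

Real meta-analyses add refinements (such as random-effects models that account for differences between studies), but the basic move is the same: combine many samples into one much larger, more varied one.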

Let’s go back to our homework example for a moment. Overall, John Hattie found that homework has an effect size of .43 on student achievement, or a 17 percentile point gain over students who don’t use the strategy of homework. When you dig into Hattie’s research, however, you find that for elementary students it’s only a .15 effect size, or a 6 percentile point gain, which some might not consider worth the time and effort. For middle school it’s a .31 effect size, or a 12 percentile point gain, which is better; and for high school it’s a .64 effect size, or a 24 percentile point gain. So, it’s not really that homework is effective with all students… it’s that homework is effective with a certain type of student.
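In case you’re wondering how an effect size becomes a “percentile point gain”: under the usual assumption of normally distributed scores, an effect size d places the average student in the treatment group at the normal-curve percentile of d within the comparison group. This quick sketch (ours, using only Python’s standard library) reproduces the figures above:

```python
# Converting an effect size d into a percentile-point gain for the average
# student, via the standard normal CDF. Reproduces the figures quoted above.
from statistics import NormalDist

def percentile_gain(d: float) -> float:
    """Percentile-point gain of the average treated student over the comparison group."""
    return (NormalDist().cdf(d) - 0.50) * 100

for label, d in [("overall", 0.43), ("elementary", 0.15),
                 ("middle", 0.31), ("high school", 0.64)]:
    print(f"{label:>11}: d = {d:.2f} -> +{percentile_gain(d):.0f} percentile points")
```

This prints gains of 17, 6, 12, and 24 percentile points, matching the numbers in the paragraph above.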

This is one reason why many of the forms available in eObservations use strategies based on the work of John Hattie: thanks to the meta-analyses behind them, we can be assured that the n-count for any given strategy is large and that a variety of sample populations is included. Moreover, eObservations has chosen the strategies that show strong generalizability across all grade levels and content areas, and that therefore have strong potential to positively impact student achievement when implemented with fidelity.

Interested in this topic? Read more here:

Cook, T.D. & Campbell, D.T. (1979). Quasi-Experimentation: Design & Analysis Issues for Field Settings. Chicago: Rand McNally College Publishing Company.

Hattie, J. (2009). Visible Learning: A Synthesis of Over 800 Meta-Analyses Relating to Achievement. New York: Routledge.

Marzano, R.J. & Pickering, D.J. (2007). The case for and against homework. Educational Leadership, 64(6). Retrieved from http://www.ascd.org/publications/educational-leadership/mar07/vol64/num06/The-Case-For-and-Against-Homework.aspx


By Kate and James Maxlow
