Without Flipped Learning Standards, All We Can Do Is Agree to Disagree

Higher Ed / May 11, 2018

– by Thomas Mennella –

 

When I was younger, I had an acute fear of flying. It seemed every month there was horrible news on TV about another tragic plane crash. I wondered why anyone would engage in such reckless and dangerous behavior. But then, as I became more informed, I realized that every single day thousands of planes, and hundreds of thousands of people, safely took off and landed all across the world. That struck me as unfair. If the news was going to cover the crashes so dramatically when they occurred, wouldn’t it be right to at least announce each day, at the end of the broadcast, how many planes flew safely without incident? Why do the failures make big news, while the successes are seen as boring?

Each month, dozens of papers are published in academic education journals showing, time and again, that Flipped Learning (FL) is superior to traditional instruction. Most of these publications are seen by few and reported on by none. But in the rare case where a published study claims to refute this prevailing consensus and shows that Flipped Learning does not work, that paper is promoted in educational digests, in newsletters, and on social media. Not fair.

Such was the case earlier this month when the digest of a paper claiming that Flipped Learning did not work in a graduate-level epidemiology course landed in my email inbox (Shiau et al., 2018). Here are the headline and synopsis that were sent out by Campus Technology, an online higher education magazine:

Study Finds Flipped Classroom Model Does Not Improve Grades in Health Science Course: A study at Columbia University’s Mailman School of Public Health found that in a health science course following the flipped classroom model, there was no statistically significant difference in test scores or students’ assessments of their course, compared to a traditional lecture course.

It’s not always wise to let newsletters and digests pre-chew your thinking for you. Sometimes, it’s best to go to the source and decide for yourself. That’s exactly what I did, and what I found surprised me.

In this study, two sections of a master’s-level introductory course in epidemiology, offered in consecutive years, were compared. The first year was taught traditionally and the second year was flipped. Indeed, there were no statistically significant gains in exam performance between the two years, but that’s only part of the story.

First, the authors do not explicitly describe what students did in the group space. FL 3.0 has found, and emphasizes, that the group space work students do in the classroom is what makes the difference in FL. Two instructors can craft the exact same videos and follow the same FL formula, but the instructor with better, more dynamic, and more effective group space activities will see better student outcomes every time. Without knowing what the authors did with their students in the group space, we cannot judge for ourselves how effective FL should have been in this context.

Second, the traditional offering of the course was not so traditional. The authors state that, as part of the traditional course, students worked on a semester-long group project, partly completed during class time. This sounds a great deal like project-based learning (PBL), one of the “apps” recommended for use with the FL operating system. One major benefit of FL is that it frees group space time for strategies like PBL. If the authors of this study were doing PBL already, then we would expect the adoption of FL to be less impactful. PBL is not traditional instruction.

Third, the authors tell us that the composition of the class changed dramatically between the two years of the study. More students from biostatistics programs enrolled in the flipped section, and fewer came from other departments. What’s more, no effort was made to grade-match students or normalize student performance for GPA. It is possible that the traditional section simply enrolled stronger students and therefore performed at a higher level than the flipped cohort. In other words, it remains possible that the students in the flipped course did better than they would have in a traditional course. Said simply, there is no valid control or comparison point in this study. As the authors themselves state, “students from different departments often have different skill sets, learning styles, and academic goals.”

Fourth, while it’s true that student performance did not improve in the flipped cohort, the means on the midterm and final in the traditional group were 91.4 and 90.5, respectively. Those are incredibly high exam averages, and they leave little room for improvement (a classic ceiling effect). Among the flipped students, those same averages were 93.4 and 91.2. Imagine if you had developed a fuel that allowed cars to drive much faster and you gave me that fuel to test. If I put that fuel in a race car that typically hit 140 mph and reported back that, no, your fuel had no effect, you’d rightfully feel cheated. Of course your fuel was unlikely to make a race car go faster; race cars already go really, really fast. You’d want your fuel tested on a car that struggled to hit 80 mph; that’s where the real effects could be seen. FL has been shown to most benefit students who struggle in traditional academic environments. Few, if any, such students appear to be enrolled in this course; its students are largely high achievers. That leaves little room for improvement, whether through FL or any other pedagogical change.

And, finally, in their evaluations, students in the flipped group did report enjoying, and seeing the benefits of, components that FL enabled, such as having videos to return to and completing the just-in-time assignments associated with those videos. So even if FL truly had no effect on student learning in this course, aren’t happier and more satisfied students a good thing, too?

I believe that Shiau et al. conducted the best study they could given the limitations they faced, and I think they interpreted their results as fairly as possible. But neither of those considerations changes the fact that two different student groups were compared to one another (a critical experimental confound) and that the outcome being measured (exam performance) was already near its maximum before the FL intervention, leaving little to no room for improvement.

It’s OK to report studies that refute the effectiveness of FL, but let’s hold editors to a high standard of assessing the rigor of those studies before promoting them. And if we’re going to make news with such studies, let’s give equal time and billing to all of the studies that support the effectiveness of FL. Many of those studies have been performed to exceptional experimental standards, and they tell a really interesting story: FL works.

Reference

Shiau, S., Kahn, L. G., Platt, J., Li, C., Guzman, J. T., Kornhauser, Z. G., … & Martins, S. S. (2018). Evaluation of a flipped classroom approach to learning introductory epidemiology. BMC Medical Education, 18(1), 63.






Dr. Thomas Mennella
I have been an instructor in higher education for over ten years. Starting as a lecturer at the University of Massachusetts Amherst, and then moving on to an Assistant Professorship at Delaware State University (DSU), a small public university, I experimented with Problem-Based Learning (PBL) and was an early adopter of the iClicker student response system. Now an Associate Professor at Bay Path University, a private liberal arts institution in western Massachusetts, I primarily teach Genetics and Cell and Molecular Biology. I am Flipped Learning 3.0 Level-II Certified and a founding member of the FLGI International Faculty.



