Are You Asking the Right Questions About Flipped Learning?

April 1, 2018

– Thomas Mennella –

 

My graduate advisor gave me more pieces of good advice than I could ever count. But one of the first and best that he passed on to me was this: “If ever you need an idea but can’t come up with one — if ever you need to get better but can’t figure out how — go to the research. Read a few papers, flip through a few journals, and you’ll be inspired.” He was right, as usual, and this advice has served me well for well over a decade. Why reinvent the wheel when there is a paper published out there describing how to build the best wheel possible? But, there is a catch. There is good research and not-so-good research. Looking for inspiration from the former can lead you to greatness, while using the latter will often only send you down blind alleys. Here, we examine the evolution of research into Flipped Learning, take the pulse of where it stands now, and discuss how to recognize the best current research so that it can be adopted and leveraged for greatness.

Good research begins with the word “does.” But, great research advances with the word “why.” When fields fully mature, the research into them focuses on the most powerful word: “how.”

Does it rain only when the clouds are full of water? This single question likely launched the first rudimentary scientific inquiries into rain. Before that, different cultures believed in bringing the rains through dance, sacrifice, or prayer. But, asking “does” led to the first breakthroughs of knowledge in the field of rain, as it has in every field.

However, all of this early knowledge would have stalled had the research not evolved into “why.” Why does it rain only when the clouds are full of water? This research question, and the answers it led to, undoubtedly drove early scientists to begin observing the details of rain in an effort to understand the water cycle.

Then, a full understanding of rain culminated with the word “how”: How does it rain only when the clouds are full of water? This final question led directly to an understanding of evaporation, saturation, condensation, precipitation, and much more. Research begins with “does,” refines with “why,” and evolves with “how.”

As Flipped Learning gained traction in K-12 and higher education in the late 2000s, research into its effectiveness began as all good research does: with the question, “Does Flipped Learning work?” Many early studies were well-controlled comparisons between identical, parallel sections of flipped and traditional classes, and most of them demonstrated that, yes, Flipped Learning was superior to lecture-based instruction (especially when it was implemented correctly by well-trained practitioners). Such studies, which explored the efficacy of the model where students watch videos at home and do “homework” in class, can be classified as Flipped Learning 1.0 (FL1.0) research. These studies were needed at the time to lay the groundwork for Flipped Learning research, but such work is now far less useful. As a general rule, when looking for ideas and innovations, FL1.0 research is the least helpful. For an informative illustration of the evolution of Flipped Learning and the differences between FL1.0, FL2.0, and FL3.0, please see the infographic here.

But, like all good research, studies into Flipped Learning evolved, and some began to ask, “Why does Flipped Learning work?” These studies dug deeper into student satisfaction and perceptions in a flipped setting. They looked at unexpected outcomes in flipped classes and at student gains in non-academic areas, such as responsibility and self-confidence. This research represents the evolution towards FL2.0. FL2.0 research is far more useful than FL1.0. Here, you may find ideas for group space activities, student engagement, and more. These ideas may not reflect current best practices, and some will be dated. But, if you are an FL1.0 instructor, adopting the ideas shared in this research could still improve your overall instruction dramatically.

Then, of course, this research continued to evolve, as all good research should, and began to explore innovative and unexpected applications. Studies measured the effectiveness of combining group space strategies, documenting student gains when several strategies were used together, and examined unexpected factors such as classroom layout and furniture. These studies represent the advancement of Flipped Learning research to the 3.0 level. FL3.0 research is a rich treasure trove of ideas just waiting to be adopted. Good FL3.0 research often represents the best, state-of-the-art practices in Flipped Learning.

Moving on to measuring the pulse of Flipped Learning research: one true test of the health of a field is the maturity of its research. In other words, how much research focuses on the “does,” how much measures the “why,” and how much explores the edge of the field’s envelope by asking “how?” With Flipped Learning now a movement well into its second decade, we would expect most of the current research to focus on 2.0 and 3.0 questions. Is that the case?

A simple query in Google Scholar (using the term “Flipped Learning”) helped to address this question. The results were restricted to journal publications from 2018, representing approximately eleven weeks of publications. Of the more than 100 articles returned by this search, only papers in English were reviewed (because this author is monolingual), and the review consisted of reading each paper’s abstract and/or perusing its content. Fifty-seven papers met these criteria and were considered. Papers that focused on “does” (i.e., does Flipped Learning work in a particular setting?) were classified as FL1.0 studies. Those that measured the “why” (i.e., why does Flipped Learning work?) by noting unforeseen outcomes of Flipped Learning, exploring novel applications of the approach, or coupling Flipped Learning with other strategies or assessments were classified as FL2.0 studies. Finally, papers that asked “how” (i.e., how does Flipped Learning work?), pushing the envelope of Flipped Learning (with examples provided below), fell into the 3.0 category.

This meta-analysis is admittedly somewhat subjective. However, given the fairly robust sample size (57 papers reviewed), the occasional misclassification is unlikely to change the overall picture. According to the criteria established above, 53% of the papers published in the first two and a half months of 2018 fit into the 1.0 category, 30% related to FL2.0, and only 17% were representative of FL3.0.
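For readers who want to run a similar tally on their own search results, here is a minimal sketch in Python. The per-category counts in it are my own assumptions, inferred only roughly from the percentages reported above (about 30, 17, and 10 of the 57 papers); the script simply aggregates one classification label per reviewed paper and reports each category’s share.

```python
from collections import Counter

# Hypothetical classification labels, one per reviewed paper.
# The counts (30 / 17 / 10) are assumptions inferred from the reported
# 53% / 30% / 17% split of 57 papers, not exact figures from the article.
classifications = ["FL1.0"] * 30 + ["FL2.0"] * 17 + ["FL3.0"] * 10

tally = Counter(classifications)
total = len(classifications)

for level in ("FL1.0", "FL2.0", "FL3.0"):
    count = tally[level]
    print(f"{level}: {count} papers ({100 * count / total:.1f}%)")
```

The arithmetic is trivial; the real work is the classification step. The same tally could just as easily be driven from a spreadsheet in which each paper is tagged 1.0, 2.0, or 3.0 as its abstract is read.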

One driving factor behind this apparent preference for FL1.0 research may be its ease and convenience. All instructors, whether from K-12 or higher ed, receive accolades and appreciation for published work. The publication of a manuscript in a peer-reviewed journal is the gold standard for scholarly excellence. Imagine, returning to the example above, that I could get a paper published for answering the question, “Does it rain when the clouds are full of water in western Massachusetts?” and you could get a paper published for answering, “Does it rain when the clouds are full of water in your hometown?” There would be an incentive for each of us to publish papers, and receive accolades and appreciation, for answering this question for thousands of individual locations. But essentially we would all be answering the same single question thousands of times, and doing so is not good research. It stalls advancement, and it halts the evolution of our field. The clouds rain when they are full of water, period, and we need not prove it any longer. Flipped Learning works, period. Yet, we continue to publish niche FL1.0 studies proving what has already been proven. And, we do so at the peril of stalling the advancement of this wonderful pedagogy. We must avoid the unconscious preference for conducting FL1.0 studies and instead be inspired by excellent FL3.0 studies, such as:

  • leveraging Flipped Learning within a school-wide framework of educational technology implementation [1];
  • the impacts of institution-wide adoption of Flipped Learning [2];
  • comparisons between augmented Flipped Learning and standard Flipped Learning [3]; or
  • the interplay between game-based learning and Flipped Learning (from late 2017) [4].

By looking for, leveraging, and adopting FL3.0 publications, you can reach a level of exceptional teaching that perhaps you never even thought was possible. When exploring the Flipped Learning body of research looking for inspiration, ask yourself: is this paper asking “does,” “why,” or “how?” Spend your time on the latter two; spend your time with FL2.0 and FL3.0 research. Yes, it rains when the clouds are full of water, and we even understand why. And, yes, Flipped Learning works, and we understand why that is, as well. It is now time to ask the next questions. It’s time to push our limits of knowledge and innovation. It’s time to evolve to the next level.

[1] Lo, C. K. Grounding the flipped classroom approach in the foundations of educational technology. Educational Technology Research and Development, 1-19.

[2] Lee, M. K. Flipped classroom as an alternative future class model?: Implications of South Korea’s social experiment. Educational Technology Research and Development, 1-21.

[3] Wang, J., Jou, M., Lv, Y., & Huang, C. C. (2018). An investigation on teaching performances of model-based flipping classroom for physics supported by modern teaching technologies. Computers in Human Behavior, 84, 36-48.

[4] Hattingh, M. J., & Eybers, S. (2017, September). Towards understanding how game-based learning can enhance flipped learning. In International Symposium on Emerging Technologies for Education (pp. 106-115). Springer, Cham.

 






Dr. Thomas Mennella
I have been an instructor in higher education for over ten years. Starting as a lecturer at the University of Massachusetts – Amherst, and then moving on to an Assistant Professorship at Delaware State University (DSU), a small public university, I experimented with Problem-Based Learning (PBL) and was an early adopter of the iClicker student response system. Now an Associate Professor at Bay Path University, a private liberal arts institution in western Massachusetts, I primarily teach Genetics, Cell and Molecular Biology. I am Flipped Learning 3.0 Level-II Certified and a founding member of the FLGI International Faculty.









2 Comments

Jon Harper on April 12, 2018

Thomas, I really enjoyed the way that you approached the distinctions via the different levels of questions, and your analogies were spot on. I think this is something that we must continue to push for, with our students and with ourselves: asking better questions. You did a great job interweaving stories and data. Well done!

    Thomas Mennella on April 14, 2018

    Thank you, Jon. Yes, asking better questions will lead to better answers, and better answers will evolve us to FL4.0 (whatever that might be). So exciting! Thanks for the feedback.


