"Learning Management Systems' Analytics Model Is Inadequate"

Guest Post: Prof. Lorena A. Barba, Ph.D. | 07.08.2015

This post originally ran on PyData on July 8, 2015

Data-driven Education and the Quantified Student


By Lorena Barba

Education has seen the rise of a new trend in the last few years: Learning Analytics. This talk will weave, at a high level, through the complex, interacting issues and concerns surrounding learning analytics. The goal is to whet the appetite and to motivate reflection on how data scientists can work with educators and learning scientists in this swelling field.

Higher education has long used analytics to guide administrative decisions. Universities are already adept at developing data-driven admissions strategies, and increasingly they are using analytics in fundraising. Learning analytics is a newer trend. Its core goal is to improve teaching, learning, and student success through data. This is very appealing, but it is also fraught with complex interactions among many concerns and with disciplinary gaps between the various players.

Faculty have always collected data on students' performance on assessments and their responses on surveys, mainly for grading and for complying with accreditation, sometimes also for improving teaching methods, and more rarely for research on how students learn. To call it Learning Analytics, though, requires scale and some form of systemic effort.

Some early university efforts in analytics developed predictive models to identify at-risk first-year students, aiming to improve freshman retention (e.g., Purdue’s “Signals” project). Others built alert systems in support of student advising, with the goal of increasing graduation rates (e.g., Arizona State University’s “eAdvisor” system). Experts now set these efforts apart from learning analytics proper, because retention and graduation are not the same as learning. The goal in that case is to improve the functioning of the educational system, whereas learning analytics should be guided by educational research and aimed at enhancing learning.
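To make this concrete, here is a minimal sketch of what such an early-warning retention model might look like in Python. This is a hypothetical illustration, not the actual Signals or eAdvisor system; the file name, feature columns, and risk threshold are all invented for the example.

```python
# Hypothetical sketch of an early-warning retention model.
# The file name, feature columns, and 0.7 risk threshold are
# invented for illustration; this is not how Signals or eAdvisor
# actually work.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed schema: one row per first-year student, with engagement
# features and a binary label for whether they returned for year two.
df = pd.read_csv("first_year_students.csv")
features = ["logins_per_week", "assignments_submitted", "midterm_gpa"]
X, y = df[features], df["retained"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LogisticRegression()
model.fit(X_train, y_train)

# Flag students with a low predicted probability of being retained,
# so advisers can reach out before they drop out.
risk_score = 1 - model.predict_proba(X_test)[:, 1]
flagged = X_test.assign(risk=risk_score).query("risk > 0.7")
print(flagged.sort_values("risk", ascending=False).head())
```

Note that everything a model like this sees is a proxy for behavior, not for learning, which is part of why such systems sit outside learning analytics proper.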

To elucidate what learning analytics is, it looks like we first need to answer: what is learning? What is knowledge? And can more data lead to better learning? That last question is perhaps the zeroth assumption of learning analytics, and it needs to be tested. There are assumptions behind any data system, going as far back as selecting what to track, where it will be tracked, and how it will be collected, stored, and delivered.

Most learning analytics is based on log data from the Learning Management System (LMS). This “learning in a box” model is inadequate, but the diverse ecosystem of apps and services used by faculty and students poses a huge interoperability problem. The billion-dollar education industry of LMS platforms, textbook publishers, and testing companies all want a piece of the prospect of “changing education” through analytics. They are all marketing their dazzling dashboards in a worrying wave of ed-tech solutionism. Meanwhile, students’ every move gets tracked and logged, often without their knowledge or consent, adding the ethical and legal issues of privacy for the quantified student.
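For a sense of what “log data in the LMS” amounts to in practice, here is a hedged sketch of the kind of clickstream aggregation that sits behind most dashboards. The JSON-lines event schema and file name are assumptions for illustration; real LMS exports (Moodle logs, edX tracking logs, and the like) differ in their details.

```python
# Hypothetical sketch of LMS clickstream aggregation. The event
# schema and file name below are assumptions for illustration;
# real LMS exports differ in detail.
import json
from collections import Counter
from datetime import datetime

def weekly_activity(log_path):
    """Count events per (student, ISO week) in a JSON-lines log file."""
    counts = Counter()
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            ts = datetime.fromisoformat(event["timestamp"])
            counts[(event["student_id"], ts.isocalendar()[1])] += 1
    return counts

# Each log line is assumed to look like:
# {"student_id": "s042", "event": "page_view",
#  "timestamp": "2015-03-02T14:05:00"}
if __name__ == "__main__":
    for (student, week), n in sorted(weekly_activity("lms_events.jsonl").items()):
        print(f"student {student}, week {week}: {n} events")
```

Weekly event counts like these are exactly the kind of proxy that dashboards present as “engagement,” which illustrates why the LMS log model is such a thin stand-in for learning.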


Lorena A. Barba is an Associate Professor of Engineering and Applied Science at The George Washington University (GW) and holds a Ph.D. in Aeronautics from Caltech. She is also in charge of GW’s Engineering Open edX platform and of the Open edX Universities First Symposium, to be held on November 11, 2015, in Washington, DC. In addition, she is a member of the Board of Directors of NumFOCUS and a top advisor to IBL.