"Universities Must Stop Outsourcing AI and Start Building It," Says George Siemens
December 29, 2025

IBL News | New York
Universities must stop outsourcing AI and start building it.
This is the thesis of George Siemens, a practical futurist and one of the most influential thinkers in digital education, who coined the theory of connectivism and helped launch the first MOOCs.
• "Universities aren't just slow, they're outsourcing their future. Institutions must stop renting tools that don't reflect their values and start building AI infrastructure as a public good," he said.
• "We need to treat AI infrastructure as core to our mission, not as something secondary."
• "Building AI-native infrastructure should be treated like building classrooms. It is critical public infrastructure."
As George Siemens, who now leads Matter and Space at SNHU (an initiative focused on the future of human development in an AI-saturated world), puts it:
• "Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don't know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively."
In a Q&A with Allison Dulin Salisbury on Substack.com, George Siemens argued that higher education has been asking the wrong questions and stated that "the most important learning outcomes aren't being measured."
• "The most stable and essential domain of learning will be human wellness. Well-being is not a support system nor an extracurricular. It's the core."
• "In an AI-saturated world, we return to foundational practices: mindfulness, community, spiritual life, relationships, sleep, and diet. These are not new ideas, but they become newly urgent."
• "Centering wellness must be part of the learning journey."
• "Assessment is a particularly strong use case for AI."
• "The real shift is toward performance-based assessment that reflects how knowledge is applied in real contexts. With AI, we can create interactive environments where students analyze complex situations, construct arguments, build projects, and revise their work through dialogue. These forms of assessment allow us to capture the process of learning, not just the final product."
• "One promising direction is the use of AI agents to support Socratic-style learning. The student engages in conversation with an AI that prompts deeper thinking, challenges assumptions, or simulates a real-world scenario. The interaction itself becomes a kind of assessment. Humans still play a central role in evaluating quality, but AI supports the process by making it more dynamic, personalized, and scalable."
• "AI can assess effectively when guided by well-defined rubrics, especially for formative feedback. But high-stakes decisions should remain in human hands."
• "But AI lets us avoid the hardcoding behind these systems, making them more dynamic, personalized, and scalable."
• "The LMS helps decide needed interventions or strategies, helping us scale support. Learning designers must understand that an AI will engage with students, content, and curriculum, and present the right information to that AI to personalize student experiences."
• "It feels like the real crux of the problem tonight is this: what will we need to know in the future, and how will we come to know it?"
• "Today, universities are fairly clear on what they want students to know within specific subject domains. And we have a reasonably stable answer to how students come to know it, largely through a faculty-centric, lecture-based model. That clarity is dissolving."
• "We may be only a few years away from much of what is taught in higher education becoming obsolete, and as a sector we mostly just… carry on. The uncertainty is enormous. The ambiguity is uncomfortable. And yet the scale of the risk and the scale of our response feel wildly misaligned."
• "There's something almost Becker-esque about this. In The Denial of Death, Becker argues that some realities are so overwhelming that we develop elaborate ways of not looking directly at them. I worry that the future of knowledge, skills, and learning institutions has become one of those topics. It's too big, too ominous, too destabilizing to stare at head-on."
• "Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don't know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively. And the fact that this isn't front and center in higher education conversations baffles me. And honestly, it worries me."
• "The anti-AI movement is on track to be one of the most powerful social movements we've seen in our lifetimes. AI is a brilliant scapegoat for all sorts of issues because it will shape so many parts of life: you lose your job, have a dehumanizing customer service experience, see expenses rise, and you blame AI. It feels like it has agency, making it easy to hate."
• "AI can restructure the human experience to allow us greater connection to ourselves, our values, causes we want to participate in, and nature. It could let us feel the joy of being human more often, rather than laboring as machine extensions. This vision might feel naive, but it's not impossible. It's very possible if we design for it."
IBL News is funded by the New York-based, family-owned company ibl.ai. Our stories adhere to the highest ethical standards in journalism and are available to news syndication agencies.