"Universities Must Stop Outsourcing AI and Start Building It," Says George Siemens
Universities must stop outsourcing AI and start building it.
This is the thesis of George Siemens, a self-described practical futurist and one of the most influential thinkers in digital education, who developed the theory of connectivism and helped launch the first MOOCs.
• “Universities aren’t just slow, they’re outsourcing their future. Institutions must stop renting tools that don’t reflect their values and start building AI infrastructure as a public good,” he said.
• “We need to treat AI infrastructure as core to our mission, not as something secondary.”
• “Building AI-native infrastructure should be treated like building classrooms. It is critical public infrastructure.”
As George Siemens—who now leads Matter and Space at SNHU, an initiative focused on the future of human development in an AI-saturated world—puts it:
• “Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don’t know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively.”
In a Substack Q&A with Allison Dulin Salisbury, George Siemens argued that higher education has been asking the wrong questions, and that “the most important learning outcomes aren’t being measured.”
• “The most stable and essential domain of learning will be human wellness. Well-being is not a support system nor an extracurricular. It’s the core.”
• “In an AI-saturated world, we return to foundational practices—mindfulness, community, spiritual life, relationships, sleep, and diet. These are not new ideas, but they become newly urgent.”
• “Centering wellness must be part of the learning journey.”
• “Assessment is a particularly strong use case for AI.”
• “The real shift is toward performance-based assessment that reflects how knowledge is applied in real contexts. With AI, we can create interactive environments where students analyze complex situations, construct arguments, build projects, and revise their work through dialogue. These forms of assessment allow us to capture the process of learning, not just the final product.”
• “One promising direction is the use of AI agents to support Socratic-style learning. The student engages in conversation with an AI that prompts deeper thinking, challenges assumptions, or simulates a real-world scenario. The interaction itself becomes a kind of assessment. Humans still play a central role in evaluating quality, but AI supports the process by making it more dynamic, personalized, and scalable.”
• “AI can assess effectively when guided by well-defined rubrics, especially for formative feedback. But high-stakes decisions should remain in human hands.”
• “But AI lets us avoid the hardcoding behind these systems, making them more dynamic, personalized, and scalable.”
• “The LMS helps decide needed interventions or strategies – helping us scale support. Learning designers must understand that an AI will engage with students, content, and curriculum, and present the right information to that AI to personalize student experiences.”
• “It feels like the real crux of the problem tonight is what we will need to know in the future, and how we will come to know it?”
• “Today, universities are fairly clear on what they want students to know within specific subject domains. And we have a reasonably stable answer to how students come to know it, largely through a faculty-centric, lecture-based model. That clarity is dissolving.”
• “We may be only a few years away from much of what is taught in higher education becoming obsolete, and as a sector we mostly just… carry on. The uncertainty is enormous. The ambiguity is uncomfortable. And yet the scale of the risk and the scale of our response feel wildly misaligned.”
• “There’s something almost Becker-esque about this. In The Denial of Death, Becker argues that some realities are so overwhelming that we develop elaborate ways of not looking directly at them. I worry that the future of knowledge, skills, and learning institutions has become one of those topics. It’s too big, too ominous, too destabilizing to stare at head-on.”
• “Almost no one is asking, in a sustained and serious way, what it means to design learning for a future where we don’t know what knowledge will remain stable, what skills will retain value, or how humans will continue to matter cognitively. And the fact that this isn’t front and center in higher education conversations baffles me. And honestly, it worries me.”
• “The anti-AI movement is on track to be one of the most powerful social movements we’ve seen in our lifetimes. AI is a brilliant scapegoat for all sorts of issues because it will shape so many parts of life: you lose your job, have a dehumanizing customer service experience, see expenses rise – you blame AI. It feels like it has agency, making it easy to hate.”
• “AI can restructure the human experience to allow us greater connection to ourselves, our values, causes we want to participate in, and nature. It could let us feel the joy of being human more often, rather than laboring as machine extensions. This vision might feel naive, but it’s not impossible. It’s very possible if we design for it.”