
Universities must steer AI in the public interest

University Affairs CA
In my first month as a university provost, a colleague said something that has stayed with me: artificial intelligence isn’t a technology story, it’s a humanity story. Since that day, the distinction has only grown more powerful. It reframes AI not as a tool to be optimized, but as a force that will shape how we think, work, learn and understand one another. And it points to a consequence we can no longer avoid: universities must play a central role in how society responds.

At Toronto Metropolitan University (TMU), as at many institutions, we’ve been approaching AI in familiar ways: through strategies, task forces, committees, conferences and policy frameworks designed to put guardrails around its use. Such efforts are necessary, but they are not sufficient to meet the deepest challenge higher education has ever faced: the scale of the ethical, social and human implications of AI’s large language models.

AI is fast-moving, opaque and full of unknowns

In the past, universities have navigated industrial revolutions, cultural upheavals, new media, shifting social norms and evolving government priorities. But AI is fast-moving, opaque and filled with unknowns: it is disruption in the dark. Its impacts are unfolding faster than our collective ability to understand them, let alone govern them. This is a challenge for all institutions, which is why I am calling on Canadian universities to exercise collective leadership and work together on AI in the public interest.

What makes universities so necessary in this moment is their diversity of perspectives. Engineers, historians, sociologists, philosophers, psychologists, artists, scientists, legal scholars and technologists all bring essential lenses, questions and insights to the AI conversation. Together, they can illuminate connections and consequences that no single discipline, and no single corporation, can see on its own. This work goes far beyond questions of pedagogy or academic integrity.
It is about safeguarding human agency, dignity and judgment in an age of automation.

Protecting integrity and equity

At TMU, we’ve started to meet this responsibility by insisting on a human, ethically grounded approach to AI’s use and study. This means prioritizing AI literacy so that students engage with these technologies critically and responsibly. It means establishing clear avenues for equitable access, ensuring that AI tools are available and understandable to all students, not only those with the means to seek them out. It means designing assessments and learning experiences that resist the shortcuts AI can provide while leveraging what it can offer academia: helping students tackle complex problems, develop into expert learners, and connect their education to the realities of the world into which they will graduate. And it means holding ourselves accountable to lead with principles that protect the integrity of learning and keep human relationships at the centre of education.

Inside the academic environment, we cannot, of course, adopt AI in ways that erode integrity, reinforce bias or substitute automated outputs for genuine intellectual engagement. We must ask not just what AI can do, but what it should do in this context, for these learners, toward these ends. These are questions of ethics, and they are being considered across most facets of society. It is up to universities to lead with answers.

Tech corporations are ill-suited to ethical deliberation

What makes this moment especially urgent is that the creators of AI will not, and cannot, answer the hardest questions it raises. These systems are being built by profit-driven companies whose incentives reward speed, scale and market dominance, not reflection, restraint or ethical deliberation. I want to be careful here: this is not a claim that companies bear no ethical responsibility for the technologies they create. They do.
But the structural incentives of the market make sustained ethical deliberation genuinely difficult from within. That is not a moral indictment; it is simply how markets work. It also means that the responsibility for grappling with AI’s broader consequences cannot rest with industry alone.

It is also worth acknowledging an uncomfortable truth: many of the foundational AI systems now being commercialized were initially developed in university research labs. The question of what responsibility those researchers held, and whether the field moved too quickly without sufficient ethical reflection, is one we must take seriously. It is another reason why the university’s role now is not just to critique AI from the outside, but to model what responsible development and deployment actually look like. Universities can offer a commitment to transparency, and a culture that rewards critique alongside creation. This is the difference between asking what we should build and what we can build.

Move slow and refrain from breaking things

The questions before us are vast and unsettling. Will AI flatten original thought, or redefine creativity altogether? What happens when large language models remix existing, and often contradictory, knowledge at scale? Will future workers be valued primarily as inputs in optimization systems? How do we teach judgment, curiosity and moral reasoning in a world increasingly mediated by algorithms? How do we confront bias when it is embedded in code and data rather than individual intent?

These are not engineering problems. They are human ones. They demand patient analysis, deep debate and insights drawn from across disciplines. Beyond training future workers and transmitting skills, the deeper mission of universities is to study, interpret and critique the forces shaping society. When those forces threaten to outpace our values, it is our civic obligation to respond.
The technology sector is famous for the mantra “move fast and break things.” Universities exist for the opposite reason: to slow down, to question, and to ensure that what we build actually serves society. With AI, the stakes are too high to do otherwise. Keeping humanity in the foreground is not optional; it is the work of our time.

Summary generated from the RSS feed of University Affairs CA. All article rights belong to the original publisher. Click through to read the full piece on www.universityaffairs.ca.