AI, Humans & the Future of Education: an Interview

The most important opportunity AI presents: The opportunity to recognise that we made AI, it didn’t make us. It learned from us, not the other way around. It’s trying to imitate us, to be more like us, because what we’ve got going on is pretty remarkable.

I was recently interviewed for a podcast produced by the learning and community platform Disco.


With your extensive background in learning development and education technology, how do you envision the future of online education evolving in the next year? What about the next five years?

Put simply, the future of online education will require new pedagogies to foster community-driven learning.

Three years after the pandemic upended education, we still find ourselves sifting through its debris. If the sudden pivot to online learning was an earthquake, its aftershocks continue to ripple, reshaping classrooms and institutions alike. The pandemic didn’t just reveal the cracks in our education system; it widened them. The problem wasn’t simply that we weren’t prepared—it was that our foundation was built on the wrong priorities. Efficiency. Compliance. The relentless march toward standardization.

Before the world shifted online, digital education was already an uneasy compromise. It mimicked the physical classroom, trying to replicate lectures and discussions in digital form, all while ignoring the essential differences between learning in a room full of peers and learning in isolation. And when the pivot came, that model wasn’t just scaled—it was forced upon teachers and students alike. Attendance sheets became Zoom logs. Tests became surveilled performances under the cold gaze of proctoring software. Teachers were handed tools designed to enforce order rather than cultivate connection, and students were expected to thrive in systems that felt, at best, indifferent to the reality of their lives.

Today, not a lot has changed. We got out of the pandemic and moved on, and little attention was paid to what went wrong with online learning. It was easier to blame Covid. What is different now, though, is that students—and to an extent, teachers—have learned that learning can be done remotely, in a way that lets them balance the rest of their lives.

I think we’ll see more and more students learning online. And to accommodate this pedagogically, we’re going to need to adapt the way we teach. Because learning is a social activity, we need to find ways to build communities online, and we need to learn to trust community-driven learning—by which I mean an approach to learning that values and fosters collaborative intelligence, peer support, and emergent outcomes that reflect the production of knowledge, and not just its transfer.

We’ll of course need platforms that support that kind of learning and teaching. With the right platforms, and with the right professional development for teachers, I think we can get away from the efficiency model of online learning and move toward something that’s truly productive, engaging, and that grows students’ agency and confidence.

How do you see the role of educators changing as AI becomes more prevalent in online learning platforms?

Let me start off by saying that what’s become more and more important to me in the last year or so is resisting the dominant conversation that insists that AI adoption is or should be compulsory. It’s a cool tool, and I admit that it’s fun to watch it do its thing. But does everyone need to adopt it?

The whole conversation about AI being compulsory is largely fuelled by AI companies and their evangelists, and not by teachers who are actively dealing with this technology’s presence in the classroom. The truth is, I don’t really see the role of educators changing much because of AI. This isn’t unlike when laptops and cellphones showed up in classrooms, or when people found ways to teach using social media.

There are uses for technology. Pedagogical uses. Ways to make technology a platform for human discovery and knowledge production. 

But the other side of that argument is that educational technology is designed with some very particular, and usually rather robotic, ideas about how learning happens; and so when it comes into the classroom, it tends to demand that pedagogies change to fit the machine. What we need are more teachers participating in the production of educational technology.

It’s super important to me that teachers remember that they have agency when it comes to technology—whether that’s social media, the learning management system, or AI. If I hope to see any change in the role of educators as a response to AI, it would be for them to become more confident in their pedagogies, clearer in their relationships with students, and more and more trusting of the humans they teach rather than the algorithms through which that teaching can sometimes be translated.

How do you see the way learners consume education changing in the next year?

Hmm. That word ‘consume’ is pretty problematic, isn’t it? But maybe that’s exactly the word we need to work with. Because AI is definitely making education more consumable.

Jeppe Stricker at Aalborg University in Denmark recently observed on his Substack a number of “silent” problems associated with AI adoption. And I think they’re actually problems related to a consumption model. He said, for example, that students are starting to unconsciously adopt AI speech patterns—something he called intellectual mirroring. We do this with humans all the time, of course—picking up quirks of an accent, for example, when visiting a foreign country or the American South. But picking this up from a non-human influence is tricky. It’s an unintended outcome of learning from AI.

Similarly, Stricker says that students are experiencing the illusion of mastery—thinking they understand something because AI explained it. And we’re also seeing collaborative intelligence decay, where students—and I think this applies in professional settings, too—are abandoning human brainstorming and group work because it’s faster and more agreeable to work with AI. 

See, here’s the thing: humans like authoritative voices. We like it when people seem confident about their truth. AI appears pretty confident. And it’s often more agreeable and less critical than teachers can be. A recent study demonstrated that students in a physics class had better luck with active learning when AI facilitated that learning than when their teacher did. It’s simply easier to get along with AI, ask questions of it, and learn from it.

But what can’t it do? It can’t give students their own confidence. When I was a teacher, my goal was to convince every student that they could be an expert. And they usually needed a fair bit of convincing; but in the course of a semester, they would discover their voices, their own authority and genius, and that made an impact. AI can’t do that for them. So, I hope that students who are currently consuming education will realise that it’s better to contribute than to consume.

What are the key challenges and opportunities you foresee in 2025 with the integration of AI for both the learner and the administrator of online learning?

I think I’ve covered some of the challenges, so let me take a second and focus on opportunities. I wouldn’t want you to think I’m totally pessimistic. 

I always look at technological advances as platforms for us to focus again on what makes us human. I think technology can sometimes feel like a reason to purge all that we used to be in favour of the new thing that’s now in front of us; but I think we sacrifice a much deeper understanding of our humanity when we do that. This grass may be greener, but we grew up on the other grass. It made us strong, agile. But it also fed our imagination and our curiosity—which drove us to the new grass, right?

Maybe that’s the most important opportunity AI presents. The opportunity to recognise that we made AI, it didn’t make us. It learned from us, not the other way around. It’s trying to imitate us, to be more like us, because what we’ve got going on is pretty remarkable. The biggest opportunity in front of us is not to say “oh, this! This amazing thing!” But to say “okay, now what?” Because curiosity and asking questions have driven the evolution of the human mind for millennia, and there’s no reason to stop with AI.

What’s weird, and what’s probably the biggest challenge that learners and administrators will face with AI, is that AI purports to answer our questions, to solve our curiosities. We have to make sure that the way it’s implemented or adopted leaves plenty of room for continuing to ask questions, for continuing to explore, to work together to pose and solve problems. It’s that community-driven learning I was talking about.

AI, more than any other technology before it, looks like a solution—not to the problems we have, but to the questions we have, the arithmetic of human curiosity and imagination. How we put it into our schools or businesses should complicate rather than simplify our engagement with knowledge.

How can learning administrators/facilitators balance the use of AI tools with the need to maintain authentic human connections? 

So here I do want to point out that there’s an underlying assumption in the question: that AI is compulsory. Or at least that AI is going to be a factor in facilitation, is going to supplement or interrupt human connection. I just want to double down on the fact that AI isn’t always there. It doesn’t need to be the devil or angel on our shoulder, advising us on how we engage with our lives, with learning and teaching, or with the world.

That said, when AI tools are present and when we have decided to include them in the learning process, I think we need to give them the appropriate seat in the room. What do I mean by that? I’m sure you’ve heard of the shorthand teaching philosophies “sage on the stage” and “guide on the side.” As inelegantly as these describe the role of the teacher, they cleverly situate the teacher in relationship to students. Is the teacher dispensing wisdom from in front of the room? Or are they hanging out nearby, and letting students drive the learning?

If I were going to situate AI in the classroom in a similar way, I’d probably put it in a student’s pocket. With AI apps, that can be very literal; but I mean it figuratively to some extent. I wouldn’t put AI on the desk; I wouldn’t put it on the side or on the stage, either. Certainly not in any position a teacher might occupy, but not superimposed on the student, either.

Learning has been a social activity since before we had language. We’ve always learned from each other. That shouldn’t change. AI can be a pretty nifty helper, and can get us from A to C when we’re stuck at B. But it shouldn’t stand in for a human being when a human being is available.

What role does AI play in actually supporting those human connections?

I really hate the phrase “hot take,” because it’s such a main character thing to say, but I guess I have a hot take?

I don’t think AI plays a role in supporting human connection. Period.

In fact, I think AI has contributed to the epidemic of loneliness we’re increasingly facing as human beings. We are now more isolated from each other than ever, and maybe I lack imagination on this point, but I don’t see how talking to a non-human facsimile can help me connect to another human being.

We’ve plenty of evidence that algorithms have hurt human community by abbreviating it on social media, by making it more performative there, and by offering us the illusion of connection through dopamine-fuelling likes and follows. Those connections, though, are profoundly tenuous.

What we need are platforms that get out of the way, that emphasise community and connection rather than trying to mimic it through algorithms. That let humans be humans with each other.

Human connection is messy, aggravating, intimidating, complicated, glorious, beautiful, unique, and entirely unpredictable. Who we connect with, how we find colleagues and like-minds, the places where we encounter love and support—these are all too dynamic for AI. AI doesn’t connect with anyone. It can’t. Because the essence of human connection is something non-algorithmic, non-binary, and something we’re still trying to understand. Social media don’t get it right. Dating apps don’t get it right. It’s something that happens between humans.

I’ve always advocated for educators to teach through the screen, not to the screen. We need to look past the platforms that mediate learning and teaching in order to establish connection human-to-human, because everything else is just code.

How has AI been effectively used to personalize learning experiences?

Ah, the perennial question. Always makes me want to ask, when did learning become impersonal?

I absolutely love the idea that learning paths could be so deeply customised as to be personalised. That a student could, potentially with the support of an advanced AI, learn not only at their own pace, but driven by the connections they make, rather than those dictated by the curriculum. Because I’m a proponent of Freire’s idea that students should be the subjects of the education process, and not its objects, the idea of a student being that much in the driver’s seat is pretty appealing.

I don’t think we’re there yet, though. One reason is that technology isn’t generally made by teachers, people who really understand how learning happens—outside of best practices, outside of learning science—that complicated, surprising process that so often just feels like luck. Another reason is that students come to learning with such a diversity of experience, knowledge, ability, identity, access, psychology, spirituality—that, just as no one teaching method can encompass them all, so no AI can either. 

It’s important to remember: AI is us. It’s us diluted, reduced to the trails of knowledge we’ve left online; and so it won’t ever be capable of more than we are.

To be honest, I think the solution to personalised learning is community-driven learning. I know I keep bringing this up, but in a community where knowledge production is shared, every individual has the opportunity to carve their own path through learning, and to contribute their discovery to the community. I don’t think we need technology to do this for us—unless, of course, that technology is a platform that facilitates community-driven learning.

As AI becomes more and more prominent in the online learning space, what ethical considerations should everyone keep in mind when incorporating AI in their curricula?

Yeah, ethical considerations are incredibly important when it comes to AI. There are the concerns about implicit bias, like those that Joy Buolamwini of MIT has been talking about for years. There are significant and mostly under-discussed environmental repercussions. And of course there are concerns among teachers and universities about academic honesty and where the lines are, now that there’s an author in play that can’t claim intellectual property.

But rather than talk about those, I want to go back to Jeppe Stricker here. Not what he says on his blog, but what I see underlying his observations: the role that nurture plays in education. And I’m reminded of the early, unpleasant psychological experiments with rhesus monkeys that were deprived of their mothers. I don’t know if you recall these, but in short, Harry Harlow’s experiment demonstrated that infant rhesus monkeys formed stronger attachments to a soft, cloth-covered surrogate mother that provided comfort than to a wire surrogate that only provided food. The study revealed that emotional security and tactile comfort are more critical to attachment than basic needs like food.

Now let’s say you’re an online student faced with the cold, impersonal design of an LMS, disconnected from a campus community, just one of any number of students your teacher is grading, and just about everything you understand to be “school” is on a screen. Along comes an AI that will encourage you, congratulate you, affirm your ideas, speak in a soothing voice. In essence, you’re faced with a wire monkey dispensing grades—which I see as the technology, not the teacher, to be clear—and on the other side a cloth monkey dispensing knowledge in the most agreeable way possible.

Where are you going to invest your time, your energy, your sense of what school is? Now, maybe this is a good thing. Maybe letting an AI take on the more personal and emotional aspects of learning online could help students stay in school, or feel better about their place in college. On the other hand, is a surrogate intelligence the answer when human intelligence is available?

What friction are you seeing right now in terms of AI adoption in the education space? How can learning administrators overcome this friction?

Most of the friction I see happening is between advocate/evangelists and those who resist AI. I don’t mean to harp on a theme here, but the conversation about compulsory AI is divisive, especially in that it privileges the agency of artificial intelligence over the agency of humans. If you listen to the dialogue, AI is changing the future—or already has—not humans. Some folks feel great about their relationship to AI, feel they’ve found an assistant, a companion, a TA, a tutor; but others don’t feel the same way.

We need to remember that AI has no agency, and that the illusion of its agency is actually the power of tech companies to influence the narratives we read in news outlets. 

I think if administrators want to overcome the friction in their departments, they should sit down without AI, and engage the people in the room. I think they should ask questions about what education is, what it should be, what the future of education looks like to those who are teaching toward it. Get back some of that collaborative intelligence, steal back some of the narrative about the future. Paulo Freire wrote again and again that the future is possibility, and anytime someone tells you it’s already written, they’ve got something to sell.