Hitting Backspace: Critical Digital Pedagogy and AI
AI only works because we have let human life become automatable.
Critical digital pedagogy has never been about digital technology in schools. It has always been about human beings in schools where digital technology inflects learning. It’s about what technology does to pedagogy—not how pedagogy is exercised through technology—and about how pedagogy can, or must, respond.
I believe that every new innovation in educational technology is but another opportunity for us to consider what it means to learn and teach as—and be—human.
Today, everything we do is inflected by digital technology, and all teaching and learning is necessarily digital to one extent or another. But that doesn’t mean that the domain of teaching and learning is digital—teaching and learning remain entirely human endeavours.
This is why, when I work with faculty who teach online, or who are planning to, my first bit of advice is to remind them that neither they nor the students they teach are changed just because there’s a screen between them. We teach through the screen, not to the screen. As I’ve written elsewhere:
Stop thinking about being online. No learning happens online. It all happens in a real place somewhere, where there are hands and fingers, feet and toes, a breathing person with a heartbeat whose eyes blink more slowly when they think hard.
This idea of the real—real places, real hands and fingers, a breathing person—is exactly at the heart of critical digital pedagogy. It's why the journal Hybrid Pedagogy is called that, and why Digital Pedagogy Lab was designed as an in-person event. Because this work has always been about humans, not technology.
Technology—whether that means new technologies, the presence of technologies, the engine of private data upon which technology runs, or access to the technology of the day—can change or influence the circumstances under which students endeavour to learn. From the phone in their pocket to their access to broadband and wifi, to generative AI, the ways in which students learn can change as quickly as tech companies introduce new platforms and features.
It’s actually been a very long time since teachers had unique control over the way students study and learn; that control is now shared, and in some cases dominated, by technology companies. Learning has become revenue; knowing and understanding have become profitable. Teachers can either work to influence how tech companies develop their products, or they can become as informed as possible about how those products change students’ circumstances, and teach with those circumstances in mind.
This was true at the advent of the LMS, which codified Bloom’s Taxonomy and plotted the course for online teaching for decades; and it is true today, with the dawning of generative AI. GenAI is the latest technology foisted upon education, and our response to it—if that response is founded on critical digital pedagogy—must be to inspect and become informed about the ways that it can, does, should or shouldn’t change the circumstances under which learning will occur… and then to adapt our teaching.
But to be very clear: I am not proposing that we adapt our teaching to generative AI, but rather because of generative AI, with the anticipation that it will change students’ circumstances.
For example, generative AI may exacerbate the digital divide, and the culture of haves and have-nots. Not all platforms are free. Take, for instance, the paid version of ChatGPT, which is both more flexible and more accurate than its free counterpart. But aside from the cost of the latest models, there’s also the divide between those who have access to these platforms and those who do not. A student with access to AI may be able to research and write faster, practise language skills more regularly, and get advice and help that other students cannot.
Generative AI may also change the way creation happens, and the way students think about themselves as authors of their own work. As the debate rages on about whether output from large language models constitutes plagiarism, students are using these platforms to change the way they write, think about writing, and think about ownership. Unless we are prepared to have discussions with them about the importance of their own voices, and unless we change our assignments to reflect the vital contributions students can make to scholarship, we will not be teaching to this new circumstance.
Additionally, generative AI may heighten what’s been called an “epidemic of loneliness” amongst students. The more proficient AI becomes at imitating human response and knowledge, the higher the risk that students will turn away, not just from their teachers, but from each other when they need support studying or learning. More and more, human connection matters; and so teachers need to develop humane pedagogies, pedagogies for and with and toward the human beings they teach. Everyone everywhere is concerned about what the new AI literacies look like, when perhaps we need to be thinking about the human literacies we are increasingly leaving behind.
What bothers me the most about AI is the attention we’re giving it. Yes, it’s a possible culture-changer; yes, there’s no doubt it will affect education. But the truth is, the more attention we give AI, the less attention we’re going to give people. People like you, like me—divided across oceans and cultures and languages and beliefs. But people who, at the end of the day, are sad, feel happy, are afraid but resilient, love their families and friends, ask questions about their identities, seek justice, do harm, and try to make up for the harm they’ve done.
In truth, I don’t care what AI can or can’t do. What I care about, what I’ve always cared about, is what people can do. What we say, what we want to say, what we need to say, and what we learn by saying it—that’s something particularly human.
The biggest threat posed by generative AI in education is the threat of apathy—the threat that we won’t notice how much of our work it can accomplish, and that we will unquestioningly allow it to do that work for us—when what we need to be doing is asking why so much of our work can be done by an algorithm.
AI only works because we have let human life become automatable. Teaching was first conceived of as a routine of automatic tasks with the invention of the first learning management system, PLATO—Programmed Logic for Automatic Teaching Operations; and since that time, we’ve come to think of teaching more and more as a set of tasks—learning outcomes and objectives, assignments that build to those outcomes, assessments that test for those outcomes—and less and less as a craft.
We assign and grade, assign and grade. The learning management system made it possible to record a lecture and play it again and again for years; it made assignments replicable, assessments replicable… and it convinced us that students themselves are simply reiterations of their predecessors who sat in our classes last semester.
Generative AI offers ways to take the burden of these automated teaching tasks off our plates; and since so much of teaching has become automatic, thanks to the technologies that preceded AI, it’s now possible to let AI do the majority of that work. It can grade for us. It can provide feedback for us. It can design lesson plans, assignments, and assessments. Combine this with the desire for sameness and consistency across classes and across semesters, and it’s easy to imagine teaching as prompt engineering, where we interact more with a large language model than we do with students—and where students do the same in response.
The danger here is that generative AI is one more way by which we widen the distance between ourselves and the students we teach. And it can do that because we have allowed teaching to become automatable.
As I said before: I believe that every new innovation in educational technology is just another opportunity for us to consider what it means to learn and teach as—and be—human. What that means now, with generative AI on the scene, is what it has meant all along.
A wise friend recently pointed out that ChatGPT never hits backspace; but for people, hitting backspace is absolutely necessary. We revise as we write, we revise as we teach, we revise as we relate to each other, we revise when we love. We revise. Because in revision we find new understanding, new ways to express ourselves. We are interested in doing better; we are interested in surprising ourselves; we are interested in becoming proud of our accomplishments, and no accomplishments are ever without their revisions.
At the top of every syllabus I ever drafted, I included a quote from Thomas P. Kasulis that read: “A class is a process, an independent organism with its own goals and dynamics. It is always something more than even the most imaginative lesson plan can predict.” What that always meant to me is that teaching is an act of imagination, of spontaneity, of listening, of tangents and emergent possibilities.
I believe this is, in part, what bell hooks was saying when she wrote,
To teach in a manner that respects and cares for the souls of our students is essential if we are to provide the necessary conditions where learning can most deeply and intimately begin. (Teaching to Transgress, 14)
Teaching, in other words, isn’t actually automatable. It is, instead, a mechanism for transformation. There is no data set large enough to give us back possibility. There is no lesson plan, no lecture, no rubric that can capture what a student might do, only what we think they should do. AI cannot give us back the human heart, human passion, human imagination, no matter how well engineered our prompt may be.
As Maxine Greene wrote, “our transformative pedagogies must relate both to existing conditions and to something we are trying to bring into being.” Teaching confronts “the human condition itself … the experiences of absurdity we live through when our deepest existential questions are met with blank silences” (Releasing the Imagination, 51). We teachers aren’t meant to fill in those blank silences, but rather to equip students with the capacities, skills, and imaginations they need in order to do that for themselves.
This is why a class is a process, and why learning isn’t replicable, and why we need, always, to hit backspace. Our humanity depends upon the backspace. I have to make mistakes in order to do the right thing. You do too.
And that’s another human peculiarity: the desire to do the right thing. To benefit others. Generative AI isn’t interested in benefiting others, in large part because it has no interests at all. It isn’t sentient; it’s not intelligent. It’s a clever-clever dictionary, thesaurus, and encyclopaedia wrapped up in the semblance of a thing that can take action, that can intervene in history—the semblance of a human mind. But it’s only an automaton, without agency, bearing no consequence for its actions, unable to learn without being told to learn.
It does not experience failure, or grief, or loss; it does not worry about wars, about famine, about climate collapse; it doesn’t long for anything, and it doesn’t love anything. And perhaps most importantly, it doesn’t question anything, and therefore it exercises no judgement.
Failure, grief, loss, worry, longing, love… all of these are at the root of human learning, of human ideas. Ironically, or perhaps appropriately, they were also at the root of the creation of AI: an engineer’s longing, a wish, a creative spark—all things it cannot return to its creators.