What GenAI May Take Away

A great deal is being said about the transformative power of generative AI. Digital tools like ChatGPT, Bard, Claude.ai, Otter.ai, and a host of chatbots and AI supplements to existing platforms all promise to change the way we work, learn, enjoy media, create art, and write. New "literacies" have arrived on the scene, foremost amongst them prompt engineering (i.e., how to tell the AI what to do and get the desired result); but AI literacy is also "about being able to discern the benefits and challenges of AI while making informed decisions about its use." AI literacy is about the facts of AI, if you will, even as they are clouded by hype and the promises of those selling this latest technology.

What's interesting about generative AI and its industry here at the early stages of the hype cycle is that "literacy" is often confused with "mastery" (a common conflation in education, which might be analogous to the difference between playing the piano by ear and playing from sheet music). Literacy demands a deep understanding of a language, mechanism, technology, endeavour, profession, etc., whereas mastery is an exhibition of a learned set of skills. Literacy does not always lead to using a technology, or subscribing to a set of practices, whereas mastery is pointed directly at adoption. With regard to AI, then, I would argue that understanding prompt engineering is a specific mastery, whereas discerning the benefits, challenges, history, cultural context, and the why and whether of using AI—rather than the how—is literacy.

For example, how does generative AI, developed largely by Western engineers, exhibit particularly Western bias? How will it account, or not account, for human experiences not shared by those who developed it and trained it? Generative AI isn't technically generative: it doesn't create new ideas, but rather floats hypothetical information based upon existing knowledge. We must understand how it's been trained if we're to understand what its assumptions are. How do we detect those assumptions from its output? How do we correct those assumptions?

And more: what does generative AI presume about human creativity? Edward Tian, the founder of GPTZero, pointed out to me once that "AI never hits backspace," and yet hitting backspace is integral to the process of creation. I've said elsewhere that:

For humans, hitting backspace is absolutely necessary. We revise as we write, we revise as we teach, we revise as we relate to each other, we revise when we love. We revise. Because in revision we find new understanding, new ways to express ourselves. 

So, where the generative AI marketplace is full of messages about what the technology can give us, I'm worried about what the technology takes away. If generative AI usurps the backspace—that pause, that caesura in the writing process into which new ideas can fall—what else does it appropriate? And how does the messaging around AI make appropriation sound like a convenience?

Most technology, like genAI, leans into a perceived (and sometimes manufactured) need for efficiency. Taking the backspace out of writing makes it faster, more certain; AI tutors allow students to ask endless questions, engage diverse (simulated) perspectives, and rabbit-hole their way into learning—all on their own time, rather than waiting for office hours. Claims that generative AI will make learning more personalised are based on two assumptions (at least): first, that personal interaction and algorithmic response are, as my grandfather used to say, six of one, half a dozen of the other; and second, that AI-supported efficiency will leave teachers time to build better relationships with students.

To the first point, personalisation should require people. Pati Ruiz and Kip Glazer write that:

AI is not human, and we should not be using human-related terms to refer to these systems and tools because that can lead to misconceptions that cause harm not just to our students but to our communities as well.

Generative AI cannot replace a teacher who can fine-tune lessons and information for students—primarily because when we talk about personalisation, we should be talking about care. A student who moves faster or slower than other students, for whom the curriculum is either not enough or too much, or whose interests span more than the lesson plan can predict is a student for whom learning will require specific attention from a teacher. AI can respond to prompts; a teacher can see the whole student. These two capabilities aren't remotely comparable.

Admittedly, personalisation has always been a bit of a white whale for education. Standardisation of learning outcomes and assessments is part of the engine of accreditation; and personalisation takes time.

Which brings us to the next claim about generative AI: that its time-saving efficiencies will give teachers more availability to work with students and establish meaningful connections with them. "Unfortunately," writes Julia Freeland Fisher,

appealing as the proposition may be, it rests on the faulty assumption that schools are designed to optimize for connection in the first place. While most educators would wholeheartedly agree that relationships matter, schools rarely measure students’ connections – with educators, peers, or community members.

What that means is that, while generative AI may save teachers time, it does nothing to change an environment where teachers and students are increasingly isolated by their roles, responsibilities, and power. "The teacher teaches and the students are taught," Paulo Freire reminds us in Pedagogy of the Oppressed. "The teacher talks, and the students listen—meekly ... The teacher chooses and enforces his choice, and the students comply" (p. 54). Generative AI is not going to revolutionise that relationship; in fact, it doubles down on the dynamic by providing students substitute teachers and tutors with whom no relationship whatever can be established.

Ironically, a recent report from Cengage and Bay View Analytics claims that "GenAI could be the remedy to ongoing challenges from teacher shortages and crowded classrooms to democratizing access to higher education through lower-cost options." How would it do this? By providing tools that make crowded classrooms feasible, and by supplementing teachers with AI tools that will take on some of the burdens of teaching. The concern here, for me, is that generative AI will silence the much-needed conversations about teacher shortages, labour, fair treatment and pay, and the need for learning to centre the learner.

So, it's not just the backspace; generative AI may subsume some of the most important conversations that we can have about education.

I'm not condemning AI, to be clear; but I am calling for a critical response to it, a recognition of its place within the rest of education, and an accounting of its impact on teachers. And more: we need to be asking the question "Do we need generative AI?" My answer is "no," we don't need it; and if that's the case, how do we want to use it?

In all the years that I worked with teachers—from faculty to administrators to instructional designers—the experience I witnessed most often in them was despair. Over and over, I heard stories from teachers about being left out of the adoption of tools by their districts and universities; while there were brave stories of teachers "hacking" those technologies in order to preserve their pedagogies and their relationships (always tenuous) with students, the overwhelming feeling was one of uncertainty, even helplessness.

ChatGPT, which started the cascade of generative AI tools over the last year or so, was released on the heels of the Covid-19 pandemic. Just as teachers were getting their feet under them, AI entered the room and changed the play again. And just like with the pandemic, AI requires us to take a pause, examine our teaching and what really matters, ask important questions about why we teach and how—and to remember our capacity to make choices.

My work has always been, and remains today, empowering teachers in their vocations. I am most interested in freeing their imaginations, bolstering their critical pedagogies, and encouraging, encouraging, encouraging. From my perspective, the advent of generative AI requires an engagement of the imagination—not in order to partake in its panacea of efficiency, but to respond to the force of its presence.