Educating Beyond the Bots

The current discourse on artificial intelligence not only reflects a narrow view of education. It also romanticizes, or raises alarms about, new technologies, while insulting students as dishonest by default.

“It saved me 50 hours on a coding project,” one of my students whispered to me in class recently. He had used the artificial intelligence tool ChatGPT for a web project. Meanwhile, his classmates were writing feedback on his reading response for the day, testing a rubric they had jointly generated for how to effectively summarize and respond to an academic text.

The class also reviewed ChatGPT’s version of the rubric and agreed that there was some value in “giving it a glimpse into the learning process.” But they decided that their own brain muscles needed to be developed by grappling with the process of reading and summarizing, synthesizing and analyzing, and learning to take intellectual positions, often through an emotionally felt experience. Those muscles could not be developed, the class concluded, by simply reading content collected from the internet by a bot, however good that content was. When the class finished writing, the students shared their often brutal assessment of the volunteer writer’s response to the reading. The class learned by practicing, not by asking for an answer.

Beyond the classroom, however, the discourse about artificial intelligence tools that “do writing” has not yet become as nuanced as it has among my college students. “The college essay is dead,” Stephen Marche recently declared in The Atlantic. That argument rests on a serious but common misunderstanding: it mistakes a means of education for an end. The essay embodies a complex process and experience that teaches many useful skills; it is not a simple product.

But that misunderstanding is only the tip of the iceberg. The current discourse on artificial intelligence not only reflects a shrunken view of education. It also swings between romanticizing new technologies affecting education and raising alarms about them. And, saddest for educators like me, it treats students with contempt, as dishonest by default.

Broaden the view of education

If we focus on writing as a process and a learning tool, it is no loss to kill the essay as a mere product. It is great if bot-generated texts serve certain purposes. Previous generations used templates for letters and memos, not to mention forms to fill out. New generations will adapt to even more content they didn’t write.

What bots should not replace is the need for us to grow and use our own minds and consciences, to judge when we can or should use a bot, and how and why. Teachers must teach students how to use language based on a contextual, nuanced and sensitive understanding of the world. Students must learn to think for themselves, with and without the use of bots.

One simple approach to using AI tools in the classroom is to have students either start by checking what the tools suggest or do the writing and thinking themselves first. Either way, teachers can then ask students to compare the AI-aided and unaided versions of their work, thinking independently about the differences. Next, students should figure out how to cite any AI-produced text or ideas they borrow in their work. Finally, in writing or in a class discussion, students should think critically about the use of AI: the what, how and why. I call this the C/C3C approach: check with AI or compose yourself first, then compare the two versions, cite anything borrowed from AI, and always reflect critically (in writing or discussion).

By learning to use the tools and reflecting on the technical, ethical and other important issues involved, students are best prepared for both effective and conscientious use of AI in their lives and careers. They should learn to use AI to save time and energy, expand knowledge and perspective, and increase their efforts and skills. But they must not circumvent learning, and they must be mindful of the ethical and political issues involved.

Don’t dismiss or romanticize technology

The other common response rejects or romanticizes new technology: the pendulum swing between technology as a maker of either utopia or dystopia. Extreme reactions dominate our discourses about technology, with little in the middle. “Here’s How Teachers Can Block ChatGPT,” Markham Heid argued in The Washington Post: have students write “handwritten essays.” Heid suggests that we run away from technology if it undermines something we value. He advises teachers to go back to internet-free writing, or even handwriting, listing their advantages.

But escape is not a solution. Nor will some tech hero rise up and “save” us all from the evils of chatbots and fraud. We must engage with disruptions of the status quo and utilize new tools for individual and social good.

The other extreme is the romanticization of technology. “Has AI reached the point where a software program works better than you can?” asks the title of an NPR radio interview. It implies that we are competing with technology, that technology will win, and that there is nothing we can do against an invincible force. The guest, a business professor at the University of Pennsylvania, discusses how he uses ChatGPT to automate as much of his work as possible. He lets the bot generate syllabuses, assignments, assessment rubrics and lectures. AI “is going to replace us all,” he says.

The tendency to romanticize new technology also undermines our understanding of it. Bots can help save time, but can bot-based materials and methods help educators prepare students to understand and think, create and communicate, lead and manage effectively? What ethical and professional values will bot-dependent teaching convey? Why not instead learn to test, question and help improve the technology? How do human work, communication, knowledge and relationships compare with bots’ equivalents of them? Language bots generate texts based on plausible patterns pulled from the internet, which can be dangerous, however extensive and well-constructed their corpus may be. So why not focus on where they fail and why, and on when and how human agency and conscience must intervene?

Learn with confidence

Public discourse about education remains skewed for a third reason: the widespread belief that students cheat whenever they can. That belief is offensive.

Students mostly cheat when they don’t find an assignment worth the time and effort, aren’t motivated, or don’t have the skills. And all these factors are within the scope of good teaching. Even the unexpectedly dishonest few deserve to be educated on the whys and whats and hows of assignments they are asked to do. Only the hopeless moralist could consider disengagement more acceptable than dishonesty.

The only cure for mistrust and dishonesty is to motivate students to do their own work. Students who appreciate the educational goals behind writing-intensive assignments are eager to use AI tools to generate topics and themes, to ask and answer questions, and to spark their own critical and creative thought processes. They deserve assignments and credit for learning how to use emerging tools to get the job done, and how to assess those tools and the implications of their design, use and misuse.

AI is going to be embedded in the everyday tools we use, such as word processors and communication devices. It is time to teach students how to help address the dangers that new and powerful technologies pose to people and social systems (such as when police or governments, doctors or managers, corporations or individuals cause harm by ignoring the technologies’ mistakes). Students should learn to use AI tools to generate and assess content, brainstorm ideas and push the process further. If anything, AI exposes the need for a human touch in our communications. It calls for confidence in teaching.

Yes, college professors who do not eagerly teach and inspire students are more likely to be “victims” of the rapid advancement of natural language processing aids. ChatGPT could be the new conspiratorial cousin just a mouse click away, at no cost and with less potential for embarrassment. But then, ghostwriters and paper mills, patchwork papers and talking-point essays have long been around. All previous “technologies” of academic dishonesty should have either awakened or already displaced every professor. Effective educators give students credit for the process and experiences of, and the skills learned from, researching and reading, evaluating sources and synthesizing ideas, developing and sharing their own intellectual positions, citing and engaging sources, and addressing and promoting complex perspectives.

New technologies are bound to exacerbate old problems. But they can also help to solve them, along with new ones. Artificial intelligence certainly poses increasing risks to education and other domains of society. Educators should encourage students to use AI to brainstorm; to gather information and compare it with more methodically found and analyzed library resources; to find errors and gaps in AI-generated answers to their questions; to analyze and try to understand how the AI works and what practical and ethical dangers there may be in using or trusting it; and so on.

Students are quick to assess the value and risks, uses and abuses of breakthrough technologies. Educators should be too.
