Notes on Teaching and Learning

What is “Better”? Writing and Generative AI

May 4, 2024
Amanda Leary

I recently had a student turn in a rough draft of an essay that was fully (and without citation) AI-generated. There were tell-tale signs: certain words and structures characteristic of AI's writing style, and an essay inconsistent with the student's previous work. My AI policy clearly lays out the acceptable uses of large language models for my class: 

TL;DR: If you use an AI program to generate ideas or brainstorm, it should be cited like any other reference material. You may not submit any work generated by an AI program as your own.

I met with this student to discuss their process of writing the paper and how they used AI. It went something like this:

  • Give ChatGPT preliminary paragraphs that outline the student’s thoughts on the topic 
  • Prompt it to edit the paragraphs and make the writing better
  • Ask what else it needed to finish the essay so that it would meet the assignment's requirements
  • Read sources, provide summaries and quotations for ChatGPT to integrate
  • Provide further refining prompts until satisfied with the result 

This process, they said, took them two weeks. 

When I asked why they had chosen to put so much time and effort into using AI, violating course policy and risking their grade, my student (who happens to be an international second-language speaker) told me that they felt the AI-generated writing was “better” than anything they could have produced on their own; they thought I would prefer to read more polished writing than their own words.

As a composition instructor, I’ve been sitting with this concept of better. I want to see my students’ thinking develop from rough to final draft; to see how they navigate writing for a particular rhetorical situation (purpose, audience, context); to see how they respond to feedback; to see how well they can incorporate and analyze quotations, synthesize sources, construct a thesis, and structure an essay. Offloading that work to AI, even through directed prompts that ask AI to do those things under the student’s guidance, doesn’t quite feel like an authentic demonstration of their writing’s process of becoming better. 

After I explained that to my student in our meeting, they seemed to understand (and I definitely took away the need to be more transparent about the purpose of this researched argument essay). But ready or not, AI is here to stay; understanding how our students are using it—and what AI is actually doing—is key to addressing student use and identifying ways to incorporate it into the classroom. 

For me, this begins with the question of better—is AI-generated or AI-edited writing better than an undergraduate’s? 

I started by asking ChatGPT-3.5 what actions it takes when prompted to edit a piece of writing:

Screenshot of ChatGPT-3.5 that explains the steps taken when asked to edit writing.

Many of these involve potentially substantial changes to the inputted text, which, as a writing instructor, I find deeply concerning. I want my students to develop an eye for flaws in the logic of their writing. If ChatGPT is “rephrasing sentences, restructuring paragraphs,” and “clarifying ambiguous points” for students, where exactly am I seeing their original tone and intent? Maybe that “awkward phrasing” is a stylistic choice on the part of the author; if it is removed because it isn’t aligned with what the model’s algorithm anticipates—and if students take these edits uncritically, without the skills, expertise, and confidence in their own writing that practice fosters—how are they developing their own unique style and voice?

To see what AI-edited writing actually looked like, I tested it with a sample of my own writing in ChatGPT-3.5, which is free to use. You can see the full thread here. (This sample is taken from my time as an undergraduate at Valparaiso University.)

I then asked it to summarize the changes it made:

Screenshot of ChatGPT-3.5 explaining the changes it made to a piece of writing when asked to edit.

The general ideas of the original are present in the edited version, but the way that ChatGPT has combined and rearranged sentences doesn’t retain the emphasis of the original:

Original: The Odyssey is not, however, the only epic to continue the story of Troy and its war with the Achaeans. Virgil’s much later Roman epic, The Aeneid, also chronicles the journey of a hero of the Trojan War. Only, this time, the Trojan Aeneas is the concern of poets, not the Greek Odysseus. 

Edited: However, The Odyssey isn’t the sole epic to delve into the aftermath of Troy and its conflict with the Achaeans. Virgil’s later Roman epic, The Aeneid, also follows the journey of a Trojan War hero, Aeneas. Yet, this time, the focus shifts to the Trojan Aeneas rather than the Greek Odysseus. 

Here, in its attempt to refine the transitions between ideas, ChatGPT has moved “however” to the beginning of the sentence. But in so doing, the sentence loses its strong sense of contrast and is, in my opinion, a weaker transition. By eliminating the phrase “the concern of poets,” some of the style and voice of the author is lost in favor of concision.

Some of the word choices ChatGPT makes are also unnecessary:

Original: Achilles is a hard, vengeful, proud, stubborn, ruthless man whom not even his friends’ pleas can move. Achilles’ battle prowess is undeniable, yet he is without mercy. 

Edited: Achilles appears as a resolute, vengeful, proud, and obstinate figure, impervious to the pleas of his comrades. Undoubtedly, he boasts unparalleled prowess in battle, yet his lack of compassion is glaring. 

“Unparalleled prowess in battle” and “lack of compassion” aren’t any more precise or vivid phrases than the original—in fact, I read them as florid and inauthentic to the original tone.

In conjunction with in-class exercises that help students evaluate the changes made to their writing and incorporate revisions on a case-by-case basis, I can see the benefit of using AI to edit writing. But most of us would have a problem with a student copying and pasting generated content with that level of editing and submitting it as their own. I would accept some of the changes ChatGPT made, and others have given me ideas for how I can revise the passage myself to improve the overall structure, but the edited version isn’t any better than what I had written as an undergraduate. Helping students develop this skill is an important part of writing instruction and is clearly aligned with my goals for the course. That, in my view, makes writing better: the ability to critically assess and incorporate feedback in order to improve, but also to explain and justify stylistic choices and develop an authentic voice as a writer. 

I will be incorporating AI into my composition classes in the future. Teaching students to use AI tools ethically and effectively is an important part of preparing them for their future careers. But for me, equally important is students developing skills and confidence as writers. Large language models will be a part of that development, but students will still be responsible for making their own writing better.