
For Dr. Joshua DiCaglio, an associate professor in the Department of English at Texas A&M University, the conversation around GenAI mirrors earlier anxieties about Wikipedia, anxieties driven by the often mysterious processes through which its articles were created. Wikipedia’s “anyone can edit” approach fueled fears about accuracy and the lack of recognizable expertise; the same concerns circulate in ongoing discussions surrounding GenAI in higher education.
DiCaglio notes that both conversations illustrate how concerns about new writing technologies often fail to acknowledge the complexity of writing and editing. Writing is almost always produced and read within multiple contexts, involving a range of interests that require careful, intelligent deliberation. Even with new writing technologies, someone must navigate the multiple audiences and complex conceptual questions that inevitably arise in most writing situations.
This was the subject of DiCaglio’s recent article, “Wikipedia as Editorial Microcosm,” co-authored with Texas A&M graduate students Jesse Cortez and Gwendolyn Inocencio and Texas A&M undergraduates Hannah Mailhos and Connor Hearron. In this work, DiCaglio catalogs and analyzes more than a hundred “stalled articles” on Wikipedia: articles that have stopped improving despite significant problems with organization, focus, encyclopedic style and other writing issues. He shows that moving these articles forward requires advanced editing work that navigates the many ways mostly correct information might be situated, organized and composed.

Working on this issue, DiCaglio had students in a technical editing class practice such interventions in order to teach them to work with these complexities. As co-author and Texas A&M English graduate student Gwendolyn Inocencio explains, “Working with Wikipedia’s stalled articles challenges student editors to diagnose, ‘unstick’ and revitalize content.” In much the same way, DiCaglio notes, GenAI does not avoid the complexity of writing but rather increases the need for editors and writers willing and able to consider where and how writing functions.
Another historical and cultural precedent for current GenAI anxieties is found in the work of fellow Texas A&M Associate Professor of English Dr. Andrew Pilsch. For Pilsch, GenAI is only the latest iteration of the ways we frequently ignore the material labor and ecological effects that always accompany writing. A scholar of digital rhetoric and science fiction, Pilsch is working on a book, The Radical 90s, that examines how the idea of the disembodied computer fueled early optimism about computing technologies within the first decades of the Internet. Pilsch argues that the present generation of digital technologies makes the material effects of computing visible, such as the devastating ecological costs of global-scale computing in general and GenAI in particular. These material relationships are only magnified in digital and automated writing, which expands the complex contexts in which writing must emerge and operate.
These contexts also include the way writing technologies tie us into new networks of information, putting us in contact with new forms of “data” that we must interpret and respond to. These questions are raised by Assistant Professor of English Dr. Jason Crider, whose book project, Prosthetic Rhetoric, examines medical technologies such as glucose trackers as data-gathering and writing technologies. Crider sees these technologies as another unlikely parallel to the questions around GenAI. He argues that while data-gathering and tracking technologies seem to provide concrete, objective data about our bodies, they are also ambiguous, much like all writing, both in what they tell us about our bodies and in what we might do in response. A broader and more rigorous understanding of writing prompts us to examine how such information is produced, what it does, and what we might do with it.
To put the issue another way, behind GenAI is a whole complex network of meanings and materials, just like any data, information or language that we read or write. Associate Professor of English Dr. Sarah Potvin adds that a portion of our anxieties about GenAI arises because it disrupts our assumption that humans are the sole creators and consumers of texts. In her work in Digital Humanities, Potvin examines how cultural heritage collections become data sets; in doing so, she traces how the disruption of human agency creates concrete ethical, legal, and social issues related to how information circulates. In another project, she explores how GenAI is one of many new technologies that challenge the function and value of copyright and open access, since our knowledge commons becomes the basis for the large language model (LLM) grazing that drives GenAI. “The scrambling of these assumptions about human agency,” Potvin argues, “can be a generative moment. Building on previous critiques of open access as reflecting and even driving existing and racialized institutional inequalities, for example, concerns about GenAI may push us towards systems that better deliver on their promise of promoting equity.”
If GenAI continues to disrupt our sense of authorship, it similarly creates questions about reading. GenAI and many other technologies must in some sense “read” other texts in order to generate new texts. “But what does it mean to automate reading?” asks Dr. Tyler Shoemaker, a newly hired assistant professor in Texas A&M English. Set to join the department in 2025, Shoemaker is currently a fellow at Dartmouth College’s Neukom Institute, where he is working on a book manuscript, Literalism: Reading Machines Reading, that examines what we mean by “reading” when we say that machines read.
For Shoemaker, GenAI raises questions already at the core of the history of literary criticism. As Shoemaker explains, many of the conversations about textual form and interpretive method held in English departments apply directly to these new AI systems. Data curation efforts, for example, stand to benefit from bibliographic criticism; content moderation, from the abiding concern with context that is a hallmark of modern literary study. “But where, with GenAI, our conversations in English don’t seem to apply,” he adds, “is where things get especially interesting. There is much we can contribute to broader understandings of GenAI systems, but GenAI may itself lead to new ways of doing literary study.”
These parallels do not diminish the significant implications of GenAI for how we read and write; rather, they reassert the need for more advanced instruction and careful work in these areas. The question is: can GenAI replace these tasks and skills? As is increasingly well documented, GenAI struggles with exactly these kinds of advanced writing and reading tasks. In response, many scholars are working to make the gaps in what AI can do explicit to both students and writers. In two recent publications, “The Paranoid Memorandum: A Generative AI Exercise for Professional Communication” and “POSIWID Writing Pedagogy,” Crider describes writing assignments that leverage both GenAI and students’ anxieties about GenAI and plagiarism to explore how authors construct authorial voice and authority in their writing. Crider has presented some of these strategies in a series of popular lectures for the Humanities Texas Teacher Professional Development Institute, which helps K-12 teachers in Texas understand how to work with GenAI in their classrooms. In addition, Crider and Pilsch have provided preliminary guidance on GenAI in the classroom in the “Department of English Statement on Generative AI and Writing.”
If we explore more carefully what GenAI can and cannot do, writers may find it can function as another tool in the writing process. Professor of English Dr. Laura Mandell, a scholar in Digital Humanities, argues that “writing with GenAI will evolve just as math classes evolved to allow students to use calculators.” Mandell has also developed an assignment using GenAI for her course, Human Thinking and Digital Culture, in which students are introduced to the kinds of human thinking that systems like GenAI cannot do (for instance, moving up and down between levels of abstraction). The assignment requires students to break ChatGPT, that is, to find its weaknesses, and bases the assessment on the student’s ability to see and reflect on these limits. In her experience, Mandell notes, “the smooth and even lovely writing produced by ChatGPT is typically vacuous if not outright wrong, whereas really trying to get ChatGPT to work through a problem gives rise to solecisms, misuse of prepositions, diction errors, etc.” Once students realize that they can’t avoid doing the tough thinking and composing themselves, Mandell encourages them to use GenAI as an additional tool for revising their writing.

This initial work with students indicates that we are continuing to adjust to the challenges presented by new writing technologies. What many have long called “information literacy” continues to evolve, both in relation to these more general issues about how we make meaning and compose texts, and in relation to more particular issues such as copyright, privacy, and citation practices. Associate Professor of English Dr. Kathy Anders has been optimistic about our ability to make these adjustments, noting that, in her experience, “students are very perceptive concerning the need for caution and deliberation in the use of GenAI in professional writing. For example, all of the student groups I have worked with in healthcare majors have identified concerns about privacy and accuracy in any use of GenAI for patient charting and other types of healthcare communications.”
Of course, there is still a lot to learn about these new technologies, and they will undoubtedly continue to develop, presenting further challenges. The emergence of GenAI presents what may be a singular crisis for higher education, especially for the instruction of writing and reading, and particularly within English departments. Researchers and teachers in the Texas A&M Department of English are working hard to imagine and create a future in which we understand how GenAI may function as a tool, even if it transforms the way we read and write. As they train the next generations of writers and readers to meet a world radically altered by technology, they are reminding students that, even surrounded by these technologies, we’re still unavoidably using language to convey the central concerns of the human experience.