“Théâtre D’opéra Spatial,” an AI-generated art piece that took first prize at the Colorado State Fair. | Image: Google images

Computer scientist John McCarthy coined the term “artificial intelligence” (AI) in 1956. At that time, AI research explored problem solving and symbolic methods. In the 1960s, the Department of Defense began training computers to mimic human reasoning. In 1997, IBM’s Deep Blue computer defeated reigning world chess champion Garry Kasparov. And today, AI is so widely available that students can use it to write their college term papers.

And while that might sound great if you’re a student, Texas A&M University scientist Andrew Dessler advises proceeding with caution when it comes to this paradigm-shifting technology. 

“Only about once a decade do you see something come along and think, ‘This is really going to change the world’ — like the internet or the iPhone,” said Dessler, a professor in the Department of Atmospheric Sciences. “I think these chatbots like ChatGPT are going to change everything about the world.” 

And he means everything. Today, AI is not only used by students to write papers, but also accessed to create art. Recently, an AI-generated picture, “Théâtre D’opéra Spatial,” won first place in the digital category at the Colorado State Fair — making many artists uncomfortable in the process.  

“I think in a few years, everyone will have these intelligent assistants that will be answering questions for them, and it’s going to affect the jobs people have because many things we never thought would be automated are going to be automated,” Dessler said. 

The possibility of AI-generated works of art and prose replacing those created by humans is just one potential downside of the new technology. Generative Pre-trained Transformer 3 (GPT-3), a state-of-the-art language processing AI model developed by OpenAI, can create human-like text. The most commonly used access point is ChatGPT, a chatbot built on the 175-billion-parameter model that can write computer programs, poems and even a multiple-paragraph essay about its own implications for higher education.

“This model can probably write better than most undergraduates,” Dessler said. “It’s going to affect education. What do we need to teach in a world where this technology exists? Do we need to continue to teach how to write? Probably, but I think there’s going to be a lot less pressure on that.” 

A tweet from the National Center for Science Education promoting the "Grid" article featuring comments by Andrew Dessler. | Image: Twitter

Another AI application is Consensus, a search engine that provides users with information on anything from climate change to COVID-19 — or, in many cases, misinformation.

“The data you get out of these search engines is only as good as the data it’s trained on,” Dessler said. “You could certainly imagine a purveyor of COVID misinformation feeding this AI bogus data that then gives that to users.”  

For example, Dessler said he asked Consensus, “Is the Earth warming?”* In its response, the search engine blamed the sun for the planet’s rising temperature.

“That’s not just wrong, it also didn’t answer the question I asked,” he said. “A lot of people don’t understand how it works, and that’s a problem because people are going to start believing it, even if it’s not right. It’s not thinking; it’s just predicting the next word.” 
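Dessler’s point that a chatbot is “just predicting the next word” can be illustrated with a toy example. The sketch below is an illustrative bigram model built from word counts, not how GPT-3 actually works internally — real chatbots use neural networks with billions of learned parameters — but the core task is the same: given the text so far, guess the most likely next word.

```python
from collections import Counter, defaultdict

# Toy "next-word predictor": count which word follows which in a tiny
# training text, then predict the most frequent follower.
corpus = "the sun is hot the sun is bright the earth is warming".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sun"))  # "is" — the only word that ever follows "sun"
print(predict_next("the"))  # "sun" — the most common follower of "the"
```

The model has no notion of truth or meaning: it simply echoes the statistics of its training text, which is exactly why a model trained on bogus data will confidently repeat it.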

As for what this means for higher education, Dessler is cautiously optimistic, so long as educators lean in. As one possibility, he suggests having AI write an essay on a specific topic and then asking students to point out what, if anything, is incorrect or misconstrued.

“The way we’ve been teaching won’t work in the future, where this technology exists,” he said. “As educational institutions, universities need to think about how to adapt to this world.” 

Dessler also advises institutions to redouble their efforts to teach students critical thinking skills.

“With chatbots like GPT-3 being able to provide seemingly ‘correct’ answers, it is crucial for universities to teach students to separate truth from fiction and evaluate if the chatbots’ answers are plausible and accurate,” he said. “This is kind of an earthquake in education.”

 

* This link is no longer active and has been removed.