A professor of English says AI is training us all to write the same way and the evidence is already everywhere

Published on Mar 26, 2026 at 7:38 PM (UTC+4)
by Author Claire Reid
Last updated on Mar 26, 2026 at 4:26 PM (UTC+4) · Edited by Emma Matthews

An English professor has warned that AI is training us all to write in the same voice, and says its influence has already become so routine that you can see it everywhere.

There’s been an explosion of AI in recent years, following the launch of ChatGPT back in 2022. 

Now, millions of people around the world think nothing of turning to a chatbot for help with writing emails or making their social media posts sound a little more polished. 

But one professor of English has warned that the reliance on chatbots could see us losing our unique voice, and says that there are already plenty of examples out there. 


The professor has warned that AI is making all writing sound the same

There have been a couple of recent studies that have shared some interesting insights on using artificial intelligence in the workplace.

One found that, rather than lightening your workload, over-reliance on AI tools can actually mean working harder, while another found that too much artificial intelligence can lead to ‘brain fry’.

Now, University of Pittsburgh English professor Gayle Rogers has warned that too much generative AI is taking away people’s unique voices when it comes to writing. 

Rogers said that predictive language technologies, including chatbots, are ‘so routine’ that most folks hardly even notice them anymore. 

However, he went on to say that using the likes of Claude or ChatGPT can present challenges when it comes to individual expression. 

“The ubiquity of predictive language technologies directly threatens human creativity,” he wrote for The Conversation.

The professor explained that large language models (LLMs) compose text in very standardized and predictable patterns, and this often manifests itself in a singular voice. 

This explains why you can often spot when something was written by artificial intelligence, even without the telltale smattering of em dashes or the typical ‘it’s not X, it’s Y’ pattern: it all sounds as if the same person wrote it.

Because, effectively, they have.

He’s not the first professor to express concerns

Rogers isn’t the only professor to raise the alarm about AI use.

Harvard Professor Avi Loeb said he had spotted a concerning trend among ‘excessive’ users. 

“Recently, I noticed that some people around me are starting to lose their cognitive abilities as a result of excessive use of Artificial Intelligence platforms, such as ChatGPT, Claude, or Gemini,” Loeb wrote.

“This phenomenon resembles muscle loss from excessive use of public transportation as a substitute for walking. 

“In academia, the only reliable way of testing the cognitive abilities of students right now is by placing them in a Faraday cage.”

And his thoughts have been supported by a study from Switzerland. 

Researchers set out to determine whether regular use of artificial intelligence was linked to weaker critical thinking skills, and concluded that there were indeed ‘potential cognitive costs of AI’.

“Our research demonstrates a significant negative correlation between the frequent use of AI tools and critical thinking abilities,” the authors concluded. 

