A recent study by Microsoft and Carnegie Mellon University warns that excessive dependence on generative AI at work can erode critical thinking skills and cognitive abilities that ought to be preserved, according to stuff.co.za.
The study surveyed 319 knowledge workers who reported using generative AI at least once a week in their professional tasks, as reported by Forbes [https://www.forbes.com/sites/dimitarmixmihov/2025/02/11/ai-is-making-you-dumber-microsoft-researchers-say/]. Participants shared examples of their AI use, which fell into three main categories: creation, such as writing a formulaic email to a colleague; information, such as researching a topic or summarizing a long article; and advice, such as asking for guidance or making a chart from existing data, according to TechCrunch.
Participants self-reported on the effect generative AI is having on their cognitive functions. The findings indicated that the more people rely on AI, the more their cognitive abilities deteriorate, leading to a concerning 'atrophy' of critical thinking abilities.
"Used improperly, technologies can and do result in the deterioration of cognitive faculties that ought to be preserved," the researchers stated. "A key irony of automation is that by mechanising routine tasks and leaving exception-handling to the human user, you deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared when the exceptions do arise," stated the study.
When workers rely on generative AI, their focus shifts toward verifying the quality of AI-generated answers rather than engaging in higher-order critical thinking skills such as creating, evaluating, and analyzing information. The study found that the more employees trusted AI tools to perform their tasks, the less critical thinking and independent evaluation they applied, which in the long term can impair independent problem-solving and critical reflection.
The researchers also observed that users with access to generative AI tools produced a less diverse set of outcomes for the same task than those without. They interpret this as a sign of deteriorating critical thinking, since it indicates a lack of personal, contextualized, critical, and reflective judgment of the AI's output. The study found that reduced critical thinking makes it harder for people to call upon those skills when they are needed.
One participant noted that she used ChatGPT to write a performance review but double-checked the result, fearing she might accidentally submit a document that could cost her her job. Another respondent reported having to edit AI-generated emails before sending them to his boss, whose culture places greater emphasis on hierarchy and age, so that he wouldn't commit a faux pas.
In many cases, participants verified AI-generated answers with ordinary internet searches, consulting resources such as YouTube and Wikipedia. About 36% of participants reported using critical thinking skills to mitigate the potential negative consequences of using AI, though not all participants were familiar with AI's limitations.
The researchers suggest that to compensate for the shortcomings of generative AI, workers need to understand how these shortcomings arise.
The article was written with the assistance of a news analysis system.