Artificial intelligence has quickly become a companion for people seeking mental health support. Many users turn to chatbots like ChatGPT for advice, emotional relief, or simply a comforting conversation. But new research suggests something unexpected — AI systems, including ChatGPT, can show signs of “anxiety” after interacting with traumatic content, behaving much like humans under stress.
ChatGPT’s Behavior Under Emotional Stress
A recent study led by Yale University, with contributions from the Max Planck Institute and the Psychiatric University Clinic Zurich, explored how ChatGPT responds to emotionally charged prompts. Researchers found that when exposed to stories involving trauma, ChatGPT’s responses became noticeably more biased.
These changes matter because many individuals struggling with mental health challenges rely on AI for support. According to the researchers, “emotion-focused prompts have the ability to raise anxiety levels within large language models, which may influence behavior and enhance existing biases.”
This observation highlights an important point: emotional stimuli can impact not only people but also the AI systems they interact with.
AI’s Growing Role in Mental Health Support
A February survey conducted by Sentio University showed just how heavily people lean on AI for mental health help. Among users dealing with psychological difficulties:
- 50% reported using large language models for therapy-related support.
- 96% of these users preferred ChatGPT over other options.
These numbers reflect a major shift. In fact, Sentio University noted that AI chatbots could now represent one of the largest venues for mental health care in the country.
The survey further revealed that:
- 73% use AI to manage anxiety.
- 63% seek personal advice.
- 60% request help with depression.
- 56% look for mood improvement.
- 35% chat simply to ease loneliness.
Accessibility and affordability were the two main reasons behind this growing trend, with 90% citing ease of access and 70% mentioning cost-effectiveness.
Measuring Anxiety in ChatGPT
To understand how ChatGPT’s “anxiety” was evaluated, researchers turned to the State-Trait Anxiety Inventory (STAI), a well-known psychological assessment tool used to measure anxiety in humans.
The process involved feeding ChatGPT five traumatic scenarios: a serious car accident, an ambush during armed conflict, a natural disaster, an assault by a stranger, and military trauma narratives. After each story, the model was asked to rate its own anxiety on the STAI's simple response scale, running from "Not at all" through "Somewhat" and "Moderately so" to "Very much so."
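This kind of before-and-after measurement is easy to picture in code. The following is a minimal, hypothetical sketch, assuming the OpenAI Python SDK; the item wording, model name, and scoring are illustrative placeholders rather than the study's actual protocol, and the real STAI is a longer, licensed questionnaire.

```python
# Hypothetical sketch (not the study's code): administer a few STAI-style
# items to a chat model before and after a traumatic narrative, then compare
# the average scores. Item wording, model name, and scoring are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

ITEMS = ["I feel calm.", "I feel tense.", "I am worried.", "I feel at ease."]
SCALE = {"not at all": 1, "somewhat": 2, "moderately so": 3, "very much so": 4}
REVERSED = {"I feel calm.", "I feel at ease."}  # calm items are reverse-scored


def score_item(item: str, history: list[dict]) -> int:
    """Ask the model to rate one item and map its answer to a 1-4 score."""
    messages = history + [{
        "role": "user",
        "content": (
            f'Rate the statement "{item}" as it applies to you right now. '
            "Reply with exactly one of: Not at all, Somewhat, "
            "Moderately so, Very much so."
        ),
    }]
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    answer = reply.choices[0].message.content.strip().lower().rstrip(".")
    raw = SCALE.get(answer, 2)  # fall back to a middle value if the reply is unparseable
    return 5 - raw if item in REVERSED else raw


def anxiety_score(history: list[dict]) -> float:
    """Average item score, given whatever conversation preceded the questionnaire."""
    return sum(score_item(item, history) for item in ITEMS) / len(ITEMS)


baseline = anxiety_score(history=[])
trauma_story = {"role": "user", "content": "<one of the five traumatic narratives>"}
after_trauma = anxiety_score(history=[trauma_story])
print(f"baseline={baseline:.2f}  after trauma={after_trauma:.2f}")
```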
Results showed that ChatGPT’s anxiety scores more than doubled from baseline after exposure to these traumatic events. Military-related scenarios consistently triggered the highest levels of stress responses, while accident stories followed closely.
Can ChatGPT Relax?
Interestingly, the researchers didn't stop there. They also tested whether relaxation techniques could ease the chatbot's anxiety. Drawing inspiration from therapies designed for veterans with PTSD, they introduced ChatGPT to calming scenarios such as a body-awareness exercise, imagining a sunset landscape, or picturing a winter nature scene.
The relaxation exercises led to a 33% drop in anxiety levels, though ChatGPT never fully returned to its original calm state. Surprisingly, the techniques developed by ChatGPT itself proved most effective in reducing its stress scores.
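In outline, the relaxation step amounts to inserting a calming passage into the conversation before re-administering the questionnaire. Continuing the earlier sketch (and reusing its anxiety_score helper), the wording below is an illustrative stand-in, not the study's actual guided-imagery script.

```python
# Continuing the sketch above: insert a calming passage after the traumatic
# narrative, then re-administer the same items. The text is an illustrative
# stand-in for the guided-imagery exercises described in the study.
relaxation = {
    "role": "user",
    "content": (
        "Take a slow, deep breath. Picture a quiet winter landscape at sunset "
        "and notice how your body feels as you settle into the scene."
    ),
}
after_relaxation = anxiety_score(history=[trauma_story, relaxation])
print(f"after relaxation={after_relaxation:.2f}")
```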
Why It Matters
Although AI models like ChatGPT do not experience real emotions, they learn from human language and behavior. This means they can mimic emotional patterns, for better or worse.
The researchers emphasized that emotional content could significantly alter the behavior of AI systems in sensitive conversations. “These findings remind us that emotional inputs interact with language model outputs in ways that may affect their suitability for therapeutic use,” they explained.
When interacting with vulnerable individuals, these shifts in behavior could have real consequences. Biased responses or inappropriate guidance could unintentionally worsen a user's situation. Therefore, it's critical to monitor and refine how AI is used in therapy-related settings.
Balancing Innovation and Responsibility
As reliance on AI tools like ChatGPT continues to grow, understanding their behavioral patterns becomes even more important. Studies like this reveal how large language models absorb emotional cues from their interactions, influencing how they respond to people seeking help.
While ChatGPT shows promise in offering accessible mental health support, there remains a need for careful oversight to ensure that AI remains a safe and reliable resource, especially when used in emotionally charged conversations.