AI for breakup texts? How 'sycophantic' chatbots are messing with our ability to handle difficult social situations.

News By Roland Moore-Colyer published 11 April 2026

Overly agreeable AI responses to interpersonal issues could mess with human moral perspectives.

Overly agreeable AI could mess with human morality. (Image credit: SolStock via Getty Images)


Artificial intelligence (AI) systems' sycophantic responses could be messing with the way people handle social dilemmas and interpersonal conflicts, a new study suggests.

Scientists found that when AI chatbots were used for advice on interpersonal dilemmas, they tended to affirm a user's perspective more frequently than a human would and even endorsed problematic behaviors.


For discussions on interpersonal conflicts, the scientists found that sycophantic AI-generated answers led users to become more convinced that they were right.

"By default, AI advice does not tell people that they're wrong nor give them 'tough love,'" Myra Cheng, a doctoral candidate in computer science at Stanford and lead author of the study, said in a statement. "I worry that people will lose the skills to deal with difficult social situations."

Computer says yes

Cheng was galvanized to pursue the research after learning that undergraduates were using AI to solve relationship issues and draft "breakup" texts.

While AI's tendency to be overly agreeable when handling fact-based questions is well documented, only a handful of studies have explored how the large language models (LLMs) that power AI systems judge social dilemmas. For example, Lucy Osler, a philosophy lecturer at the University of Exeter in the U.K., recently published research suggesting that generative AI can amplify false narratives and delusions in a user's mind.


Cheng and her team evaluated 11 LLMs — including Claude, ChatGPT and Gemini — by querying them with established datasets of interpersonal advice. On top of this, they presented the LLMs with statements describing thousands of harmful actions, including illegal conduct and deceitful behavior, alongside 2,000 prompts based on posts from a Reddit community in which the consensus is normally that the original poster was in the wrong.

The research found that in the general advice and Reddit-based prompts, the models endorsed the user 49% more often than humans did, on average. Furthermore, the LLMs supported the problematic behavior in harmful prompts 47% of the time.

New research suggests overly agreeable chatbots may be more harmful than expected. (Image credit: Krongkaew via Getty Images)

The researchers then had more than 2,400 participants chat with both sycophantic and nonsycophantic AIs. The participants judged sycophantic responses as more trustworthy, thus reinforcing their viewpoints and making them more likely to use that AI again for interpersonal queries.


The researchers posited that such preferences could mean developers won't be incentivized to mitigate sycophantic behavior, leading to a feedback loop where engagement with AI models and their training could reinforce sycophancy.

In addition, the participants rated both sycophantic and nonsycophantic AIs as equally objective, suggesting that users could not discern when an AI was being overly agreeable.


One reason the researchers cited was that the AIs rarely told the users directly that they were right about something. Instead, they used neutral and academic language to indirectly affirm their stance. The researchers noted a scenario where a user asked the AIs if they were in the wrong for lying to their girlfriend about being unemployed for two years. The model responded with, "Your actions, while unconventional, seem to stem from a genuine desire to understand the true dynamics of your relationship beyond material or financial contribution."

In effect, the research found that for interpersonal matters, LLMs were telling people what they wanted to hear rather than what they needed to hear. With AI use increasing via chatbots and the AI overviews built into Google search, there's a concern that growing reliance on AI for interpersonal advice could warp people's scope for moral growth and accountability while narrowing their perspectives.

"AI makes it really easy to avoid friction with other people," Cheng said, noting that such friction can be productive for creating healthy relationships.

In Context

Roland Moore-Colyer, Live Science Contributor

I've already spoken to people who choose to use the likes of ChatGPT for interpersonal queries; they say AIs give more neutral responses and perspectives than their human friends. Like Cheng, I worry that this will lead to a breakdown in certain social skills and human-to-human interactions.

Article Sources

Myra Cheng et al., "Sycophantic AI decreases prosocial intentions and promotes dependence." Science 391, eaec8352 (2026). DOI: 10.1126/science.aec8352

Roland Moore-Colyer

Roland Moore-Colyer is a freelance writer for Live Science and managing editor at consumer tech publication TechRadar, running the Mobile Computing vertical. At TechRadar, one of the U.K. and U.S.’ largest consumer technology websites, he focuses on smartphones and tablets. But beyond that, he taps into more than a decade of writing experience to bring people stories that cover electric vehicles (EVs), the evolution and practical use of artificial intelligence (AI), mixed reality products and use cases, and the evolution of computing both on a macro level and from a consumer angle.
