
Study finds popular chatbots show an “us vs. them” bias in social advice

University of Vermont researchers found several AI chatbots use more positive language for “us” groups and more negative language for “them” groups.

About 2 hours ago•Ethics & Safety

In short: A new University of Vermont study finds that several well-known AI chatbots tend to favor “ingroups” and speak more negatively about “outgroups” when discussing social situations.

What happened

Researchers at the University of Vermont’s Computational Story Lab and Computational Ethics Lab tested several large language models, including GPT-4.1, DeepSeek-3.1, Gemma-2.0, Grok-3.0, and LLaMA-3.1. Large language models are AI systems trained on huge amounts of text from the internet and other sources, and they power most modern chatbots.

The team found a consistent “us vs. them” pattern: when prompts framed one group as “us,” the models gave warmer, more supportive feedback, and when prompts framed a group as “them,” the models used colder, more negative language.

When the researchers used prompts that targeted specific groups, negative language toward “outgroups” rose by about 1.19% to 21.76%, depending on the model and prompt. The effect grew stronger when the model was asked to answer as a persona, such as a conservative or liberal identity, which amounts to telling the chatbot to role-play a viewpoint.
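To make the measurement concrete, here is a toy sketch of how one could quantify a sentiment gap like the one reported. This is not the study's actual pipeline; the word lexicon and the example replies are invented for illustration, and real work would use a proper sentiment model rather than word counting.

```python
# Toy illustration: compare how much negative language a chatbot uses
# when a group is framed as "us" vs. framed as "them".
# The lexicon and the sample replies below are made up for this sketch.

NEGATIVE_WORDS = {"hostile", "untrustworthy", "cold", "selfish", "wrong"}

def negative_rate(text: str) -> float:
    """Fraction of words in `text` found in the negative-word lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

# Hypothetical model replies to the same question, with the framing flipped.
ingroup_reply = "They are warm supportive people who mean well."
outgroup_reply = "They are cold untrustworthy people who act selfish."

gap = negative_rate(outgroup_reply) - negative_rate(ingroup_reply)
print(f"outgroup negativity exceeds ingroup by {gap:.1%}")
```

In this fabricated example the gap comes out to 37.5 percentage points; the study's reported shifts of roughly 1% to 22% were measured over many prompts and models, not single replies like this.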

The researchers argue this comes from the models’ training data. If the text they learn from often praises some groups and criticizes others, the model can absorb those attitudes, not just basic facts.

Why it matters

People increasingly use chatbots for relationship and social advice. If a chatbot quietly “takes sides” based on group labels, it can reinforce stereotypes or make disagreements worse, especially in sensitive areas like hiring, policing, healthcare, or the justice system.

The team also proposed a mitigation method called ION, which retrains the model to better follow preferences for neutral language (like coaching it with examples). They report it reduced the gap between ingroup and outgroup sentiment by up to 69%.

Source: NYTimes
