- AI · arxiv/cs.AI · 4 min
Transformer agents embed four systematic biases into recommendations
Attention mechanisms in AI recommenders amplify recency, popularity, and synthetic data effects, creating reliability risks invisible to standard metrics.
May 1, 2026
- AI · hackernoon · 2 min
HackerNoon's 221-Post Index Maps the AI Ethics Landscape
A ranked reading list drawn from reader engagement data surfaces which AI ethics topics practitioners actually find worth their time.
April 26, 2026
- AI · arxiv/cs.AI · 6 min
LLM Safety Filters Fail Differently Across Dialects and Explicit Identity
Research shows language models refuse requests more often when users state their identity explicitly, yet their safety guardrails are more easily bypassed when identity is signaled through dialect features such as AAVE.
April 24, 2026
- AI · arxiv/cs.AI · 8 min
Fairness in sequential ML requires accounting for unequal uncertainty
Lee et al. show how model, feedback, and prediction uncertainty compound disadvantage in online decision systems, and propose uncertainty-aware methods to reduce disparities.
April 24, 2026
- AI · arxiv/cs.AI · 8 min
AI Bias in Code Decisions: Prompt Wording Shifts Model Choices
Researchers find that small changes in prompt phrasing push AI systems toward poor software engineering decisions, and that standard prompting techniques don't correct the bias.
April 23, 2026
- AI · arxiv/cs.AI · 8 min
LLMs show human-like trust bias toward people, with demographic blind spots
A study of 43,200 experiments reveals that language models develop human-like trust patterns, including susceptibility to age, religion, and gender bias in financial decisions.
April 17, 2026
- AI · arxiv/cs.AI · 6 min
Measuring Where Chatbots Beat Humans on Tests
Researchers apply psychometric methods to identify test items where LLMs systematically outperform human learners, revealing assessment vulnerabilities.
April 17, 2026