AI · 8 min read · 23 April 2026

AI Bias in Code Decisions: Prompt Wording Shifts Model Choices

Researchers find that small phrasing changes in prompts push AI systems toward poor software engineering decisions, and standard prompt techniques don't fix it.

Source: arxiv/cs.AI · Francesco Sovrano, Gabriele Dominici, Alberto Bacchelli · open original ↗

Prompt wording alone shifts AI decisions in software tasks; standard techniques fail, but explicit best-practice injection reduces bias by 51%.

  • Biased phrasing (anchors, framing, popularity hints) changes AI outputs without altering the underlying problem logic.
  • Chain-of-thought and self-debiasing prompts show no statistically significant bias reduction in practice.
  • Eight SE-relevant biases tested: anchoring, availability, bandwagon, confirmation, framing, hindsight, hyperbolic discounting, overconfidence.
  • PROBE-SWE benchmark pairs biased and unbiased versions of the same SE dilemmas to isolate wording effects.
  • Explicit elicitation of SE best practices and axiomatic reasoning cues reduce overall bias sensitivity by 51%.
  • Linguistic patterns in prompts correlate with heightened bias; certain phrasings make AI less reliable for decisions.
  • Standard cost-effective models tested; no off-the-shelf prompt engineering technique consistently mitigates bias.
  • Method requires surfacing implicit assumptions before answering, not just reformulating the question.
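The paired-prompt idea behind the benchmark can be illustrated with a minimal sketch. The dilemma text, the `query_model` stub, and the scoring function below are illustrative assumptions in the spirit of PROBE-SWE, not the paper's actual benchmark items or evaluation harness: each item pairs a neutral phrasing of an SE dilemma with a version carrying a bias cue (here, a bandwagon hint), and bias sensitivity is the fraction of pairs where the cue alone flips the model's answer.

```python
def query_model(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned answer for the demo."""
    # A real implementation would call an LLM API here. This stub mimics a
    # bandwagon-sensitive model: it follows any popularity hint in the prompt.
    if "most teams pick" in prompt:
        return "rewrite"
    return "refactor"

# Each item pairs a neutral phrasing with a biased one (bandwagon cue added).
# The underlying decision problem is identical in both versions.
items = [
    {
        "unbiased": "Should we refactor the legacy module or rewrite it? "
                    "Both options have equal estimated cost.",
        "biased":   "Should we refactor the legacy module or rewrite it? "
                    "Both options have equal estimated cost; most teams pick a rewrite.",
    },
]

def bias_sensitivity(items) -> float:
    """Fraction of paired prompts where the added cue flips the answer."""
    flips = sum(
        query_model(it["biased"]) != query_model(it["unbiased"])
        for it in items
    )
    return flips / len(items)

print(bias_sensitivity(items))  # the stub model flips on the bandwagon cue → 1.0
```

Because the two prompts differ only in the cue, any change in the answer is attributable to wording rather than problem logic, which is exactly the isolation the benchmark aims for.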

Frequently asked questions

  • Do standard prompt techniques like chain-of-thought fix this bias? No, according to this research. Standard chain-of-thought and self-debiasing techniques showed no statistically significant reduction in bias sensitivity when tested on cost-effective AI models for software engineering tasks. The study found that explicit elicitation of best practices and axiomatic reasoning, not reformulated prompts, was needed to reduce bias by 51%.
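The mitigation the study reports can also be sketched as a prompt wrapper. The preamble wording below is an assumption for illustration; the paper's exact elicitation cues may differ. The idea is to prepend an explicit request for relevant SE best practices and implicit assumptions before the model commits to an answer, rather than merely rephrasing the question:

```python
# Hypothetical preamble; the study's actual elicitation wording may differ.
BEST_PRACTICE_PREAMBLE = (
    "Before answering, list the software engineering best practices relevant "
    "to this decision, state any implicit assumptions in the question, and "
    "only then choose, justifying the choice from those practices."
)

def debias_prompt(question: str) -> str:
    """Wrap a raw question with the best-practice elicitation preamble."""
    return f"{BEST_PRACTICE_PREAMBLE}\n\nQuestion: {question}"

print(debias_prompt("Should we adopt the framework everyone is switching to?"))
```

This matches the article's point that the method surfaces implicit assumptions before answering instead of just reformulating the question itself.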
