Artificial Intelligence · 8 min read · 24 April 2026
Fairness in sequential ML requires accounting for unequal uncertainty
Lee et al. show how model, feedback, and prediction uncertainty compound disadvantage in online decision systems, and propose uncertainty-aware methods to reduce disparities.
Source: arxiv/cs.AI · Michelle Seng Ah Lee, Kirtan Padh, David Watson, Niki Kilbertus, Jatinder Singh
Uncertainty in sequential decision-making systems distributes unevenly across groups, amplifying historical exclusion; accounting for it is necessary for fair outcomes.
- Three uncertainty types—model, feedback, prediction—each harm disadvantaged groups differently in online ML.
- Unobserved counterfactuals (e.g., whether a denied applicant would have repaid a loan) and sparse data on marginalized populations compound exclusion.
- Selective feedback loops mean systems learn less about underrepresented groups, worsening future decisions.
- Ignoring uncertainty creates compounding harms: reduced access, unrealized gains for subjects, unrealized losses for institutions.
- Uncertainty-aware exploration can reduce outcome variance for disadvantaged groups without sacrificing institutional objectives.
- Fairness audits must diagnose whether uncertainty or incidental noise drives disparities.
- The framework enables practitioners to govern fairness risks in real-world sequential decision systems.
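To make the exploration point concrete, here is a minimal sketch of one uncertainty-aware decision rule. It is not the authors' method: the `1/√n` optimism bonus, the `threshold`, and the `scale` values are illustrative assumptions. The idea it demonstrates is that when outcomes are only observed for approved applicants, a group with little observed data keeps a larger exploration bonus, so the system keeps gathering the information it lacks instead of freezing that group out.

```python
def uncertainty_bonus(n_obs, scale=0.3):
    """Optimism bonus that shrinks as a group's observed outcomes grow.

    The 1/sqrt(n) form is an illustrative assumption, not the paper's rule.
    """
    return scale / ((n_obs + 1) ** 0.5)


def decide(score, n_obs, threshold=0.5, scale=0.3):
    """Approve when the predicted score plus the bonus clears the threshold."""
    return score + uncertainty_bonus(n_obs, scale) >= threshold


# Selective feedback: repayment is observed only for approved applicants,
# so a group with few past approvals retains a large bonus until data arrives.
counts = {"well_observed": 500, "sparse": 10}  # hypothetical observation counts
decisions = {group: decide(0.45, n) for group, n in counts.items()}
print(decisions)  # the sparse group is approved at the same score; the other is not
```

At an identical predicted score of 0.45, the sparsely observed group is approved (its bonus lifts it over the threshold) while the data-rich group is not, so each approval of the sparse group also buys an outcome observation that shrinks future uncertainty.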
Frequently asked questions
- How does uncertainty differ from bias? Bias is a systematic preference for one outcome over another; uncertainty is a lack of information. A system can be unbiased in intent yet unfair in practice if it has less data on one group. Lee et al. argue that fairness requires addressing both: uncertainty-aware fairness means actively reducing information gaps, not just removing statistical correlations.
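The bias-versus-uncertainty distinction above can be shown with a toy simulation (the rates and sample sizes are invented for illustration): an estimator can be unbiased for both groups while being far noisier for the group with less data, so identical decision thresholds still produce unequal error rates.

```python
import random
import statistics

random.seed(0)


def estimate_rate(true_rate, n, trials=2000):
    """Mean and spread of a sample-mean repayment-rate estimator.

    Repeatedly draws n Bernoulli(true_rate) outcomes and averages them,
    returning the mean and standard deviation of the estimates.
    """
    estimates = [
        sum(random.random() < true_rate for _ in range(n)) / n
        for _ in range(trials)
    ]
    return statistics.mean(estimates), statistics.stdev(estimates)


# Same true repayment rate for both groups, different data volumes:
# the estimator is unbiased for each, but the small-sample group's
# estimates scatter much more widely around the truth.
mean_big, sd_big = estimate_rate(0.7, n=400)     # well-observed group
mean_small, sd_small = estimate_rate(0.7, n=20)  # sparsely observed group
print(mean_big, sd_big)
print(mean_small, sd_small)
```

Both means sit near 0.7, so there is no bias; the spread for the small-sample group is several times larger, which is exactly the unequal uncertainty that equal-treatment audits can miss.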