AI · 8 min read · May 2, 2026

AI Governance Fails When Capabilities and Rules Don't Align

McCann argues that most AI systems have mismatched boundaries between what they can do and what governance covers, creating inevitable blind spots.

Source: arxiv/cs.AI · Alan L. McCann · open original ↗

AI governance structurally fails when the boundary of system capabilities diverges from the boundary of governance rules.

  • Every AI system has two independent boundaries: expressiveness (what it can do) and governance (what rules cover).
  • Misalignment creates three regions: governed capabilities (safe), ungoverned capabilities (risk), and rules addressing non-existent capabilities (theater).
  • Rice's theorem proves no algorithm can decide whether arbitrary programs comply with behavioral governance policies.
  • Coterminous governance requires architectural separation of computation from effects, not post-hoc governance layers.
  • Structural governance integrates checks into the execution pipeline rather than running as a separate monitoring system.
  • Current deployed systems treat governance and expressiveness as independent design choices, guaranteeing failure modes.
  • The framework distinguishes effect governance (actions in the world) from output governance (content quality, bias).
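The two-boundary model above can be sketched as simple set operations. This is an illustrative toy, not code from the paper; the capability names are hypothetical, chosen only to make the three regions concrete.

```python
# Model the two independent boundaries as sets of capability names.
# Names are hypothetical examples, not from the paper.
expressiveness = {"summarize", "send_email", "delete_records"}  # what the system can do
governance = {"summarize", "send_email", "approve_loans"}       # what the rules cover

# The three regions created by misalignment:
governed = expressiveness & governance    # safe: capability with a matching rule
ungoverned = expressiveness - governance  # risk: capability no rule covers
theater = governance - expressiveness     # rules for capabilities that don't exist

print(sorted(governed))    # ['send_email', 'summarize']
print(sorted(ungoverned))  # ['delete_records']
print(sorted(theater))     # ['approve_loans']
```

Coterminous governance, in these terms, is the condition where both difference sets are empty.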

Frequently asked questions

  • Coterminous governance is a system property where the boundary of what an AI system can do (expressiveness) exactly matches the boundary of what governance rules cover. McCann argues this requires architectural separation of computation from effects, so governance checks are built into the execution pipeline. Without this alignment, ungoverned capabilities and ineffective rules are structurally inevitable.
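One way to read "architectural separation of computation from effects" is that effects are only reachable through an executor that runs governance checks inline, so the governed boundary matches the expressive boundary by construction. A minimal sketch under that assumption (class and policy names are invented for illustration, not the paper's API):

```python
class GovernanceViolation(Exception):
    """Raised when an effect is attempted without a passing policy."""


class GovernedExecutor:
    """Hypothetical executor: every effect passes through an inline check.

    There is no path to an effect that bypasses governance, which is the
    structural (not post-hoc monitoring) property the paper describes.
    """

    def __init__(self, policies):
        self._policies = policies  # one policy predicate per effect type
        self._effects = {"send_email": self._send_email}

    def perform(self, effect, payload):
        # The check runs inside the execution pipeline itself.
        policy = self._policies.get(effect)
        if policy is None or not policy(payload):
            raise GovernanceViolation(effect)
        return self._effects[effect](payload)

    def _send_email(self, payload):
        return f"sent to {payload['to']}"


# Usage: a policy restricting email effects to an allowed domain.
ex = GovernedExecutor({"send_email": lambda p: p["to"].endswith("@example.com")})
print(ex.perform("send_email", {"to": "a@example.com"}))  # sent to a@example.com
```

An effect with no registered policy fails closed, which is what rules out the "ungoverned capability" region in this toy model.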
