AI · 8 min read · 25 April 2026

Statistical Certification Framework for AI Risk Regulation

Researchers propose a two-stage verification method to quantify acceptable risk thresholds and audit AI system failure rates without model access.

Source: arxiv/cs.AI · Natan Levy, Gadi Perl

A statistical framework uses aviation-style certification to measure and bound AI failure rates for regulatory compliance.

  • Regulators mandate AI safety but lack quantitative definitions of acceptable risk or verification methods.
  • RoMA and gRoMA tools compute upper bounds on a system's failure probability without access to model internals (a minimal sketch of this kind of black-box audit follows this list).
  • The framework treats fixing the acceptable failure probability (δ) and the operational domain (ε) as normative regulatory acts.
  • Approach scales to any AI architecture and produces auditable, legally defensible certificates.
  • Shifts accountability to developers by requiring pre-deployment quantitative safety evidence.
  • Integrates with existing EU AI Act and NIST Risk Management Framework requirements.
  • Black-box verification enables oversight of opaque statistical systems resistant to white-box analysis.
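
The article names the RoMA and gRoMA procedures but does not specify them. As a minimal sketch of what a black-box statistical audit of this kind can look like, assuming i.i.d. sampling from the operational domain and an exact binomial (Clopper-Pearson) upper confidence bound on the failure rate: every function name below is hypothetical, not from the paper.

    from scipy.stats import beta

    def failure_rate_upper_bound(k, n, alpha=0.05):
        # One-sided Clopper-Pearson upper bound on the true failure
        # probability after observing k failures in n independent trials.
        if k >= n:
            return 1.0
        return float(beta.ppf(1.0 - alpha, k + 1, n - k))

    def certify(system, sample_input, is_failure,
                n=100_000, delta=1e-4, alpha=0.05):
        # `system` is the audited model as an opaque callable,
        # `sample_input` draws i.i.d. inputs from the operational domain,
        # and `is_failure` is a domain-specific failure oracle.
        failures = sum(is_failure(system(sample_input())) for _ in range(n))
        bound = failure_rate_upper_bound(failures, n, alpha)
        return bound, bound <= delta  # certificate holds iff bound <= delta

Nothing in this sketch inspects weights or gradients; only input-output behavior is sampled, which is what lets the same procedure apply to any architecture.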

Frequently asked questions

  • How is "acceptable risk" defined? Acceptable risk is defined as a specific failure probability (δ) set by a regulatory authority for a given operational domain (ε). The framework does not define what δ should be; instead, it provides a method to verify that a deployed system's true failure rate stays below that threshold. The choice of δ is a normative regulatory decision, not a technical one.
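
As a back-of-the-envelope illustration of what a given δ costs to verify (our arithmetic, not the paper's): if an audit observes zero failures in n i.i.d. trials, the exact one-sided upper confidence bound at confidence level 1 − α is

    p_upper = 1 − α^(1/n),

and demanding p_upper ≤ δ requires

    n ≥ ln(α) / ln(1 − δ) ≈ ln(1/α) / δ.

For α = 0.05 this is the classical "rule of three", n ≈ 3/δ: certifying δ = 10⁻⁴ takes roughly 30,000 failure-free trials, and each extra order of magnitude of required safety multiplies the sample size by ten.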
