Artificial Intelligence · 8 min read · 26 April 2026

Meta-predicates enforce evidence rules in clinical AI before deployment

A framework using domain-specific languages and epistemological type systems validates that clinical decision logic uses appropriate evidence sources, not just accurate predictions.

Source: arxiv/cs.AI · Michael Bouzinier, Sergey Trifonov, Michael Chumack, Eugenia Lvova, Dmitry Etin · open original ↗

Meta-predicates constrain what evidence types clinical AI rules may use, enabling pre-deployment validation of epistemological soundness.

  • Meta-predicates assert constraints on evidence types before rules execute, catching errors early.
  • Epistemological type system classifies evidence by purpose, domain, scale, and acquisition method.
  • Decision trees reformulated as unate cascades produce per-variant audit trails showing which rule fired.
  • Approach complements post-hoc explainability (LIME, SHAP) with preventive validation.
  • Demonstrated on 5.6M genetic variants; generalizes to any auditable decision domain.
  • Satisfies regulatory requirements for auditability under EU AI Act and FDA guidance.
  • Works with both human-written and AI-generated rules in AnFiSA platform.
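The first two bullets can be sketched in code. The fragment below is a minimal illustration, not the paper's implementation: the evidence taxonomy, rule structure, and meta-predicate are hypothetical stand-ins for the epistemological type system described above, showing how a constraint on evidence types can be checked before any rule executes.

```python
from dataclasses import dataclass

# Hypothetical evidence taxonomy; the paper's type system classifies
# evidence by purpose, domain, scale, and acquisition method.
VALIDATED = {"clinical_trial", "validated_biomarker"}
ANECDOTAL = {"case_report", "expert_opinion"}

@dataclass
class Rule:
    name: str
    purpose: str          # e.g. "diagnostic"
    evidence_types: set   # evidence types the rule's predicates reference

def meta_predicate_diagnostic(rule: Rule) -> bool:
    """Meta-predicate (a rule about rules): diagnostic rules may rely
    only on validated evidence, never on anecdotal sources."""
    if rule.purpose != "diagnostic":
        return True  # this constraint does not apply to other purposes
    return rule.evidence_types <= VALIDATED

# Pre-deployment validation: every rule is checked before any rule runs,
# so epistemological errors are caught before the system goes live.
rules = [
    Rule("r1", "diagnostic", {"validated_biomarker"}),
    Rule("r2", "diagnostic", {"case_report"}),
]
violations = [r.name for r in rules if not meta_predicate_diagnostic(r)]
print(violations)  # ['r2']
```

Because the check runs over rule *definitions* rather than rule *outputs*, it complements post-hoc tools like LIME and SHAP, which can only explain a decision after it has been made.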

Frequently asked questions

  • What is a meta-predicate? A meta-predicate is a rule about rules. It asserts constraints on what types of evidence a clinical decision rule is allowed to use. For example, a meta-predicate might say "diagnostic rules must use validated biomarkers, not anecdotal reports." Meta-predicates catch epistemological errors before a system goes live, complementing post-hoc explanation tools.
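The audit-trail idea from the summary (decision trees reformulated as unate cascades) can be sketched as an ordered list of condition-decision pairs: the first condition that holds fires, and the trail records which rule decided each variant. The rule names, fields, and thresholds below are illustrative assumptions, not the paper's actual cascade.

```python
# A decision tree rewritten as a unate cascade: an ordered sequence of
# (name, condition, decision) entries evaluated top to bottom.
# All identifiers and thresholds here are hypothetical.
CASCADE = [
    ("known_pathogenic", lambda v: v["clinvar"] == "pathogenic", "flag"),
    ("rare_lof",         lambda v: v["af"] < 0.001 and v["lof"],  "flag"),
    ("common",           lambda v: v["af"] > 0.05,                "discard"),
]

def classify(variant):
    """Return (decision, fired_rule): the fired rule name is the
    per-variant audit trail entry regulators can inspect."""
    for rule_name, cond, decision in CASCADE:
        if cond(variant):
            return decision, rule_name
    return "review", "default"  # no rule fired

decision, fired = classify({"clinvar": "benign", "af": 0.0002, "lof": True})
print(decision, fired)  # flag rare_lof
```

Evaluated over millions of variants (the paper reports 5.6M), this structure yields one audit record per variant, which is what makes the approach attractive for EU AI Act and FDA auditability requirements.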
