AI · 5 min read · 17 April 2026

Verifiable model unlearning on edge devices without retraining

ZK-APEX combines sparse masking and zero-knowledge proofs to let providers verify that personalized models forget targeted data while preserving local utility.

Source: arxiv/cs.AI · Mohammad M Maheri, Sunil Cotterill, Alex Davidson, Hamed Haddadi · open original ↗

ZK-APEX enables providers to verify personalized model unlearning on edge devices using zero-knowledge proofs without accessing private data.

  • Personalized models on edge devices complicate deletion requests: providers cannot verify compliance without seeing parameters or data.
  • ZK-APEX applies sparse masking on provider side and Group OBS compensation on client side using blockwise Fisher matrix.
  • Halo2 zero-knowledge proofs allow verification that unlearning occurred without revealing private data or personalized weights.
  • Vision Transformer tasks recover nearly all personalization accuracy; an OPT-125M code model recovers ~70% of original accuracy.
  • Proof generation completes in ~2 hours, 10 million times faster than retraining-based verification, using under 1 GB of memory.
  • Framework addresses real deployment scenario where clients may ignore or falsely claim compliance with deletion requests.
  • Verification remains lightweight on edge devices, critical for practical adoption in distributed ML systems.

Frequently asked questions

  • Machine unlearning removes the influence of specific data points from a trained model to satisfy privacy or copyright requests. Verification is hard because providers cannot access edge device parameters or private data, yet must confirm that the targeted information was actually forgotten. Traditional retraining-based checks are slow and expensive, making lightweight cryptographic verification essential.
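The statement the provider needs checked can be written as a plain predicate. The toy below is illustrative only: ZK-APEX encodes such a relation as a Halo2 circuit and proves it in zero knowledge, so the provider never sees the weights; the `commit` and `unlearning_relation` names, and the SHA-256 commitment, are assumptions for this sketch.

```python
import hashlib
import json

def commit(weights):
    # Binding commitment to the private weight vector. Illustrative only:
    # the real system uses commitments inside a Halo2 circuit, which are
    # also hiding; SHA-256 over the serialized vector stands in here.
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def unlearning_relation(public_mask, public_commitment, private_weights):
    # The relation a proof would attest: the prover knows weights that
    # (a) open the public commitment and (b) are zero wherever the
    # provider's sparse mask fires. Evaluating it directly, as here,
    # reveals the weights; a zero-knowledge proof establishes the same
    # statement without revealing them.
    opens = commit(private_weights) == public_commitment
    zeroed = all(w == 0.0 for w, m in zip(private_weights, public_mask) if m)
    return opens and zeroed
```

Verifying a succinct proof of this relation is what replaces the slow retraining-based check: the provider learns only that the masked entries were forgotten, not the personalized weights themselves.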

Related