- AI · arxiv/cs.LG · 8 min
Simpler Optimizers Make LLM Unlearning More Robust
Research shows that using lower-order optimization methods during LLM unlearning produces forgetting that withstands post-training attacks better than more sophisticated gradient-based approaches.
21 April 2026

- AI · arxiv/cs.AI · 5 min
Verifiable model unlearning on edge devices without retraining
ZK-APEX combines sparse masking with zero-knowledge proofs, letting providers verify that personalized models have forgotten targeted data while preserving local utility.
17 April 2026