AI · 8 min read · April 27, 2026

Poisoning attacks on recommender systems gain potency through worst-case modeling

Researchers propose SharpAP, a method that optimizes fake user injection attacks by targeting worst-case model structures, improving cross-system transferability.

Source: arxiv/cs.LG · Junsong Xie, Yonghui Yang, Pengyang Shao, Le Wu · open original ↗

SharpAP improves fake-profile attacks on recommender systems by optimizing against worst-case victim models rather than fixed surrogates.

  • Existing poisoning attacks assume fake data crafted for one model transfers to others; this assumption breaks under structural differences.
  • SharpAP uses sharpness-aware minimization to identify approximate worst-case victim models during the attack process.
  • The method casts the attack as a tri-level optimization: an inner level trains the surrogate on the poisoned data, a middle level perturbs that surrogate toward the worst-case (highest-loss) victim model nearby, and an outer level optimizes the fake profiles against that worst-case model.
  • Poisoned data optimized for worst-case models shows reduced sensitivity to model architecture shifts.
  • Experiments on three real-world datasets show SharpAP significantly increases attack success across diverse recommender architectures.
  • Attackers typically lack knowledge of deployed victim systems, forcing reliance on surrogate models as proxies.
  • Overfitting to a single surrogate model degrades attack performance when the actual victim uses different structures.
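The worst-case step above follows the sharpness-aware minimization recipe: perturb the surrogate's parameters along the normalized loss gradient, within a small radius, to find the nearby model on which the current poisoned data performs worst. A minimal sketch on a toy matrix-factorization surrogate (shapes, the `rho` radius, and all function names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def mf_loss(U, V, R, mask):
    # Squared error of a matrix-factorization surrogate on observed entries.
    diff = (U @ V.T - R) * mask
    return 0.5 * np.sum(diff ** 2)

def mf_grads(U, V, R, mask):
    # Gradients of mf_loss with respect to the user and item factors.
    diff = (U @ V.T - R) * mask
    return diff @ V, diff.T @ U

def sam_worst_case(U, V, R, mask, rho=0.05):
    # Sharpness-aware perturbation: step along the normalized gradient to
    # approximate the worst-case (highest-loss) model within an L2 ball of
    # radius rho around the current surrogate parameters.
    gU, gV = mf_grads(U, V, R, mask)
    norm = np.sqrt(np.sum(gU ** 2) + np.sum(gV ** 2)) + 1e-12
    return U + rho * gU / norm, V + rho * gV / norm

rng = np.random.default_rng(0)
R = rng.integers(1, 6, size=(20, 15)).astype(float)   # toy rating matrix
mask = (rng.random((20, 15)) < 0.3).astype(float)     # observed entries
U = rng.normal(0, 0.1, (20, 4))                       # user factors
V = rng.normal(0, 0.1, (15, 4))                       # item factors

base = mf_loss(U, V, R, mask)
Uw, Vw = sam_worst_case(U, V, R, mask)
worst = mf_loss(Uw, Vw, R, mask)   # loss at the perturbed, harder model
```

In the full tri-level attack this perturbed model, rather than the surrogate itself, would be the target when evaluating and updating the fake profiles, which is what reduces overfitting to any one architecture.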

Frequently asked questions

  • Sharpness-aware poisoning (SharpAP) optimizes fake user profiles not just for a single surrogate model, but for a worst-case victim model identified through sharpness-aware minimization. Standard attacks assume poisoned data crafted for one model transfers directly to others. SharpAP improves transferability by making poisoning robust to structural differences between models, reducing overfitting to the surrogate.
