- AI · hackernoon · 2 min
Spam Filters Built the Foundation for Adversarial ML
Early inbox battles between spammers and filters created the first real-world adversarial machine learning laboratory, shaping defensive AI research.
April 29, 2026
- AI · arxiv/cs.AI · 8 min
Poisoned Pretraining: Hidden Attacks Embedded in LLM Training Data
Researchers demonstrate how adversaries can plant dormant malicious logic in large language models by seeding poisoned content across obscure websites, evading detection until triggered.
April 27, 2026
- AI · arxiv/cs.LG · 8 min
Poisoning Attacks on Recommender Systems Gain Potency Through Worst-Case Modeling
Researchers propose SharpAP, a method that optimizes fake-user injection attacks against worst-case model structures, improving the attacks' transferability across recommender systems.
April 27, 2026