- AI · arxiv/cs.LG · 8 min
Log-odds aggregation handles unknown state spaces in forecast combining
Chen, Peng, and Tang propose a closed-form aggregator for combining expert forecasts when the underlying outcome range is unknown, achieving tighter regret bounds than prior methods.
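The paper's exact aggregator isn't reproduced here, but classic log-odds (logit) pooling conveys the idea: average expert forecasts in log-odds space rather than probability space. A minimal sketch, with equal weights assumed as a default:

```python
import math

def logit(p):
    """Map a probability in (0, 1) to log-odds."""
    return math.log(p / (1.0 - p))

def log_odds_pool(probs, weights=None):
    """Combine expert probability forecasts by a weighted
    average in log-odds space, then map back to a probability."""
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    z = sum(w * logit(p) for w, p in zip(weights, probs))
    return 1.0 / (1.0 + math.exp(-z))
```

Unlike a plain arithmetic mean of probabilities, pooling in log-odds space lets confident experts pull the combined forecast toward the extremes.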
April 28, 2026
- AI · arxiv/cs.LG · 8 min
Poisoning attacks on recommender systems gain potency through worst-case modeling
Researchers propose SharpAP, a method that optimizes fake-user injection attacks by targeting worst-case model structures, improving cross-system transferability.
April 27, 2026
- AI · arxiv/cs.AI · 8 min
Testing POMDP Policies Against Sensor Drift and Model Mismatch
New framework quantifies how much observation noise a decision policy can tolerate before performance collapses, with polynomial-time algorithms for real systems.
April 26, 2026
- AI · arxiv/cs.AI · 8 min
Trust-weighted SSL improves aerial image learning under corruption
Additive-residual trust weights boost self-supervised learning robustness when aerial images degrade, outperforming standard contrastive methods on benchmark datasets.
April 24, 2026
- AI · arxiv/cs.AI · 8 min
Supervised Learning Has Built-In Geometric Blindness
A mathematical proof shows that empirical risk minimization must retain sensitivity to features that correlate with labels but are irrelevant at test time: a structural constraint, not a training bug.
April 24, 2026
- AI · arxiv/cs.LG · 8 min
Simpler Optimizers Make LLM Unlearning More Robust
Research shows that using lower-order optimization methods during LLM unlearning produces forgetting that resists post-training attacks better than sophisticated gradient-based approaches.
April 21, 2026
- AI · arxiv/cs.LG · 4 min
Weak Labels Fail Across Time Even When Domain Transfer Works
A study of CRISPR experiments reveals that supervision drift, where the labeling mechanism itself shifts, causes model collapse in temporal transfer despite strong in-domain performance.
April 21, 2026
- AI · arxiv/cs.LG · 3 min
Transformer models outperform CNNs in prostate MRI segmentation
SwinUNETR achieves a 5-point Dice improvement over a standard UNet when trained on mixed-reader datasets, suggesting that transformer attention handles annotation variability better.
April 17, 2026
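For reference, the Dice score cited in the segmentation result above is a standard overlap metric between a predicted and a ground-truth mask. A minimal sketch on flat binary masks (the actual evaluation pipeline is not described in the summary):

```python
def dice(pred, target):
    """Dice coefficient for two binary masks given as
    sequences of 0/1: 2*|intersection| / (|pred| + |target|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # Convention: two empty masks overlap perfectly.
    return 2.0 * inter / total if total else 1.0
```

A "5-point" improvement means the Dice score, usually reported on a 0-100 scale, rose by 5 absolute points.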