- Artificial Intelligence · arxiv/cs.LG · 8 min
Log-odds aggregation handles unknown state spaces in forecast combining
Chen, Peng, and Tang propose a closed-form aggregator for combining expert forecasts when the underlying outcome range is unknown, achieving tighter regret bounds than prior methods.
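The blurb does not reproduce the authors' closed-form aggregator, but the family of methods the title refers to can be illustrated with standard log-odds (logit) pooling: average expert forecasts in log-odds space rather than probability space, which avoids the over-hedging of linear averaging near 0 and 1. This is a generic sketch, not the paper's method; `log_odds_pool` and the uniform weights are illustrative assumptions.

```python
import math

def log_odds_pool(probs, weights=None):
    """Combine binary-event probability forecasts by averaging in log-odds space.

    Each probability p is mapped to its logit log(p / (1 - p)), the weighted
    logits are summed, and the result is mapped back through the sigmoid.
    """
    if weights is None:
        weights = [1.0 / len(probs)] * len(probs)
    pooled_logit = sum(w * math.log(p / (1.0 - p))
                       for w, p in zip(weights, probs))
    return 1.0 / (1.0 + math.exp(-pooled_logit))

# Three experts forecasting the same binary event:
print(round(log_odds_pool([0.9, 0.8, 0.95]), 3))  # → 0.898
```

Note that the pooled forecast (0.898) sits closer to the extreme than the linear average (0.883) would; logit pooling lets confident, agreeing experts reinforce each other.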
28 April 2026
- Artificial Intelligence · arxiv/cs.LG · 8 min
Poisoning attacks on recommender systems gain potency through worst-case modeling
Researchers propose SharpAP, a method that optimizes fake user injection attacks by targeting worst-case model structures, improving cross-system transferability.
27 April 2026
- Artificial Intelligence · arxiv/cs.AI · 8 min
Testing POMDP Policies Against Sensor Drift and Model Mismatch
A new framework quantifies how much observation noise a decision policy can tolerate before performance collapses, with polynomial-time algorithms for real systems.
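The framework itself is not detailed in this blurb; as a toy illustration of the question it answers, the sketch below sweeps sensor-flip noise on a hypothetical binary task and reports the largest noise level at which a naive sensor-following policy still clears a success threshold. Everything here (the task, the policy, `noise_tolerance`) is assumed for illustration and is not the paper's algorithm.

```python
import random

def policy(obs):
    # Hypothetical memoryless policy: act on the sensor reading directly.
    return obs

def success_rate(noise, trials=20000, seed=0):
    """Monte Carlo estimate of task success when each binary sensor
    reading is flipped with probability `noise`."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        state = rng.randint(0, 1)
        obs = state if rng.random() > noise else 1 - state
        hits += int(policy(obs) == state)
    return hits / trials

def noise_tolerance(threshold=0.75, step=0.01):
    """Largest noise level at which the policy still clears `threshold`."""
    eps = 0.0
    while success_rate(eps + step) >= threshold:
        eps += step
    return round(eps, 2)

print(noise_tolerance())  # roughly 0.25 under these assumptions
```

For this toy sensor-following policy, success decays as roughly 1 - noise, so a 0.75 threshold is crossed near noise 0.25; the appeal of the paper's framing is computing such tolerances for policies where no closed form exists.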
26 April 2026
- Artificial Intelligence · arxiv/cs.AI · 8 min
Trust-weighted SSL improves aerial image learning under corruption
Additive-residual trust weights boost self-supervised learning robustness when aerial images degrade, outperforming standard contrastive methods on benchmark datasets.
24 April 2026
- Artificial Intelligence · arxiv/cs.AI · 8 min
Supervised Learning Has Built-In Geometric Blindness
Mathematical proof shows empirical risk minimization must preserve sensitivity to label-correlated but test-irrelevant features—a structural constraint, not a training bug.
24 April 2026
- Artificial Intelligence · arxiv/cs.LG · 8 min
Simpler Optimizers Make LLM Unlearning More Robust
Research shows that using lower-order optimization methods during LLM unlearning produces forgetting that resists post-training attacks better than sophisticated gradient-based approaches.
21 April 2026
- Artificial Intelligence · arxiv/cs.LG · 4 min
Weak Labels Fail Across Time Even When Domain Transfer Works
A study of CRISPR experiments shows that supervision drift, where the labeling mechanism itself shifts over time, causes model collapse in temporal transfer despite strong in-domain performance.
21 April 2026
- Artificial Intelligence · arxiv/cs.LG · 3 min
Transformer models outperform CNNs in prostate MRI segmentation
SwinUNETR achieves a 5-point Dice improvement over a standard UNet when trained on mixed-reader datasets, suggesting transformer attention handles annotation variability better.
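The Dice coefficient behind the reported 5-point gain is a standard overlap metric, 2|A∩B| / (|A| + |B|), for comparing a predicted segmentation mask against the reference. A self-contained sketch on toy flat masks (not the paper's data or models):

```python
def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two flat binary masks: 2|A∩B| / (|A| + |B|).

    `eps` guards against division by zero when both masks are empty.
    """
    inter = sum(p & t for p, t in zip(pred, target))
    return (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

# Toy masks: the prediction recovers 3 of the 4 target voxels
# and adds one false positive.
target = [1, 1, 1, 1, 0, 0, 0, 0]
pred   = [1, 1, 1, 0, 1, 0, 0, 0]
print(round(dice_score(pred, target), 3))  # → 0.75
```

Dice is reported on a 0-1 scale, so a "5-point" improvement means, for example, moving from 0.80 to 0.85 mean overlap with the reference masks.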
17 April 2026