- Artificial Intelligence · arxiv/cs.LG · 4 min
Synthetic Computers Enable Agent Training at Scale
Researchers create realistic digital workspaces to train AI agents on long-horizon productivity tasks, scaling from thousands to potentially billions of simulated user environments.
May 3, 2026

- Artificial Intelligence · arxiv/cs.LG · 8 min
Mixed Precision Training Stabilizes Neural ODEs
Researchers demonstrate a framework that reduces memory use by 50% and speeds up neural ODE training 2x by carefully mixing low and high precision arithmetic.
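The paper's exact scheme is not given in this summary, but the core idea of mixed-precision ODE training can be sketched: evaluate the (expensive) vector field in float16 while accumulating the integration state in float32, so per-step rounding error does not compound over many solver steps. The forward-Euler integrator, the linear dynamics, and all names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def dynamics_fp16(t, y):
    """Vector field evaluated in low precision (float16).
    The linear system dy/dt = A @ y is a stand-in for a learned network."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]], dtype=np.float16)  # hypothetical dynamics
    return A @ y.astype(np.float16)

def integrate_mixed(y0, t0, t1, steps):
    """Forward-Euler integration: each derivative is computed in float16
    (cheap), but the running state is kept in float32 so rounding errors
    do not accumulate across thousands of steps."""
    y = y0.astype(np.float32)              # high-precision accumulator
    h = np.float32((t1 - t0) / steps)
    t = np.float32(t0)
    for _ in range(steps):
        dy = dynamics_fp16(t, y)           # low-precision vector field
        y = y + h * dy.astype(np.float32)  # accumulate in high precision
        t += h
    return y

# The dynamics above are a pure rotation, so the state should stay
# close to the unit circle despite the float16 derivative evaluations.
y = integrate_mixed(np.array([1.0, 0.0]), 0.0, 1.0, steps=1000)
print(np.linalg.norm(y))
```

Keeping the accumulator in float32 is the same design choice that loss-scaling-free mixed-precision training makes for optimizer states: low precision where values are read once, high precision where small increments are summed.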
May 3, 2026

- Artificial Intelligence · arxiv/cs.AI · 8 min
Junk Data Degrades LLM Reasoning; Twitter Study Shows Lasting Harm
Continual training on low-quality social media text causes measurable cognitive decline in language models, with reasoning and safety capabilities dropping significantly.
April 23, 2026

- Artificial Intelligence · arxiv/cs.AI · 8 min
Token Importance in On-Policy Distillation: Entropy and Disagreement
Research identifies two regions of high-value tokens in knowledge distillation: high-entropy positions and low-entropy positions where student and teacher disagree, enabling 50–80% token reduction.
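The selection rule the summary describes can be sketched directly: keep a position if the teacher's distribution has high entropy, or if it is confident but the student's top prediction disagrees; confident positions where both agree contribute little and are dropped. The threshold value and function names are illustrative assumptions, not the paper's recipe.

```python
import numpy as np

def entropy(p, axis=-1):
    """Shannon entropy (nats) of a probability distribution."""
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)), axis=axis)

def select_tokens(teacher_probs, student_probs, h_thresh=1.0):
    """Mark positions worth distilling on, per the two regions named in
    the summary: high teacher entropy, or student/teacher argmax
    disagreement. Low-entropy positions where both agree are dropped."""
    h = entropy(teacher_probs)                                   # (T,)
    disagree = teacher_probs.argmax(-1) != student_probs.argmax(-1)
    return (h > h_thresh) | disagree

# Toy vocabulary of 4 tokens, 3 positions.
teacher = np.array([[.25, .25, .25, .25],   # high entropy: keep
                    [.97, .01, .01, .01],   # confident, student agrees: drop
                    [.97, .01, .01, .01]])  # confident, student disagrees: keep
student = np.array([[.40, .20, .20, .20],
                    [.90, .04, .03, .03],
                    [.05, .90, .03, .02]])
mask = select_tokens(teacher, student)
print(mask)  # → [ True False  True]
```

On this toy batch the rule keeps 2 of 3 positions; the 50–80% reduction in the summary suggests that, in practice, most positions fall in the confident-and-agreeing region.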
April 17, 2026

- Artificial Intelligence · arxiv/cs.LG · 8 min
INT4 Quantization Fails After FP32 Convergence in Predictable Phases
Post-training quantization assumes converged models are ready to compress, but INT4 quantization collapses in a three-phase pattern tied to weight updates, not learning rate decay.
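For reference, the kind of post-training INT4 scheme whose failure mode the paper studies can be sketched as symmetric round-to-nearest quantization onto the 15 signed levels [-7, 7] with one per-tensor scale. This is a common baseline recipe, assumed here for illustration; the paper's exact scheme (and the three-phase collapse it reports) is not reproduced by this snippet.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric round-to-nearest INT4 quantization: a single per-tensor
    scale maps weights onto integer levels in [-7, 7]."""
    scale = np.abs(w).max() / 7.0
    q = np.clip(np.round(w / scale), -7, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from INT4 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=1024).astype(np.float32)
q, s = quantize_int4(w)
err = np.abs(dequantize(q, s) - w).max()
print(err <= s / 2 + 1e-8)  # round-to-nearest error is at most half a step
```

The point of the paper is that this error bound on a converged checkpoint is not the whole story: whether the quantized model still works depends on how the weights got there, not just on how close the rounded values are.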
April 17, 2026