- AI · arxiv/cs.LG · 8 min
Mixed Precision Training Stabilizes Neural ODEs
Researchers demonstrate a framework that halves memory use and doubles neural ODE training speed by carefully mixing low- and high-precision arithmetic.
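The teaser does not show the paper's framework, but the core idea of mixed-precision ODE integration can be sketched generically: evaluate the (expensive) derivative in float16 while accumulating the state in float32, so rounding errors do not compound in the integrator. The function names here are illustrative, not from the paper.

```python
import numpy as np

def euler_mixed(f, y0, t0, t1, steps):
    """Forward-Euler integration of dy/dt = f(t, y) in mixed precision.

    The derivative is evaluated in float16 (cheap, low precision);
    the running state is accumulated in float32 (numerically stable).
    """
    y = y0.astype(np.float32)                 # high-precision accumulator
    h = np.float32((t1 - t0) / steps)
    t = np.float32(t0)
    for _ in range(steps):
        # Low-precision derivative evaluation, cast back up before accumulating.
        dy = f(t, y.astype(np.float16)).astype(np.float32)
        y = y + h * dy                        # high-precision state update
        t = t + h
    return y

# Example: dy/dt = -y with y(0) = 1; exact solution y(1) = e^{-1} ≈ 0.3679.
y = euler_mixed(lambda t, y: -y, np.array([1.0]), 0.0, 1.0, 1000)
```

Keeping the accumulator in float32 is the standard mixed-precision trick: the float16 rounding error enters only through the derivative, scaled by the step size, instead of corrupting the state directly.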
May 3, 2026 · Read →
- AI · arxiv/cs.LG · 4 min
Selective-Update RNNs Match Transformers While Using Less Memory
A new RNN architecture learns when to update its internal state, preserving memory across long sequences and avoiding wasted computation on redundant inputs.
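The selective-update mechanism can be illustrated with a toy recurrence: a scalar gate decides per step whether the hidden state is rewritten or carried through untouched. All weights here are random placeholders; in the paper's setting the gate would be learned end to end.

```python
import numpy as np

rng = np.random.default_rng(0)

def selective_rnn(xs, d_hidden=8, threshold=0.5):
    """Toy selective-update RNN (illustrative, not the paper's architecture).

    A sigmoid gate on the input decides whether to recompute the hidden
    state; below the threshold the step is skipped entirely, so the old
    state (and its memory of earlier tokens) survives unchanged.
    """
    d_in = xs.shape[1]
    Wg = rng.normal(size=(d_in, 1)) * 0.5          # gate weights (hypothetical)
    Wh = rng.normal(size=(d_in, d_hidden)) * 0.5   # input-to-hidden weights
    Uh = rng.normal(size=(d_hidden, d_hidden)) * 0.5
    h = np.zeros(d_hidden)
    updates = 0
    for x in xs:
        g = 1.0 / (1.0 + np.exp(-(x @ Wg)[0]))     # sigmoid gate in [0, 1]
        if g > threshold:
            h = np.tanh(x @ Wh + h @ Uh)           # state is rewritten
            updates += 1
        # else: hard skip -- no matmuls, state carried through unchanged
    return h, updates

h, n_updates = selective_rnn(rng.normal(size=(20, 4)))
```

The hard skip is where both savings come from: skipped steps cost no compute, and the untouched state cannot be degraded by redundant input.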
May 3, 2026 · Read →
- AI · arxiv/cs.AI · 8 min
Schema-Grounded Memory Outperforms Search-Based AI Recall
Treating AI memory as a structured database rather than a retrieval problem improves accuracy and reliability for production agents.
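The contrast with retrieval can be made concrete with a minimal sketch: facts land in a typed table with constraints, and recall is an exact query rather than a fuzzy similarity search. The schema and table names below are illustrative, not from the article.

```python
import sqlite3

# Toy schema-grounded agent memory: one authoritative row per (user, key)
# fact, enforced by the schema itself.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE user_prefs (
        user_id TEXT NOT NULL,
        key     TEXT NOT NULL,
        value   TEXT NOT NULL,
        PRIMARY KEY (user_id, key)   -- no duplicate or conflicting facts
    )""")

def remember(user_id, key, value):
    # Upsert: a newer observation supersedes the old one, so recall never
    # returns stale duplicates (a common failure mode of vector search).
    db.execute(
        "INSERT INTO user_prefs VALUES (?, ?, ?) "
        "ON CONFLICT(user_id, key) DO UPDATE SET value = excluded.value",
        (user_id, key, value))

def recall(user_id, key):
    row = db.execute(
        "SELECT value FROM user_prefs WHERE user_id = ? AND key = ?",
        (user_id, key)).fetchone()
    return row[0] if row else None

remember("alice", "timezone", "UTC+3")
remember("alice", "timezone", "UTC+2")   # correction overwrites the old fact
```

Recall is exact and deterministic: `recall("alice", "timezone")` yields the latest value, and a fact the agent never stored returns `None` instead of a plausible-looking near miss.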
May 1, 2026 · Read →
- AI · hackernoon · 6 min
Continuity in AI agents requires architecture, not bigger memory stores
A solo builder argues that persistent AI identity depends on scheduled cognition cycles and narrative compression, not retrieval systems.
April 30, 2026 · Read →
- AI · arxiv/cs.AI · 6 min
OjaKV: Online Low-Rank Compression for LLM Key-Value Caches
A hybrid storage and adaptive subspace method reduces KV cache memory by compressing intermediate tokens while preserving critical anchors, compatible with FlashAttention.
April 20, 2026 · Read →
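The anchor-preserving compression idea from the last item can be sketched offline with a plain SVD: the first and last few token rows of the key cache are stored exactly, while the middle rows are replaced by low-rank coefficients in a shared subspace. (OjaKV itself maintains that subspace online with Oja's rule; this batch version only illustrates the storage layout, and all names are hypothetical.)

```python
import numpy as np

def compress_kv(K, rank=4, n_anchor=2):
    """Anchor-preserving low-rank compression of a key matrix K (tokens x dim).

    The first/last `n_anchor` rows (attention sinks / recent tokens) are
    kept exactly; the middle rows are projected onto a rank-`rank` subspace.
    """
    head, mid, tail = K[:n_anchor], K[n_anchor:-n_anchor], K[-n_anchor:]
    U, s, Vt = np.linalg.svd(mid, full_matrices=False)
    coeffs = U[:, :rank] * s[:rank]   # (n_mid, rank): per-token coefficients
    basis = Vt[:rank]                 # (rank, dim): shared subspace, stored once
    return head, coeffs, basis, tail

def decompress_kv(head, coeffs, basis, tail):
    # Anchors come back bit-exact; middle tokens are rank-`rank` approximations.
    return np.vstack([head, coeffs @ basis, tail])

rng = np.random.default_rng(0)
K = rng.normal(size=(32, 16))         # 32 cached tokens, head dim 16
K_hat = decompress_kv(*compress_kv(K, rank=8))
```

Memory drops because the middle block stores `rank` numbers per token instead of the full head dimension, while the exactly-kept anchors protect the positions attention depends on most.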