AI · 5 min read · May 1, 2026

Self-Evolving Skills Let Language Models Learn From Long Context

Ctx2Skill uses multi-agent loops to automatically extract and refine skills from dense context without human annotation or external feedback.

Source: arxiv/cs.AI · Shuzheng Si, Haozhe Zhao, Yu Lei, Qingyi Wang, Dingwei Chen, Zhitong Wang, Zhenhailong Wang, Kangyang Luo, Zheng Wang, Gang Chen, Fanchao Qi, Minjia Zhang, Maosong Sun · open original ↗

A framework autonomously discovers and refines natural-language skills from complex context to improve language model reasoning without manual supervision.

  • Language models struggle with reasoning over long, dense contexts beyond their training knowledge.
  • Manual skill annotation is expensive; automated skill construction lacks feedback signals.
  • Ctx2Skill runs three agents in a loop: a Challenger generates probing tasks, a Reasoner solves them, and a Judge provides binary correct/incorrect feedback (a code sketch of the loop follows this list).
  • Proposer and Generator agents analyze failures and synthesize skill updates for both the Challenger and the Reasoner.
  • A Cross-time Replay mechanism prevents adversarial collapse by selecting balanced skill sets across iterations.
  • Extracted skills plug into any language model to enhance context learning performance.
  • Tested on CL-bench tasks; shows consistent improvement across different backbone models.
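
To make the loop concrete, here is a minimal Python sketch of how the agents might interact. Everything below is an assumption for illustration: the `ask` helper, the prompts, and `select_balanced` are hypothetical stand-ins, not the paper's actual prompts, models, or skill format.

```python
def ask(instruction: str, content: str) -> str:
    """Hypothetical stand-in for any chat-completion client."""
    raise NotImplementedError("plug in your LLM client here")

def evolve_skills(context: str, rounds: int = 5) -> list[str]:
    challenger_skills: list[str] = []  # steer harder probing tasks
    reasoner_skills: list[str] = []    # steer reasoning over the context
    history: list[tuple[list[str], bool]] = []  # snapshots for replay

    for _ in range(rounds):
        # 1. Challenger: write a probing task grounded in the context.
        task = ask(
            "Write a task that tests understanding of this context, "
            "guided by these skills:\n" + "\n".join(challenger_skills),
            context,
        )

        # 2. Reasoner: attempt the task with the current skill set.
        answer = ask(
            "Solve the task using the context and these skills:\n"
            + "\n".join(reasoner_skills),
            f"Context:\n{context}\n\nTask:\n{task}",
        )

        # 3. Judge: binary feedback only, no human labels or external tools.
        verdict = ask(
            "Is this answer correct? Reply yes or no.",
            f"Task:\n{task}\n\nAnswer:\n{answer}",
        )
        correct = verdict.strip().lower().startswith("yes")

        # 4. Proposer analyzes the outcome; Generator synthesizes a skill
        #    update, so Challenger and Reasoner co-evolve.
        if correct:
            challenger_skills.append(
                ask("Write one skill for generating harder probing tasks.",
                    task)
            )
        else:
            analysis = ask(
                "Explain why the reasoner failed on this task.",
                f"Task:\n{task}\n\nAnswer:\n{answer}",
            )
            reasoner_skills.append(
                ask("Write one reusable natural-language skill that "
                    "would prevent this failure.", analysis)
            )
        history.append((list(reasoner_skills), correct))

    # 5. Cross-time Replay (stand-in): select a balanced skill set across
    #    rounds rather than keeping only the latest snapshot, so the
    #    Challenger cannot collapse into adversarially unsolvable tasks.
    return select_balanced(history)

def select_balanced(history: list[tuple[list[str], bool]]) -> list[str]:
    # Hypothetical placeholder for the paper's replay-based selection;
    # here we naively keep the largest skill snapshot.
    return max(history, key=lambda h: len(h[0]))[0] if history else []
```

The binary Judge signal is what lets the loop run without human annotation: failures feed the Proposer/Generator update step, while successes push the Challenger toward harder tasks.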

Frequently asked questions

  • What is context learning, and why do models struggle with it? Context learning means a language model must reason over information provided in the input (e.g., a long document or technical specification) that is not in its training data. Models struggle because they lack explicit procedures for extracting and applying rules from dense, unfamiliar contexts. Ctx2Skill addresses this by automatically discovering skills, i.e., natural-language procedures that guide the model to reason correctly over new context.
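
Because the skills are plain natural-language procedures, plugging them into a new backbone model amounts to prompt assembly. A minimal sketch, assuming hypothetical skill strings (in practice these would come from the Ctx2Skill loop):

```python
# Hypothetical skill strings for illustration; real ones are the
# natural-language procedures extracted by the Ctx2Skill loop.
skills = [
    "First list every rule or definition the context introduces.",
    "Quote the passage that supports each step of your answer.",
]

def build_prompt(context: str, question: str) -> str:
    # Skills are prepended as procedures for the backbone to follow;
    # no fine-tuning or model-specific changes are required.
    skill_block = "\n".join(f"- {s}" for s in skills)
    return (
        f"Follow these procedures:\n{skill_block}\n\n"
        f"Context:\n{context}\n\nQuestion:\n{question}"
    )
```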
