AI · 5 min read · 23 April 2026

Transformers learn graph connectivity selectively, not universally

New research shows transformers can infer transitive relations on grid-structured graphs but fail on fragmented ones, with scaling helping only certain architectures.

Source: arxiv/cs.AI · Amit Roy, Abulhair Saparov

Transformers learn to infer transitive relations on grid-like graphs but struggle with disconnected graph structures.

  • Transformers can infer connectivity on grid-structured directed graphs where nodes embed in low-dimensional space.
  • Graph dimensionality predicts learning difficulty; higher-dimensional grids challenge transformers more than low-dimensional ones.
  • Increasing model scale improves generalization to connectivity inference on grid graphs.
  • Transformers fail to learn connectivity when graphs contain many disconnected components.
  • Transitive reasoning ability depends on graph topology, not just model capacity or training data volume.
  • Prior work tested in-context learning of transitivity; this study examines learning from training examples.
  • Results suggest transformers rely on geometric structure rather than abstract logical rules for reasoning.
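The grid setting behind these findings can be illustrated with a toy task (a minimal sketch, not the paper's actual data pipeline; the function names `grid_graph` and `reachable` are illustrative): on a directed 2D grid where edges point right and down, reachability is exactly the coordinate-wise partial order, a transitive relation whose positive and negative node pairs could serve as training examples.

```python
from collections import deque
from itertools import product

def grid_graph(width, height):
    """Directed 2D grid: edges point right and down, so node (a, b)
    reaches (c, d) iff a <= c and b <= d (a transitive relation)."""
    edges = set()
    for x, y in product(range(width), range(height)):
        if x + 1 < width:
            edges.add(((x, y), (x + 1, y)))
        if y + 1 < height:
            edges.add(((x, y), (x, y + 1)))
    return edges

def reachable(edges, src):
    """BFS from src; the union over all sources is the transitive closure."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return seen

edges = grid_graph(3, 3)
# (0, 0) reaches every node that is >= componentwise; (2, 2) reaches only itself.
print(sorted(reachable(edges, (0, 0))))
```

A fragmented counterpart, in this framing, would be many such small grids with no edges between them; pairs drawn across components are then never connected, which is the regime where the paper reports transformers failing to learn the relation.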

Frequently asked questions

  • Transformers can learn transitive reasoning on grid-structured graphs where nodes embed naturally in low-dimensional space. However, they struggle when graphs contain many disconnected components. Success depends on graph topology, not just model size. Scaling helps on grid graphs but doesn't fix failures on fragmented structures.
