AI · 5 min read · April 29, 2026

MotionBricks: Real-Time Motion Generation at 15,000 FPS

A modular generative framework scales motion synthesis to production speeds while supporting multi-modal control without requiring animation expertise.

Source: arxiv/cs.LG · Tingwu Wang, Olivier Dionne, Michael De Ruyter, David Minor, Davis Rempe, Kaifeng Zhao, Mathis Petrovich, Ye Yuan, Chenran Li, Zhengyi Luo, Brian Robison, Xavier Blackwell, Bernardo Antoniazzi, Xue Bin Peng, Yuke Zhu, Simon Yuen · open original ↗

MotionBricks generates diverse character motions in real-time by combining modular latent models with smart primitives for intuitive control.

  • Modular latent backbone trains on 350,000+ motion clips in a single model.
  • Achieves 15,000 FPS throughput with 2 ms latency on production hardware.
  • Smart primitives unify navigation and object interaction without animation expertise.
  • Supports multi-modal control: velocity commands, style selection, keyframe precision.
  • Tested in humanoid-robot and animation pipelines to demonstrate generalization.
  • Outperforms existing text/tag-driven models on quality and scalability metrics.
  • Plug-and-play assembly model reduces integration friction in game/film pipelines.

Frequently asked

  • How does MotionBricks reach real-time speeds? It uses a modular latent backbone designed for real-time inference from the start, not just post-hoc optimization. A single model trained on 350,000+ motion clips avoids the overhead of maintaining multiple specialized models, and smart primitives reduce the need for complex control logic, cutting latency further. The result is 15,000 FPS throughput at 2 ms latency—fast enough for interactive games and live robot control.
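A quick back-of-envelope check on the two numbers above: a 2 ms per-step latency caps a single stream at 500 FPS, so 15,000 FPS aggregate throughput implies roughly 30 motions generated in parallel per step. This assumes both figures describe the same device; the paper's actual benchmarking setup may differ.

```python
# Reconciling 15,000 FPS throughput with 2 ms latency (assumption: one device,
# figures as reported in the summary above).
throughput_fps = 15_000
latency_s = 0.002

# One stream stepping every 2 ms produces at most 1 / 0.002 = 500 frames/s.
single_stream_fps = 1 / latency_s

# So aggregate throughput must come from parallel generation (batching):
parallel_streams = throughput_fps * latency_s

print(single_stream_fps, parallel_streams)  # → 500.0 30.0
```

In other words, the headline throughput is a batched figure, while the 2 ms latency is what an individual interactive consumer (a game character or robot controller) would observe.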
