AI · 4 min read · April 20, 2026

Quantum-LSTM hybrid cuts physics model training data by 100×

Federated learning with quantum-enhanced LSTM achieves classical accuracy on SUSY classification using 20K samples instead of 2M, with under 300 parameters.

Source: arXiv/cs.LG · Abhishek Sawaika, Durga Pritam Suggisetti, Udaya Parampalli, Rajkumar Buyya · open original ↗

A hybrid quantum-classical LSTM trained in a federated setup matches classical deep learning accuracy on a high-energy physics classification task while using 100× fewer training samples.

  • Combines quantum variational circuits with LSTM to capture complex feature relationships and temporal correlations.
  • Federated architecture distributes training across nodes, reducing computational burden on individual NISQ devices.
  • Reaches accuracy within ±1% of classical benchmarks on the SUSY dataset using only 20K samples versus the 2M-sample baseline.
  • Model footprint under 300 parameters enables deployment on resource-constrained distributed infrastructure.
  • Outperforms standalone variational quantum circuit approaches in both accuracy and data efficiency.
  • Addresses practical NISQ hardware limitations by avoiding reliance on single powerful quantum processor.
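The hybrid cell described in the bullets above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the 2-qubit circuit, gate wiring, and parameter choices are all assumptions made for clarity. The idea is that a tiny variational circuit replaces the large dense projections inside an LSTM cell, which is how the parameter count stays under a few hundred.

```python
import numpy as np

def ry(theta):
    # Single-qubit RY rotation matrix.
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with qubit 0 as control, qubit 1 as target (basis order |00>,|01>,|10>,|11>).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

def variational_circuit(inputs, weights):
    """Tiny 2-qubit variational circuit (illustrative): angle-encode the two
    classical inputs, apply a trainable RY layer and a CNOT entangler, and
    return the Pauli-Z expectation value on each qubit."""
    state = np.zeros(4)
    state[0] = 1.0                                        # start in |00>
    state = np.kron(ry(inputs[0]), ry(inputs[1])) @ state   # data encoding
    state = np.kron(ry(weights[0]), ry(weights[1])) @ state # trainable layer
    state = CNOT @ state                                    # entangler
    probs = state ** 2                # amplitudes are real for RY/CNOT only
    # <Z> on qubit 0: |00>,|01> contribute +1; |10>,|11> contribute -1.
    z0 = probs[0] + probs[1] - probs[2] - probs[3]
    # <Z> on qubit 1: |00>,|10> contribute +1; |01>,|11> contribute -1.
    z1 = probs[0] - probs[1] + probs[2] - probs[3]
    return np.array([z0, z1])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_lstm_step(x, h, c, theta):
    """One LSTM step (scalar state, for illustration) where the gate
    pre-activations come from quantum expectation values rather than
    large dense weight matrices."""
    z = variational_circuit(np.array([x, h]), theta)
    f = sigmoid(z[0])          # forget gate
    i = sigmoid(z[1])          # input gate
    g = np.tanh(z[0] - z[1])   # candidate cell value
    c_new = f * c + i * g
    h_new = np.tanh(c_new) * sigmoid(z[0] + z[1])  # output gate
    return h_new, c_new
```

Here the trainable parameters are just the rotation angles `theta`, so the footprint scales with circuit depth and qubit count rather than with hidden-layer width, consistent with the sub-300-parameter figure reported above.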

Frequently asked

  • Why pair federated learning with quantum hardware? Current quantum computers (NISQ devices) are noisy, expensive, and limited in qubit count. Federated learning distributes the quantum computation across multiple smaller nodes and classical servers, reducing the burden on any single quantum processor. This makes the approach practical for organizations with distributed data and limited quantum hardware access.
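The server-side aggregation behind this distribution is essentially federated averaging, which is easy to sketch for models this small. The node counts, toy local loss, and parameter dimension below are invented for illustration; the paper's actual local training objective is the hybrid quantum-LSTM loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(theta, n_steps=5, lr=0.1):
    """Stand-in for one node's local training: a few gradient steps on a
    toy quadratic loss centred at a node-specific optimum (illustrative
    only; a real node would train its hybrid model on its own shard)."""
    target = rng.normal(size=theta.shape)
    for _ in range(n_steps):
        theta = theta - lr * 2 * (theta - target)  # d/dθ of (θ - target)²
    return theta

def fed_avg(theta, n_nodes=4, rounds=3, sizes=None):
    """Federated averaging: each round, every node trains locally, then
    the server averages the returned parameter vectors, weighted by
    local dataset size."""
    sizes = sizes or [1] * n_nodes
    w = np.array(sizes, dtype=float)
    w /= w.sum()
    for _ in range(rounds):
        local_params = [local_update(theta.copy()) for _ in range(n_nodes)]
        theta = sum(wi * ti for wi, ti in zip(w, local_params))
    return theta

# With under 300 parameters per model (as reported above), each round
# ships only a few kilobytes of angles between nodes and server.
theta_final = fed_avg(np.zeros(8))
```

Weighting by shard size is the standard FedAvg choice; the small parameter vector is what makes the communication cost negligible even for bandwidth-constrained nodes.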
