AI · 5 min read · April 27, 2026
Fast Entropic Approximations cut entropy computation by up to 37×
Horenko et al. propose non-singular rational approximations of Shannon entropy and KL divergence that preserve mathematical properties while reducing computation cost and improving ML model training.
Source: arxiv/cs.AI · Illia Horenko, Davide Bassetti, Lukáš Pospíšil · open original ↗
Rational approximations of entropy measures cut computational cost by 2–37× while preserving mathematical properties and eliminating gradient singularities.
- Fast Entropic Approximations (FEA) replace Shannon entropy and KL divergence with non-singular rational functions (see the sketch after this list).
- FEA requires 5–7 elementary operations versus tens for standard logarithm-based schemes.
- Mean absolute error is around 10⁻³, 10–20× better than existing approximation methods.
- Non-singular gradients improve robustness and convergence speed in optimization.
- On feature selection benchmarks, FEA trains models 1000× faster than LASSO with better quality.
- Mathematical properties of the original measures (symmetry, convexity) are preserved in the approximations.
- Applicable to physics, information theory, machine learning, and quantum computing workflows.
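The paper's exact rational formulas are not reproduced in this digest, so the sketch below is illustrative only: it fits a generic low-order rational surrogate for the per-term entropy summand h(p) = -p·ln p by linearized least squares (all names and coefficients here are ad hoc, not the authors'), then checks that the surrogate costs only a handful of elementary operations and keeps a bounded gradient where the exact gradient diverges.

```python
import numpy as np

# Illustrative only: fit a generic low-order rational surrogate for the
# Shannon entropy summand h(p) = -p*ln(p) on [0, 1]. This is NOT the
# paper's FEA construction; the coefficients are fitted ad hoc.
p = np.linspace(1e-6, 1.0, 4000)
f = -p * np.log(p)  # exact summand on the grid

# Surrogate R(p) = (a1*p + a2*p^2 + a3*p^3) / (1 + b1*p + b2*p^2).
# No constant term in the numerator, so R(0) = 0 exactly and the
# gradient at p = 0 is the finite value a1; the exact gradient
# -(ln p + 1) diverges there. Multiplying through by the denominator
# makes the least-squares fit linear in (a1, a2, a3, b1, b2).
A = np.column_stack([p, p**2, p**3, -f * p, -f * p**2])
a1, a2, a3, b1, b2 = np.linalg.lstsq(A, f, rcond=None)[0]

def h_rational(p):
    # Horner form: about 7 multiply-adds plus one division, in the same
    # ballpark as the 5-7 elementary operations the paper reports.
    num = p * (a1 + p * (a2 + p * a3))
    den = 1.0 + p * (b1 + p * b2)
    return num / den

# Non-singularity check: the fitted denominator must not vanish on [0, 1].
assert np.all(1.0 + p * (b1 + p * b2) > 0)

print("max |error| on [0, 1]:", np.max(np.abs(h_rational(p) - f)))
print("exact gradient at p=1e-8:", -(np.log(1e-8) + 1.0))  # ~17.4, unbounded as p -> 0
print("surrogate gradient at p=0:", a1)                    # finite constant
```

A generic fit like this will not match the ~10⁻³ error the authors report for their tailored construction; it only demonstrates the shape of the approach: rational in p, cheap to evaluate, and with a finite slope at the boundary where the logarithm's gradient blows up.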
Frequently asked
- What is Fast Entropic Approximation and why does it matter? Fast Entropic Approximation (FEA) replaces Shannon entropy and Kullback-Leibler divergence with rational functions that compute in 5–7 operations instead of tens, while preserving mathematical properties such as symmetry and convexity. It also eliminates the gradient singularities near zero that cause numerical instability in optimization, which is critical for machine learning workflows that compute entropy millions of times during training.
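For a concrete feel of how a log-free, non-singular divergence can arise (again, not the paper's own formula), one classical route is to substitute the (1,1) Padé approximant ln(x) ≈ 2(x-1)/(x+1) into KL(p‖q): each term p·ln(p/q) becomes 2p(p-q)/(p+q), and for normalized distributions the sum equals the triangular discrimination Σ (p-q)²/(p+q), which is symmetric, convex, and has a bounded gradient where KL's blows up.

```python
import numpy as np

def kl_exact(p, q):
    # Standard KL divergence; its gradient wrt q, -p/q, blows up as q -> 0.
    return np.sum(p * np.log(p / q))

def kl_rational(p, q):
    # Hypothetical log-free stand-in (not the paper's FEA formula):
    # replacing ln(x) with its (1,1) Pade approximant 2(x-1)/(x+1) turns
    # each term p*ln(p/q) into 2p(p-q)/(p+q). For normalized p and q the
    # sum equals the triangular discrimination sum((p-q)^2/(p+q)):
    # nonnegative, convex, and its gradient wrt q is -4p^2/(p+q)^2,
    # bounded even as q -> 0.
    return np.sum(2.0 * p * (p - q) / (p + q))

p = np.array([0.70, 0.20, 0.10])
q = np.array([0.55, 0.25, 0.20])
print("exact KL:         ", kl_exact(p, q))
print("rational stand-in:", kl_rational(p, q))
print("swapped arguments:", kl_rational(q, p))  # symmetric for normalized inputs
```

Per term the stand-in costs four elementary operations and stays finite even at q = 0, which is exactly the kind of behavior the bullets above attribute to FEA.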