AI · 8 min read · April 28, 2026

Log-odds aggregation handles unknown state spaces in forecast combining

Chen, Peng, and Tang propose a closed-form aggregator for combining expert forecasts when the underlying outcome range is unknown, achieving tighter regret bounds than prior methods.

Source: arXiv cs.LG · Zhi Chen, Cheng Peng, Wei Tang

A log-odds pooling rule aggregates expert forecasts robustly even when the true outcome space and joint information structure remain hidden.

  • Standard forecast aggregation assumes the outcome is a known binary state in {0, 1}; this work allows the state to take unknown, arbitrary values in [0, 1].
  • The log-odds aggregator linearly pools forecasts in logit space, yielding a closed-form rule with explicit regret guarantees (sketched in the code below).
  • Under conditionally independent signals, moving from the known binary setting to an unknown state space raises the worst-case regret from below 0.0226 to 0.0255.
  • Regret bounds derived for three regimes: conditionally independent, Blackwell-ordered, and general information structures.
  • When expert marginal distributions are known, a generalized log-odds rule achieves regret 0.0228, with a matching lower bound.
  • This yields the first explicit aggregator with regret strictly below 0.0226 in the classical binary setting.
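
As a concrete illustration of pooling in logit space, here is a minimal Python sketch. It is not the paper's aggregator: the function names, the equal weights, and the clamping constant eps are assumptions made for this example, and the paper's regret-optimal weighting is not reproduced here.

    import math

    def logit(p, eps=1e-12):
        # Map a probability in (0, 1) to log-odds, clamping to avoid infinities.
        p = min(max(p, eps), 1.0 - eps)
        return math.log(p / (1.0 - p))

    def log_odds_pool(forecasts, weights=None):
        # Average the forecasts in logit space, then map back to a
        # probability via the logistic function. Equal weights are a
        # placeholder assumption; the paper derives specific weightings
        # that carry the regret guarantees.
        if weights is None:
            weights = [1.0 / len(forecasts)] * len(forecasts)
        z = sum(w * logit(p) for w, p in zip(weights, forecasts))
        return 1.0 / (1.0 + math.exp(-z))

    # Two experts report 0.7 and 0.9; the pooled forecast is ~0.821.
    print(log_odds_pool([0.7, 0.9]))

Note that the pooled value (~0.821) exceeds the arithmetic mean (0.8): averaging in logit space rewards confident agreement, pushing the aggregate further from 1/2 than simple probability averaging would.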

Frequently asked

  • What is log-odds aggregation and why use it here? It pools expert forecasts in logit (log-odds) space rather than probability space, which keeps the rule robust when the true outcome range is unknown or varies across environments. The paper shows it achieves tighter worst-case regret bounds than simple averaging, especially when experts' signals are conditionally independent.
