An Elementary Proof of the Near Optimality of LogSumExp Smoothing

Published in ArXiv Preprint, 2025

The paper studies how well one can approximate the max-of-coordinates function in \(d\) dimensions under the \(\ell_\infty\) geometry (the setting that matches common “softmax-style” smoothings). The standard choice is the LogSumExp function \(f(x) = \log(\sum_{i=1}^{d}e^{x_i})\), which overestimates the max by at most \(\log d\), with equality when all coordinates are equal; its worst-case error is therefore exactly \(\log d\).
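The bound \(\max_i x_i \le f(x) \le \max_i x_i + \log d\) is easy to check numerically. The following is a minimal sketch (the function name `logsumexp` and the test vectors are illustrative, not from the paper):

```python
import math

def logsumexp(x):
    # Numerically stable LogSumExp: log(sum_i exp(x_i)),
    # computed by shifting out the max to avoid overflow.
    m = max(x)
    return m + math.log(sum(math.exp(xi - m) for xi in x))

d = 8

# Worst case: all coordinates equal, so every term contributes
# equally and the gap LSE(x) - max(x) is exactly log d.
x_worst = [0.0] * d
gap_worst = logsumexp(x_worst) - max(x_worst)

# Generic case: the gap lies strictly between 0 and log d.
x_generic = [0.5, -1.0, 2.0, 3.5, 0.0, -2.0, 1.0, 2.5]
gap_generic = logsumexp(x_generic) - max(x_generic)
```

Here `gap_worst` equals \(\log 8\) up to floating-point error, while `gap_generic` is a small positive number below \(\log 8\), matching the two-sided bound.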

The main result is an elementary, constructive lower bound showing this \(\log d\) dependence is unavoidable: even the best-designed convex smoothing must differ from the max by at least a constant multiple of \(\log d\), so LogSumExp is near-optimal up to constants.