An Elementary Proof of the Near Optimality of LogSumExp Smoothing
Published in arXiv preprint, 2025
The paper proves a sharp limitation on smoothing the max-of-coordinates function in \(d\) dimensions: any convex surrogate with a prescribed smoothness (a bound on the Lipschitz constant of the gradient) must incur a worst-case approximation error that grows like \(\log d\), so the standard LogSumExp smoothing is optimal up to constant factors. The proof uses only elementary inequalities about smooth convex functions. The paper also shows that in small dimensions (\(d = 2, 3\)) LogSumExp fails to be optimal.
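To make the trade-off concrete: the LogSumExp surrogate \(f_\mu(x) = \mu \log \sum_{i=1}^d e^{x_i/\mu}\) satisfies \(\max_i x_i \le f_\mu(x) \le \max_i x_i + \mu \log d\), with the gap \(\mu \log d\) attained at \(x = 0\), while its gradient is \(1/\mu\)-Lipschitz. Shrinking the error therefore inflates the smoothness constant, which is the trade-off the lower bound shows is unavoidable up to constants. Below is a minimal NumPy sketch (my illustration, not code from the paper; the names `logsumexp_smooth`, `mu`, and `d` are mine) that checks the \(\mu \log d\) gap numerically.

```python
import numpy as np

def logsumexp_smooth(x, mu):
    """LogSumExp surrogate f_mu(x) = mu * log(sum_i exp(x_i / mu)).

    Uses the standard max-shift trick for numerical stability.
    """
    z = x / mu
    m = z.max()
    return mu * (m + np.log(np.exp(z - m).sum()))

d, mu = 1000, 0.1
rng = np.random.default_rng(0)

# The approximation gap f_mu(x) - max_i x_i is largest at x = 0,
# where it equals exactly mu * log(d).
gap_at_zero = logsumexp_smooth(np.zeros(d), mu)

# Gaps at random points never exceed the worst case at x = 0.
max_random_gap = max(
    logsumexp_smooth(x, mu) - x.max()
    for x in rng.standard_normal((200, d))
)

print(f"mu * log d         = {mu * np.log(d):.6f}")
print(f"gap at x = 0       = {gap_at_zero:.6f}")
print(f"max gap (random x) = {max_random_gap:.6f}")
```

For \(d = 1000\) and \(\mu = 0.1\) the gap at the origin matches \(\mu \log d \approx 0.6908\), while random Gaussian points give much smaller gaps, since the worst case is attained only when all coordinates tie.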
