Accelerated Bregman Proximal Gradient Methods for Relatively Smooth Convex Optimization

  • Filip Hanzely,
  • Peter Richtárik,
  • Lin Xiao

MSR-TR-2018-22

Published by Microsoft

Revised April 23, 2020.


We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. We investigate a triangle scaling property of the Bregman distance generated by the reference convex function and present accelerated Bregman proximal gradient (ABPG) methods that attain an $O(k^{-\gamma})$ convergence rate, where $\gamma\in(0,2]$ is the triangle scaling exponent (TSE) of the Bregman distance. For the Euclidean distance, we have $\gamma=2$ and recover the convergence rate of Nesterov’s accelerated gradient methods. For non-Euclidean Bregman distances, the TSE can be much smaller (say $\gamma\leq 1$), but we show that a relaxed definition of intrinsic TSE is always equal to 2. We exploit the intrinsic TSE to develop adaptive ABPG methods that converge much faster in practice. Although theoretical guarantees on a fast convergence rate seem to be out of reach in general, our methods obtain empirical $O(k^{-2})$ rates in numerical experiments on several applications and provide posterior numerical certificates for the fast rates.
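In symbols (our notation, paraphrasing the paper's setting): with reference convex function $h$, Bregman distance $D_h$, and relative smoothness constant $L$, the problem and the two key properties read roughly as

\[
\begin{aligned}
&\text{minimize}_{x}\;\; \phi(x) := f(x) + \Psi(x),\\
&D_h(x,y) := h(x) - h(y) - \langle \nabla h(y),\, x - y\rangle,\\
&f(x) \le f(y) + \langle \nabla f(y),\, x - y\rangle + L\, D_h(x,y) \qquad \text{($f$ relatively smooth w.r.t. $h$)},\\
&D_h\bigl((1-\theta)x + \theta\tilde z,\; (1-\theta)x + \theta z\bigr) \le \theta^{\gamma}\, D_h(\tilde z, z) \qquad \text{(triangle scaling with exponent $\gamma$)}.
\end{aligned}
\]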

Publication Downloads

Accelerated Bregman Proximal Gradient Methods (accbpg)

July 16, 2019

A Python package of accelerated first-order algorithms for solving relatively smooth convex optimization problems. It implements all of the algorithms described in our recent paper on accelerated Bregman proximal gradient methods, including the baseline algorithms used for comparison. It also contains examples for three applications: the D-optimal experiment design problem, Poisson linear inverse problems, and relative-entropy regression.
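To give a flavor of the problems the package targets, below is a minimal NumPy sketch (our own illustration, not the accbpg API) of the plain, non-accelerated Bregman proximal gradient step on a Poisson linear inverse problem, using the Burg entropy as the reference function; the function name poisson_bpg and all parameters are hypothetical.

```python
# Illustrative sketch (not the accbpg API): plain Bregman proximal gradient (BPG)
# on a Poisson linear inverse problem,
#     minimize_{x >= 0}  sum_i [ (Ax)_i - b_i * log((Ax)_i) ],
# with reference function h(x) = -sum_j log x_j (Burg entropy), for which the
# objective is relatively smooth with constant L = ||b||_1.
import numpy as np

def poisson_bpg(A, b, num_iters=500):
    m, n = A.shape
    L = b.sum()                     # relative smoothness constant w.r.t. Burg entropy
    x = np.ones(n)                  # strictly positive starting point
    for _ in range(num_iters):
        Ax = A @ x
        g = A.T @ (1.0 - b / Ax)    # gradient of the Poisson likelihood term
        # BPG update: argmin_u <g, u> + L * D_h(u, x) has the closed form
        # 1/u_j = 1/x_j + g_j / L for the Burg entropy.
        x = 1.0 / (1.0 / x + g / L)
    return x

# Toy usage on synthetic data (no noise); the package itself provides this
# application together with the accelerated ABPG variants from the paper.
rng = np.random.default_rng(0)
A = rng.uniform(size=(100, 50))
b = A @ rng.uniform(size=50)
x_hat = poisson_bpg(A, b)
```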