1 paper accepted to CDC

Our paper on the Dissipative Gradient Descent Ascent (DGDA) method [1] has been accepted to the 63rd IEEE Conference on Decision and Control (CDC). Congrats to Tianqi for leading this work!

[1] T. Zheng, N. Loizou, P. You, and E. Mallada, “Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization,” in 63rd IEEE Conference on Decision and Control (CDC), 2024.

Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings. To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA. Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA’s effectiveness in solving saddle point problems.
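The oscillation-damping idea in the abstract can be illustrated on the simplest bilinear saddle, f(x, y) = xy, where plain GDA is known to spiral away from the solution. The sketch below is only a minimal illustration of adding a dissipation term via auxiliary (state-augmented) variables; the specific update form, step size, and damping coefficient are assumptions for the demo, not the paper's exact DGDA update.

```python
import math

def gda_step(x, y, eta):
    # Plain GDA on the bilinear saddle f(x, y) = x * y:
    # descend in x, ascend in y. The iterates rotate around the
    # saddle (0, 0) and slowly spiral outward.
    gx, gy = y, x                      # df/dx = y, df/dy = x
    return x - eta * gx, y + eta * gy

def damped_gda_step(x, y, xh, yh, eta, rho):
    # Illustrative dissipative variant (a sketch of the idea, not the
    # paper's exact DGDA update): auxiliary states (xh, yh) track
    # (x, y), and the coupling rho * (x - xh) acts as friction that
    # drains the rotational energy responsible for GDA's oscillations.
    gx, gy = y, x
    x_new = x - eta * (gx + rho * (x - xh))
    y_new = y + eta * (gy - rho * (y - yh))
    xh_new = xh + eta * rho * (x - xh)
    yh_new = yh + eta * rho * (y - yh)
    return x_new, y_new, xh_new, yh_new

eta, rho = 0.2, 1.0                    # assumed demo parameters
x, y = 1.0, 1.0                        # plain GDA state
xa, ya, xh, yh = 1.0, 1.0, 0.0, 0.0    # damped state + auxiliaries
for _ in range(200):
    x, y = gda_step(x, y, eta)
    xa, ya, xh, yh = damped_gda_step(xa, ya, xh, yh, eta, rho)

print(math.hypot(x, y))    # plain GDA: distance to the saddle has grown
print(math.hypot(xa, ya))  # damped variant: near (0, 0)
```

With these parameters the plain GDA iterates drift far from the saddle while the damped iterates contract toward it, which is the qualitative behavior the dissipation term is designed to produce.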

@inproceedings{zlym2024cdc,
  abstract = {Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings.
To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA.
Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA's effectiveness in solving saddle point problems.},
  author = {Zheng, Tianqi and Loizou, Nicolas and You, Pengcheng and Mallada, Enrique},
  booktitle = {63rd IEEE Conference on Decision and Control (CDC)},
  grants = {CPS-2136324, Global-Centers-2330450},
  month = {12},
  note = {also in L-CSS},
  pubstate = {accepted, submitted Mar 2024},
  title = {Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization},
  url = {https://mallada.ece.jhu.edu/pubs/2024-LCSS-ZLYM.pdf},
  year = {2024}
}