3 papers accepted to CDC

Our papers on Dissipative Gradient Descent-Ascent algorithms for min-max problems [1], accelerated saddle flow dynamics [2], and invertibility of linear systems with sparse inputs [3] have been accepted to the 63rd IEEE Conference on Decision and Control (CDC). Congrats to the lead students Tianqi, Yingzhu, and Kyle, and to the rest of the team!

[1] T. Zheng, N. Loizou, P. You, and E. Mallada, “Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization,” in 63rd IEEE Conference on Decision and Control (CDC), 2024.

Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings. To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA. Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA’s effectiveness in solving saddle point problems.

@inproceedings{zlym2024cdc,
  abstract = {Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings.
To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA.
Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA's effectiveness in solving saddle point problems.},
  author = {Zheng, Tianqi and Loizou, Nicolas and You, Pengcheng and Mallada, Enrique},
  booktitle = {63rd IEEE Conference on Decision and Control (CDC)},
  grants = {CPS-2136324, Global-Centers-2330450},
  month = {07},
  note = {also in L-CSS},
  pubstate = {accepted, submitted Mar 2024},
  title = {Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization},
  url = {https://mallada.ece.jhu.edu/pubs/2024-LCSS-ZLYM.pdf},
  year = {2024}
}
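
For intuition on why dissipation helps, here is a minimal numerical sketch (Python/NumPy) comparing plain GDA with a dissipative, state-augmented GDA on the classic bilinear saddle f(x, y) = xy. The augmented saddle V(x, xh, y, yh) = xy + (rho/2)(x - xh)^2 - (rho/2)(y - yh)^2 follows the abstract's description of DGDA as standard GDA on a state-augmented, regularized saddle function; the step size eta and friction rho below are illustrative choices, not the paper's exact parameterization.

import numpy as np

def gda(eta=0.1, steps=200):
    # Plain GDA on f(x, y) = x * y: grad_x f = y, grad_y f = x.
    x, y = 1.0, 1.0
    for _ in range(steps):
        x, y = x - eta * y, y + eta * x
    return np.hypot(x, y)

def dgda(eta=0.1, rho=0.5, steps=200):
    # GDA on the augmented saddle V(x, xh, y, yh) described above:
    # descent in (x, xh), ascent in (y, yh).
    x, y, xh, yh = 1.0, 1.0, 1.0, 1.0
    for _ in range(steps):
        x_n = x - eta * (y + rho * (x - xh))
        xh_n = xh + eta * rho * (x - xh)
        y_n = y + eta * (x - rho * (y - yh))
        yh_n = yh + eta * rho * (y - yh)
        x, y, xh, yh = x_n, y_n, xh_n, yh_n
    return np.hypot(x, y)

print("GDA  distance to saddle after 200 steps:", gda())   # grows (oscillatory divergence)
print("DGDA distance to saddle after 200 steps:", dgda())  # decays toward the saddle at the origin

With these made-up parameters the GDA iterate spirals outward while the dissipative variant contracts, which is the qualitative behavior the paper quantifies.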
[2] Y. Liu, E. Mallada, Z. Li, and P. You, “Accelerated Saddle Flow Dynamics for Bilinearly Coupled Minimax Problems,” in 63rd IEEE Conference on Decision and Control (CDC), 2024.

Minimax problems have attracted much attention due to their applications in constrained optimization and zero-sum games. Identifying saddle points within these problems is crucial, and saddle flow dynamics offer a straightforward yet useful approach. This study focuses on a class of bilinearly coupled minimax problems and designs an accelerated algorithm based on saddle flow dynamics that achieves a convergence rate beyond conventional limits. The algorithm is derived from a sequential two-step transformation of the objective function. First, a change of variables yields a better-conditioned saddle function. Second, a proximal regularization, staggered with the first step, guarantees strong convexity-strong concavity of the objective function that can be tuned for accelerated exponential convergence. Moreover, the approach extends to a class of weakly convex-weakly concave functions while still achieving exponential convergence to a stationary point. The theory is verified by a numerical test on an affine equality-constrained convex optimization problem.

@inproceedings{lmly2024cdc,
  abstract = {Minimax problems have attracted much attention due to their applications in constrained optimization and zero-sum games. Identifying saddle points within these problems is crucial, and saddle flow dynamics offer a straightforward yet useful approach. This study focuses on a class of bilinearly coupled minimax problems and designs an accelerated algorithm based on saddle flow dynamics that achieves a convergence rate beyond conventional limits. The algorithm is derived from a sequential two-step transformation of the objective function. First, a change of variables yields a better-conditioned saddle function. Second, a proximal regularization, staggered with the first step, guarantees strong convexity-strong concavity of the objective function that can be tuned for accelerated exponential convergence.
Moreover, the approach extends to a class of weakly convex-weakly concave functions while still achieving exponential convergence to a stationary point. The theory is verified by a numerical test on an affine equality-constrained convex optimization problem.},
  author = {Liu, Yingzhu and Mallada, Enrique and Li, Zhongkui and You, Pengcheng},
  booktitle = {63rd IEEE Conference on Decision and Control (CDC)},
  grants = {CPS-2136324, Global-Centers-2330450},
  month = {07},
  pubstate = {accepted, submitted Mar 2024},
  title = {Accelerated Saddle Flow Dynamics for Bilinearly Coupled Minimax Problems},
  url = {https://mallada.ece.jhu.edu/pubs/2024-Preprint-LMLY.pdf},
  year = {2024}
}
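
As background for the baseline being accelerated: saddle flow dynamics are simply gradient descent on the primal variable and gradient ascent on the dual variable of a saddle function. Below is a minimal sketch (Python/NumPy, forward-Euler discretization) of the unaccelerated flow on the Lagrangian of a toy affine equality-constrained convex program; the problem data and step size are made up, and the paper's accelerated construction (change of variables plus staggered proximal regularization) is not reproduced here.

import numpy as np

def saddle_flow(A, b, c, h=0.05, steps=2000):
    # Forward-Euler simulation of
    #   xdot = -grad_x L(x, lam),   lamdot = +grad_lam L(x, lam),
    # for the Lagrangian L(x, lam) = 0.5*||x - b||^2 + lam^T (A x - c).
    m, n = A.shape
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(steps):
        gx = (x - b) + A.T @ lam      # grad_x L
        gl = A @ x - c                # grad_lam L
        x, lam = x - h * gx, lam + h * gl
    return x, lam

# Toy problem: min 0.5*||x - b||^2 subject to A x = c (closed-form solution x* = [1.5, -0.5]).
A = np.array([[1.0, 1.0]])
b = np.array([2.0, 0.0])
c = np.array([1.0])
x, lam = saddle_flow(A, b, c)
print("x =", x, " constraint residual =", A @ x - c)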
[3] K. Poe, E. Mallada, and R. Vidal, “Invertibility of Discrete-Time Linear Systems with Sparse Inputs,” in 63rd IEEE Conference on Decision and Control (CDC), 2024.

One of the fundamental problems of interest for a discrete-time linear system is whether its input sequence can be recovered from its output sequence, a.k.a. the left inversion problem. Many conditions on the state space geometry, dynamics, and spectral structure of a system have been used to characterize the well-posedness of this problem, without assumptions on the inputs. However, certain structural assumptions, such as input sparsity, have been shown to translate to practical gains in the performance of inversion algorithms, surpassing classical guarantees. Establishing necessary and sufficient conditions for left invertibility of systems with sparse inputs is therefore a crucial step toward understanding the performance limits of system inversion under structured input assumptions. In this work, we provide the first necessary and sufficient characterizations of left invertibility for linear systems with sparse inputs, echoing classic characterizations for standard linear systems. The key insight in deriving these results lies in establishing the existence of two novel geometric invariants unique to the sparse-input setting, the weakly unobservable and strongly reachable subspace arrangements. By means of a concrete example, we demonstrate the utility of these characterizations. We conclude by discussing extensions and applications of this framework to several related problems in sparse control.

@inproceedings{pmv2024cdc,
  abstract = {One of the fundamental problems of interest for a discrete-time linear system is whether its input sequence can be recovered from its output sequence, a.k.a. the left inversion problem. Many conditions on the state space geometry, dynamics, and spectral structure of a system have been used to characterize the well-posedness of this problem, without assumptions on the inputs. However, certain structural assumptions, such as input sparsity, have been shown to translate to practical gains in the performance of inversion algorithms, surpassing classical guarantees. Establishing necessary and sufficient conditions for left invertibility of systems with sparse inputs is therefore a crucial step toward understanding the performance limits of system inversion under structured input assumptions. In this work, we provide the first necessary and sufficient characterizations of left invertibility for linear systems with sparse inputs, echoing classic characterizations for standard linear systems. The key insight in deriving these results lies in establishing the existence of two novel geometric invariants unique to the sparse-input setting, the weakly unobservable and strongly reachable subspace arrangements. By means of a concrete example, we demonstrate the utility of these characterizations. We conclude by discussing extensions and applications of this framework to several related problems in sparse control.},
  author = {Poe, Kyle and Mallada, Enrique and Vidal, Rene},
  booktitle = {63rd IEEE Conference on Decision and Control (CDC)},
  grants = {CPS-2136324, Global-Centers-2330450},
  month = {07},
  pubstate = {accepted, submitted Mar 2024},
  title = {Invertibility of Discrete-Time Linear Systems with Sparse Inputs},
  url = {https://mallada.ece.jhu.edu/pubs/2024-Preprint-PMV.pdf},
  year = {2024}
}
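
The paper's characterizations are geometric, but a quick way to build intuition for left invertibility (in the classical, non-sparse sense that the paper generalizes) is a finite-horizon null-space test: the first input is uniquely determined by the outputs, despite an unknown initial state, exactly when every null vector of the stacked matrix [O G] (observability block next to the input Toeplitz block) has a zero u_0-block. The sketch below (Python/NumPy) implements that finite-horizon proxy; the system matrices, horizon, and tolerance are made-up choices, and this is not the paper's sparse-input criterion.

import numpy as np

def io_matrices(A, B, C, D, T):
    # Stacked maps so that y_{0:T-1} = O @ x0 + G @ u_{0:T-1} for
    # x_{k+1} = A x_k + B u_k,  y_k = C x_k + D u_k.
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(T)])
    G = np.zeros((p * T, m * T))
    for i in range(T):
        G[i*p:(i+1)*p, i*m:(i+1)*m] = D
        for j in range(i):
            G[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i - 1 - j) @ B
    return O, G

def first_input_determined(A, B, C, D, T, tol=1e-9):
    # u_0 is uniquely determined by y_{0:T-1} (with x0 unknown) iff every
    # null vector of M = [O G] has a zero u_0-block.
    O, G = io_matrices(A, B, C, D, T)
    M = np.hstack([O, G])
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol * max(1.0, s[0])))
    null_basis = Vt[rank:]                      # rows spanning the null space of M
    n, m = A.shape[0], B.shape[1]
    return null_basis.shape[0] == 0 or bool(np.all(np.abs(null_basis[:, n:n+m]) < tol))

# Made-up single-input, single-output example with a two-step input-output delay.
A = np.array([[0.9, 1.0], [0.0, 0.8]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
print("u_0 determined from a length-6 output window:", first_input_determined(A, B, C, D, T=6))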

1 paper accepted to L-CSS

Our paper on Dissipative Gradient Descent-Ascent algorithms for min-max problems [1] has been accepted to the IEEE Control Systems Letters. Congrats Tianqi!

[1] [doi] T. Zheng, N. Loizou, P. You, and E. Mallada, “Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization,” IEEE Control Systems Letters (L-CSS), vol. 8, pp. 2009-2014, 2024.

Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings. To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA. Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA’s effectiveness in solving saddle point problems.

@article{zlym2024lcss,
  abstract = {Gradient Descent Ascent (GDA) methods for min-max optimization problems typically produce oscillatory behavior that can lead to instability, e.g., in bilinear settings.
To address this problem, we introduce a dissipation term into the GDA updates to dampen these oscillations. The proposed Dissipative GDA (DGDA) method can be seen as performing standard GDA on a state-augmented and regularized saddle function that does not strictly introduce additional convexity/concavity. We theoretically show the linear convergence of DGDA in the bilinear and strongly convex-strongly concave settings and assess its performance by comparing DGDA with other methods such as GDA, Extra-Gradient (EG), and Optimistic GDA.
Our findings demonstrate that DGDA surpasses these methods, achieving superior convergence rates. We support our claims with two numerical examples that showcase DGDA's effectiveness in solving saddle point problems.},
  author = {Zheng, Tianqi and Loizou, Nicolas and You, Pengcheng and Mallada, Enrique},
  doi = {10.1109/LCSYS.2024.3413004},
  grants = {CPS-2136324, Global-Centers-2330450},
  journal = {IEEE Control Systems Letters (L-CSS)},
  month = {06},
  pages = {2009-2014},
  record = {published, accepted May 2024, submitted Mar 2024},
  title = {Dissipative Gradient Descent Ascent Method: A Control Theory Inspired Algorithm for Min-max Optimization},
  url = {https://mallada.ece.jhu.edu/pubs/2024-LCSS-ZLYM.pdf},
  volume = {8},
  year = {2024}
}

1 paper accepted to ICLR

Our paper on Early Neuron Alignment in Two-layer ReLU Networks [1] has been accepted to the International Conference on Learning Representations (ICLR). Congrats Hancheng!

[1] H. Min, R. Vidal, and E. Mallada, “Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization,” in International Conference on Learning Representations (ICLR), 2024.

This paper studies the problem of training a two-layer ReLU network for binary classification using gradient flow with small initialization. We consider a training dataset with well-separated input vectors: Any pair of input data with the same label are positively correlated, and any pair with different labels are negatively correlated. Our analysis shows that, during the early phase of training, neurons in the first layer try to align with either the positive data or the negative data, depending on their corresponding weights on the second layer. A careful analysis of the neurons’ directional dynamics allows us to provide an $\mathcal{O}(\frac{\log n}{\sqrt{\mu}})$ upper bound on the time it takes for all neurons to achieve good alignment with the input data, where $n$ is the number of data points and $\mu$ measures how well the data are separated. After the early alignment phase, the loss converges to zero at a $\mathcal{O}(\frac{1}{t})$ rate, and the weight matrix on the first layer is approximately low-rank. Numerical experiments on the MNIST dataset illustrate our theoretical findings.

@inproceedings{mvm2024iclr,
  abstract = {This paper studies the problem of training a two-layer ReLU network for binary classification using gradient flow with small initialization. We consider a training dataset with well-separated input vectors: Any pair of input data with the same label are positively correlated, and any pair with different labels are negatively correlated. Our analysis shows that, during the early phase of training, neurons in the first layer try to align with either the positive data or the negative data, depending on their corresponding weights on the second layer. A careful analysis of the neurons' directional dynamics allows us to provide an $\mathcal{O}(\frac{\log n}{\sqrt{\mu}})$ upper bound on the time it takes for all neurons to achieve good alignment with the input data, where $n$ is the number of data points and $\mu$ measures how well the data are separated. After the early alignment phase, the loss converges to zero at a $\mathcal{O}(\frac{1}{t})$ rate, and the weight matrix on the first layer is approximately low-rank. Numerical experiments on the MNIST dataset illustrate our theoretical findings.},
  author = {Min, Hancheng and Vidal, Rene and Mallada, Enrique},
  booktitle = {International Conference on Learning Representations (ICLR)},
  grants = {CAREER-1752362},
  month = {05},
  record = {published, accepted Jan 2024, submitted Sep 2023},
  title = {Early Neuron Alignment in Two-layer ReLU Networks with Small Initialization},
  url = {https://mallada.ece.jhu.edu/pubs/2024-ICLR-MVM.pdf},
  year = {2024}
}
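
The early-alignment phenomenon is easy to reproduce numerically. The sketch below (Python/NumPy) trains a small two-layer ReLU network from small initialization on synthetic well-separated 2-D data and tracks the average cosine similarity between each first-layer neuron and the data mean matching the sign of its second-layer weight. The data distribution, logistic loss, width, step size, and initialization scale are all illustrative assumptions rather than the paper's exact setting.

import numpy as np

rng = np.random.default_rng(0)

# Well-separated data: positives clustered near angle 0, negatives near angle pi,
# so same-label pairs are positively correlated and opposite-label pairs negatively correlated.
n_per = 20
th_pos = rng.uniform(-0.3, 0.3, n_per)
th_neg = np.pi + rng.uniform(-0.3, 0.3, n_per)
X = np.concatenate([np.stack([np.cos(th_pos), np.sin(th_pos)], 1),
                    np.stack([np.cos(th_neg), np.sin(th_neg)], 1)])
y = np.concatenate([np.ones(n_per), -np.ones(n_per)])
mu_pos, mu_neg = X[y > 0].mean(0), X[y < 0].mean(0)

# Two-layer ReLU network f(x) = sum_j v_j * relu(w_j . x) with small initialization.
width, scale, lr = 50, 1e-3, 0.2
W = scale * rng.standard_normal((width, 2))
v = scale * rng.standard_normal(width)

def alignment(W, v):
    # Mean cosine similarity of each neuron with the data mean whose label matches the sign of v_j.
    target = np.where(v[:, None] > 0, mu_pos, mu_neg)
    return np.mean(np.sum(W * target, 1) /
                   (np.linalg.norm(W, axis=1) * np.linalg.norm(target, axis=1) + 1e-12))

for t in range(1001):
    pre = X @ W.T
    act = np.maximum(pre, 0.0)
    f = act @ v
    g = -y / (1.0 + np.exp(np.clip(y * f, -50, 50))) / len(y)   # logistic-loss gradient w.r.t. f
    dv = act.T @ g
    dW = ((g[:, None] * (pre > 0)) * v).T @ X                   # chain rule through the ReLU
    v -= lr * dv
    W -= lr * dW
    if t % 200 == 0:
        print(f"step {t:4d}   neuron alignment {alignment(W, v):+.3f}")

Alignment is expected to rise toward 1 during the early steps, before the weights grow and the loss is driven to zero, mirroring the two phases analyzed in the paper.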

1 paper accepted to IEEE TEMPR

Our paper on market power mitigation in two-stage markets [1] has been accepted to the IEEE Transactions on Energy Markets, Policy, and Regulation (TEMPR)!

[1] [doi] R. K. Bansal, Y. Chen, P. You, and E. Mallada, “Market Power Mitigation in Two-stage Electricity Market with Supply Function and Quantity Bidding,” IEEE Transactions on Energy Markets, Policy and Regulation, vol. 1, iss. 4, pp. 512-522, 2023.

The main goal of a sequential two-stage electricity market—e.g., day-ahead and real-time markets—is to operate efficiently. However, the price difference across stages due to inadequate competition and unforeseen circumstances leads to undesirable price manipulation. To mitigate this, some Independent System Operators (ISOs) proposed system-level market power mitigation (MPM) policies in addition to existing local policies. These policies aim to substitute noncompetitive bids with a default bid based on estimated generator costs. However, these policies may lead to unintended consequences when implemented without accounting for the conflicting interests of participants. In this paper, we model the competition between generators (bidding supply functions) and loads (bidding quantity) in a two-stage market with a stage-wise MPM policy. An equilibrium analysis shows that a real-time MPM policy leads to equilibrium loss, meaning no stable market outcome (Nash equilibrium) exists. A day-ahead MPM policy, in contrast, leads to a Stackelberg-Nash game with loads acting as leaders and generators as followers. In this setting, loads become winners, i.e., their aggregate payment is always less than competitive payments. Moreover, a comparison with the standard market equilibrium highlights that markets are better off without such policies. Finally, numerical studies highlight the impact of heterogeneity and load size on market equilibrium.

@article{bcym2023tempr,
  abstract = {The main goal of a sequential two-stage electricity market---e.g., day-ahead and real-time markets---is to operate efficiently. However, the price difference across stages due to inadequate competition and unforeseen circumstances leads to undesirable price manipulation. To mitigate this, some Independent System Operators (ISOs) proposed system-level market power mitigation (MPM) policies in addition to existing local policies. These policies aim to substitute noncompetitive bids with a default bid based on estimated generator costs. However, these policies may lead to unintended consequences when implemented without accounting for the conflicting interests of participants. In this paper, we model the competition between generators (bidding supply functions) and loads (bidding quantity) in a two-stage market with a stage-wise MPM policy. An equilibrium analysis shows that a real-time MPM policy leads to equilibrium loss, meaning no stable market outcome (Nash equilibrium) exists. A day-ahead MPM policy, in contrast, leads to a Stackelberg-Nash game with loads acting as leaders and generators as followers. In this setting, loads become winners, i.e., their aggregate payment is always less than competitive payments. Moreover, a comparison with the standard market equilibrium highlights that markets are better off without such policies. Finally, numerical studies highlight the impact of heterogeneity and load size on market equilibrium.},
  author = {Bansal, Rajni Kant and Chen, Yue and You, Pengcheng and Mallada, Enrique},
  doi = {10.1109/TEMPR.2023.3318149},
  grants = {CAREER-1752362, CPS-2136324, EPICS-2330450},
  journal = {IEEE Transactions on Energy Markets, Policy and Regulation},
  month = {12},
  number = {4},
  pages = {512-522},
  record = {published, online Sep 2023, revised July 2023, under revision May 2023, submitted Jan 2023},
  title = {Market Power Mitigation in Two-stage Electricity Market with Supply Function and Quantity Bidding},
  url = {https://mallada.ece.jhu.edu/pubs/2023-TEMPR-BCYM.pdf},
  volume = {1},
  year = {2023}
}

Tianqi defended his dissertation

Tianqi Zheng, an ECE Ph.D. student in our lab, defended his dissertation entitled “Online decision-making for dynamical systems: Model-based and data-driven approaches” on Tuesday, September 5th. Congratulations!

Rajni defended his dissertation

Rajni Kant Bansal, a MechE Ph.D. student in our lab, defended his dissertation entitled “Efficiency and Market Power in Electricity Markets with Inelastic Demand, Energy Storage, and Hybrid Energy Resources” on Wednesday, August 30th. Congratulations!

Hancheng defended his dissertation

Hancheng Min, an ECE Ph.D. student in our lab, defended his dissertation entitled “Exploiting Structural Properties in the Analysis of High-dimensional Dynamical Systems” on Friday, July 21st. Congratulations!

2 papers accepted to CDC

Our paper on non-monotonic Lyapunov functions [1] and our paper on simultaneous state and sparse input recovery [2] have been accepted to the 62nd IEEE Conference on Decision and Control (CDC)!

[1] [doi] R. Siegelmann, Y. Shen, F. Paganini, and E. Mallada, “A Recurrence-based Direct Method for Stability Analysis and GPU-based Verification of Non-monotonic Lyapunov Functions,” in 62nd IEEE Conference on Decision and Control (CDC), 2023, pp. 6665-6672.

Lyapunov’s direct method is a powerful tool that provides a rigorous framework for stability analysis and control design for dynamical systems. A critical step that enables the application of the method is the existence of a Lyapunov function $V$—a function whose value monotonically decreases along the trajectories of the dynamical system. Unfortunately, finding a Lyapunov function is often tricky and requires ingenuity, domain knowledge, or significant computational power. At the core of this challenge is the fact that the method requires every sub-level set of $V$ ($V_{\leq c}$) to be forward invariant, thus implicitly coupling the geometry of $V_{\leq c}$ and the trajectories of the system. In this paper, we seek to disentangle this dependence by developing a direct method that substitutes the concept of invariance with a more flexible notion known as recurrence. A set is ($\tau$-)recurrent if every trajectory that starts in the set returns to it (within $\tau$ seconds) infinitely often. We show that, under mild conditions, the recurrence of sub-level sets is sufficient to guarantee stability, asymptotic stability, and exponential stability. We further provide a GPU-based algorithm that can verify whether $V$ satisfies such conditions up to an arbitrarily small neighborhood of the equilibrium.

@inproceedings{sspm2023cdc,
  abstract = {Lyapunov's direct method is a powerful tool that provides a rigorous framework for stability analysis and control design for dynamical systems. A critical step that enables the application of the method is the existence of a Lyapunov function $V$---a function whose value monotonically decreases along the trajectories of the dynamical system. Unfortunately, finding a Lyapunov function is often tricky and requires ingenuity, domain knowledge, or significant computational power. At the core of this challenge is the fact that the method requires every sub-level set of $V$ ($V_{\leq c}$) to be forward invariant, thus implicitly coupling the geometry of $V_{\leq c}$ and the trajectories of the system. In this paper, we seek to disentangle this dependence by developing a direct method that substitutes the concept of invariance with a more flexible notion known as recurrence. A set is ($\tau$-)recurrent if every trajectory that starts in the set returns to it (within $\tau$ seconds) infinitely often. We show that, under mild conditions, the recurrence of sub-level sets is sufficient to guarantee stability, asymptotic stability, and exponential stability. We further provide a GPU-based algorithm that can verify whether $V$ satisfies such conditions up to an arbitrarily small neighborhood of the equilibrium.},
  author = {Siegelmann, Roy and Shen, Yue and Paganini, Fernando and Mallada, Enrique},
  booktitle = {62nd IEEE Conference on Decision and Control (CDC)},
  doi = {10.1109/CDC49753.2023.10383373},
  grants = {CPS-2136324, CAREER-1752362, EPICS-2330450},
  month = {12},
  organization = {IEEE},
  pages = {6665--6672},
  record = {presented, accepted Jul 2023, submitted Mar 2023},
  title = {A Recurrence-based Direct Method for Stability Analysis and GPU-based Verification of Non-monotonic Lyapunov Functions},
  url = {https://mallada.ece.jhu.edu/pubs/2023-CDC-SSPM.pdf},
  year = {2023}
}
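
A toy, CPU-only illustration of the recurrence idea (the paper's GPU-based verification algorithm and its guarantees are not reproduced here): for the stable linear system below, V(x) = ||x||^2 is not monotone along trajectories, so its sub-level sets are not forward invariant, yet every boundary point of {V <= c} returns to the set within a finite time tau. The dynamics, the level c, the horizon tau, and the integration step are made-up choices, and this only checks a single return per initial condition rather than recurrence proper.

import numpy as np

# Stable linear dynamics xdot = A x whose symmetric part is indefinite,
# so V(x) = ||x||^2 increases along some trajectories (non-monotone).
A = np.array([[-0.1,  2.0],
              [-1.0, -0.1]])
V = lambda x: float(x @ x)

def returns_within(x0, c, tau, h=0.01):
    # RK4 simulation; report whether the trajectory re-enters {V <= c} by time tau,
    # together with the largest value of V seen along the way.
    f = lambda z: A @ z
    x, vmax = np.array(x0, dtype=float), V(x0)
    for _ in range(int(tau / h)):
        k1 = f(x); k2 = f(x + h/2*k1); k3 = f(x + h/2*k2); k4 = f(x + h*k3)
        x = x + (h/6) * (k1 + 2*k2 + 2*k3 + k4)
        vmax = max(vmax, V(x))
        if V(x) <= c:
            return True, vmax
    return False, vmax

c, tau = 1.0, 5.0
angles = np.linspace(0.0, 2*np.pi, 60, endpoint=False)
results = [returns_within(np.array([np.cos(a), np.sin(a)]), c, tau) for a in angles]
print("every sampled boundary point of {V <= c} returns within tau:", all(r for r, _ in results))
print("largest transient value of V observed: %.2f (exceeds c, so the set is not invariant)"
      % max(v for _, v in results))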
[2] [doi] K. Poe, E. Mallada, and R. Vidal, “Necessary and Sufficient Conditions for Simultaneous State and Input Recovery of Linear Systems with Sparse Inputs by $\ell_1$-Minimization,” in 62nd IEEE Conference on Decision and Control (CDC), 2023, pp. 6499-6506.

The study of theoretical conditions for recovering sparse signals from compressive measurements has received a lot of attention in the research community. In parallel, there has been a great amount of work characterizing conditions for the recovery of both the state and the input to a linear dynamical system (LDS), including a handful of results on recovering sparse inputs. However, existing sufficient conditions for recovering sparse inputs to an LDS are conservative and hard to interpret, while necessary and sufficient conditions have not yet appeared in the literature. In this work, we provide (1) the first characterization of necessary and sufficient conditions for the existence and uniqueness of sparse inputs to an LDS, (2) the first necessary and sufficient conditions for a linear program to recover both an unknown initial state and a sparse input, and (3) simple, interpretable recovery conditions in terms of the LDS parameters. We conclude with a numerical validation of these claims and discuss implications and future directions.

@inproceedings{pmv2023cdc,
  abstract = {The study of theoretical conditions for recovering sparse signals from compressive measurements has received a lot of attention in the research community. In parallel, there has been a great amount of work characterizing conditions for the recovery of both the state and the input to a linear dynamical system (LDS), including a handful of results on recovering sparse inputs. However, existing sufficient conditions for recovering sparse inputs to an LDS are conservative and hard to interpret, while necessary and sufficient conditions have not yet appeared in the literature. In this work, we provide (1) the first characterization of necessary and sufficient conditions for the existence and uniqueness of sparse inputs to an LDS, (2) the first necessary and sufficient conditions for a linear program to recover both an unknown initial state and a sparse input, and (3) simple, interpretable recovery conditions in terms of the LDS parameters. We conclude with a numerical validation of these claims and discuss implications and future directions.
},
  author = {Poe, Kyle and Mallada, Enrique and Vidal, Rene},
  booktitle = {62nd IEEE Conference on Decision and Control (CDC)},
  doi = {10.1109/CDC49753.2023.10383682},
  grants = {CPS-2136324,CAREER-1752362},
  month = {12},
  pages = {6499--6506},
  record = {presented, accepted Jul 2023, submitted Mar 2023},
  title = {Necessary and Sufficient Conditions for Simultaneous State and Input Recovery of Linear Systems with Sparse Inputs by $\ell_1$-Minimization},
  url = {https://mallada.ece.jhu.edu/pubs/2023-Preprint-PMV.pdf},
  year = {2023}
}
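
To make the linear-programming angle concrete, here is a self-contained sketch (Python with NumPy and SciPy's linprog) of the generic formulation: minimize ||u||_1 subject to the stacked input-output equations with a free, unknown initial state, using the standard split u = u_plus - u_minus. The system matrices, horizon, and sparsity pattern are made up, and whether recovery is exact depends on conditions like the ones characterized in the paper; the sketch only reports the numerical error.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, p, T = 4, 2, 2, 12                           # states, inputs, outputs, horizon (made up)
A = rng.standard_normal((n, n))
A /= 1.1 * np.max(np.abs(np.linalg.eigvals(A)))    # scale A to be stable
B, C, D = rng.standard_normal((n, m)), rng.standard_normal((p, n)), np.zeros((p, m))

# Stacked maps: y_{0:T-1} = O @ x0 + G @ u_{0:T-1}.
O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(T)])
G = np.zeros((p*T, m*T))
for i in range(T):
    G[i*p:(i+1)*p, i*m:(i+1)*m] = D
    for j in range(i):
        G[i*p:(i+1)*p, j*m:(j+1)*m] = C @ np.linalg.matrix_power(A, i-1-j) @ B

# Ground truth: dense unknown initial state, 2-sparse input sequence.
x0_true = rng.standard_normal(n)
u_true = np.zeros(m*T); u_true[[3, 10]] = [1.5, -2.0]
y = O @ x0_true + G @ u_true

# LP variables z = [x0 (free), u_plus >= 0, u_minus >= 0]; minimize sum(u_plus + u_minus).
cost = np.concatenate([np.zeros(n), np.ones(2*m*T)])
A_eq = np.hstack([O, G, -G])
bounds = [(None, None)] * n + [(0, None)] * (2*m*T)
res = linprog(cost, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
assert res.success, res.message
u_hat = res.x[n:n+m*T] - res.x[n+m*T:]
print("input recovery error (sup norm): %.2e" % np.max(np.abs(u_hat - u_true)))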