2 papers accepted to ICML’21

Our papers on the nullspace property for subspace-preserving recovery [1] and on the convergence and implicit bias of overparametrized linear networks [2] have been accepted to the International Conference on Machine Learning (ICML), 2021.

[1] M. D. Kaba, C. You, D. R. Robinson, E. Mallada, and R. Vidal, “A Nullspace Property for Subspace-Preserving Recovery,” in International Conference on Machine Learning (ICML), 2021, pp. 1-9.

Much of the theory for classical sparse recovery is based on conditions on the dictionary that are both necessary and sufficient (e.g., nullspace property) or only sufficient (e.g., incoherence and restricted isometry). In contrast, much of the theory for subspace-preserving recovery, which provides the theoretical underpinnings for sparse subspace classification and clustering methods, is based on conditions on the subspaces and the data that are only sufficient (e.g., subspace incoherence and data inner-radius). This paper derives a necessary and sufficient condition for subspace-preserving recovery that is inspired by the classical nullspace property. Based on this novel condition, called here the subspace nullspace property, we derive equivalent characterizations that either admit a clear geometric interpretation, relating data distribution and subspace separation to recovery success, or can be verified using a finite set of extreme points of a properly defined set. We further exploit these characterizations to derive new sufficient conditions, based on inner-radius and outer-radius measures and dual bounds, that generalize existing conditions while preserving the geometric interpretations. These results fill an important gap in the subspace-preserving recovery literature.
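As a toy sketch (not from the paper, and with illustrative dimensions and data chosen here for simplicity), the snippet below shows the kind of recovery the abstract refers to: a point drawn from one subspace, expressed via ℓ1-minimization over the remaining data points, receives nonzero coefficients only on points from its own subspace when the subspaces are independent.

```python
# Toy illustration (not from the paper): subspace-preserving recovery via
# l1-minimization when the data come from two independent subspaces.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Two orthogonal 2-D subspaces of R^4: span(e1, e2) and span(e3, e4).
S1 = np.hstack([rng.standard_normal((4, 2)), np.zeros((4, 2))])
S2 = np.hstack([np.zeros((4, 2)), rng.standard_normal((4, 2))])

x = S1[0]                       # point to express
D = np.vstack([S1[1:], S2]).T   # dictionary: 3 same-subspace + 4 other points

# min ||c||_1  s.t.  D c = x, via the standard LP split c = c+ - c-.
n = D.shape[1]
res = linprog(np.ones(2 * n), A_eq=np.hstack([D, -D]), b_eq=x,
              bounds=(0, None), method="highs")
c = res.x[:n] - res.x[n:]

# Recovery is subspace-preserving: only the first three (same-subspace)
# coefficients are active; the four cross-subspace coefficients are zero.
print(np.round(c, 6))
```

Here the subspaces are orthogonal, so any weight placed on the other subspace's points must cancel in the equality constraint while still adding ℓ1 cost, which is why the optimum is subspace-preserving; the paper's conditions characterize exactly when this happens in general.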

@inproceedings{kcrmv2021icml,
  abstract = {Much of the theory for classical sparse recovery is based on conditions on the dictionary that are both necessary and sufficient (e.g., nullspace property) or only sufficient (e.g., incoherence and restricted isometry). In contrast, much of the theory for subspace-preserving recovery, the theoretical underpinnings for sparse subspace classification and clustering methods, is based on conditions on the subspaces and the data that are only sufficient (e.g., subspace incoherence and data inner-radius). This paper derives a necessary and sufficient condition for subspace-preserving recovery that is inspired by the classical nullspace property.
Based on this novel condition, called here the subspace nullspace property, we derive equivalent characterizations that either admit a clear geometric interpretation, relating data distribution and subspace separation to recovery success, or can be verified using a finite set of extreme points of a properly defined set. We further exploit these characterizations to derive new sufficient conditions, based on inner-radius and outer-radius measures and dual bounds, that generalize existing conditions while preserving the geometric interpretations. These results fill an important gap in the subspace-preserving recovery literature.},
  author = {Kaba, Mustafa Devrim and You, Chong and Robinson, Daniel R. and Mallada, Enrique and Vidal, Rene},
  booktitle = {International Conference on Machine Learning (ICML)},
  month = {5},
  note = {(21.5\% acceptance)},
  pages = {1-9},
  pubstate = {accepted},
  title = {A Nullspace Property for Subspace-Preserving Recovery},
  url = {https://mallada.ece.jhu.edu/pubs/2021-ICML-KCRMV.pdf},
  year = {2021}
}
[2] H. Min, S. Tarmoun, R. Vidal, and E. Mallada, “On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks,” in International Conference on Machine Learning (ICML), 2021, pp. 1-9.

Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction to explain this phenomenon is to study how initialization and overparametrization affect convergence and implicit bias of training algorithms. In this paper, we present a novel analysis of single-hidden-layer linear networks trained under gradient flow, which connects initialization, optimization, and overparametrization. Firstly, we show that the squared loss converges exponentially to its optimum at a rate that depends on the level of imbalance of the initialization. Secondly, we show that proper initialization constrains the dynamics of the network parameters to lie within an invariant set. In turn, minimizing the loss over this set leads to the min-norm solution. Finally, we show that large hidden layer width, together with (properly scaled) random initialization, ensures proximity to such an invariant set during training, allowing us to derive a novel non-asymptotic upper bound on the distance between the trained network and the min-norm solution.
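The role of imbalance can be seen in a minimal sketch (not from the paper; the scalar model, target, and step sizes below are illustrative choices): for a two-layer scalar model f(x) = v·u·x trained on the squared loss, the imbalance u² − v² is conserved by gradient flow, and an initialization with larger imbalance converges faster even when both initializations start from the same product u·v.

```python
# Toy illustration (not from the paper): in the scalar two-layer model
# f(x) = v*u*x with loss L = 0.5*(u*v - theta)^2, gradient flow conserves
# the imbalance u^2 - v^2, and larger imbalance speeds up convergence.
import numpy as np

def train(u, v, theta=2.0, lr=0.01, steps=500):
    """Discretized gradient flow (small-step gradient descent) on (u, v)."""
    losses, imbalance0 = [], u**2 - v**2
    for _ in range(steps):
        r = u * v - theta                         # residual
        u, v = u - lr * v * r, v - lr * u * r     # gradient step
        losses.append(0.5 * r**2)
    # Return loss curve and the drift of the conserved quantity u^2 - v^2.
    return np.array(losses), abs((u**2 - v**2) - imbalance0)

# Same product u*v = 0.25 at initialization, different imbalance.
loss_bal, drift_bal = train(u=0.5, v=0.5)     # balanced: u^2 - v^2 = 0
loss_imb, drift_imb = train(u=2.0, v=0.125)   # imbalanced: u^2 - v^2 ~ 3.98

print(loss_bal[50], loss_imb[50])  # imbalanced init is far ahead at step 50
```

The drift values stay near zero (up to discretization error), mirroring the invariant-set picture in the abstract, while the loss curves show the imbalance-dependent convergence rate.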

@inproceedings{mtvm2021icml,
  abstract = {Neural networks trained via gradient descent with random initialization and without any regularization enjoy good generalization performance in practice despite being highly overparametrized. A promising direction to explain this phenomenon is to study how initialization and overparametrization affect convergence and implicit bias of training algorithms. In this paper, we present a novel analysis of single-hidden-layer linear networks trained under gradient flow, which connects initialization, optimization, and overparametrization. Firstly, we show that the squared loss converges exponentially to its optimum at a rate that depends on the level of imbalance of the initialization. Secondly, we show that proper initialization constrains the dynamics of the network parameters to lie within an invariant set. In turn, minimizing the loss over this set leads to the min-norm solution. Finally, we show that large hidden layer width, together with (properly scaled) random initialization, ensures proximity to such an invariant set during training, allowing us to derive a novel non-asymptotic upper bound on the distance between the trained network and the min-norm solution.},
  author = {Min, Hancheng and Tarmoun, Salma and Vidal, Rene and Mallada, Enrique},
  booktitle = {International Conference on Machine Learning (ICML)},
  month = {5},
  note = {(21.5\% acceptance)},
  pages = {1-9},
  pubstate = {accepted},
  title = {On the Explicit Role of Initialization on the Convergence and Implicit Bias of Overparametrized Linear Networks},
  url = {https://mallada.ece.jhu.edu/pubs/2021-ICML-MTVM.pdf},
  year = {2021}
}