ML Seminar @ JHU

I gave a talk on “Learning to be safe, in finite time: Multi-armed Bandits and Reinforcement Learning” at the ML Seminar, Johns Hopkins University (host: Raman Arora). Related publications include [1, 2].

[1] A. Castellano, J. Bazerque, and E. Mallada, “Learning to be safe, in finite time,” in American Control Conference (ACC), 2021, pp. 909–916.

This paper aims to put forward the concept that learning to take safe actions in unknown environments, even with probability-one guarantees, can be achieved without the need for an unbounded number of exploratory trials, provided that one is willing to mildly relax one's optimality requirements. We focus on the canonical multi-armed bandit problem and seek to study the exploration-preservation trade-off intrinsic to safe learning. More precisely, by defining a handicap metric that counts the number of unsafe actions, we provide an algorithm for discarding unsafe machines (or actions), with probability one, that achieves constant handicap. Our algorithm is rooted in the classical sequential probability ratio test, redefined here for continuing tasks. Under standard assumptions on sufficient exploration, our rule provably detects all unsafe machines in an (expected) finite number of rounds. The analysis also unveils a trade-off between the number of rounds needed to secure the environment and the probability of discarding safe machines. Our decision rule can wrap around any other algorithm to optimize a specific auxiliary goal, since it provides a safe environment to search for (approximately) optimal policies. Simulations corroborate our theoretical findings and further illustrate the aforementioned trade-offs.
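
To make the mechanism concrete, below is a minimal, hypothetical Python sketch of the idea described in the abstract: a sequential probability ratio test that wraps around an arbitrary arm-selection rule, discards arms once the evidence that they are unsafe crosses a threshold, and tracks the handicap (number of unsafe actions taken). The Bernoulli unsafe-event model, the thresholds p0/p1, and the name SPRTSafetyFilter are illustrative assumptions, not the paper's exact formulation (which redefines the test for continuing tasks).

# Hypothetical sketch (not the paper's exact algorithm): an SPRT-based safety
# filter around a bandit. Assumed model: each pull of arm a triggers an
# "unsafe event" with unknown probability p_a; an arm is unsafe if p_a >= p1
# and safe if p_a <= p0, for designer-chosen thresholds p0 < p1.
import math
import random


class SPRTSafetyFilter:
    def __init__(self, n_arms, p0=0.05, p1=0.20, alpha=1e-3, beta=1e-3):
        self.p0, self.p1 = p0, p1
        # Wald's classical SPRT thresholds on the log-likelihood ratio.
        self.upper = math.log((1 - beta) / alpha)   # crossing -> declare unsafe
        self.lower = math.log(beta / (1 - alpha))   # crossing -> declare safe
        self.llr = [0.0] * n_arms
        self.status = ["testing"] * n_arms          # "testing" | "safe" | "unsafe"
        self.handicap = 0                           # total unsafe actions taken

    def update(self, arm, unsafe_event):
        """Record one pull of `arm` and update its running test statistic."""
        if unsafe_event:
            self.handicap += 1
        if self.status[arm] != "testing":
            return
        if unsafe_event:
            self.llr[arm] += math.log(self.p1 / self.p0)
        else:
            self.llr[arm] += math.log((1 - self.p1) / (1 - self.p0))
        if self.llr[arm] >= self.upper:
            self.status[arm] = "unsafe"             # discard this machine
        elif self.llr[arm] <= self.lower:
            self.status[arm] = "safe"

    def allowed_arms(self):
        return [a for a, s in enumerate(self.status) if s != "unsafe"]


# Usage: wrap the filter around any arm-selection rule (here, uniform random).
if __name__ == "__main__":
    true_unsafe_rates = [0.02, 0.03, 0.30]          # arm 2 is the unsafe one
    filt = SPRTSafetyFilter(n_arms=3)
    for t in range(5000):
        arm = random.choice(filt.allowed_arms())
        filt.update(arm, random.random() < true_unsafe_rates[arm])
    print(filt.status, "handicap =", filt.handicap)

In this toy version, shrinking alpha makes the test slower to flag unsafe arms (a larger handicap) but less likely to discard safe ones, mirroring the trade-off between securing the environment quickly and avoiding the rejection of safe machines mentioned in the abstract.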

@inproceedings{cbm2021acc,
  author = {Castellano, Agustin and Bazerque, Juan and Mallada, Enrique},
  booktitle = {American Control Conference (ACC)},
  doi = {10.23919/ACC50511.2021.9482829},
  grants = {CPS-1544771, CAREER-1752362, TRIPODS-1934979},
  month = {5},
  pages = {909-916},
  record = {submitted Sep. 2020, accepted Jan. 2021},
  title = {Learning to be safe, in finite time},
  url = {https://mallada.ece.jhu.edu/pubs/2021-ACC-CBM.pdf},
  year = {2021}
}
[2] [Unresolved BibTeX entry with key cmbm2021preprint]