
I am a PostDoc working with Csaba Szepesvári at the University of Alberta. In my research, I focus on developing principled and practical algorithms for sequential decision making. My PostDoc is supported by an Early Postdoc.Mobility fellowship of the Swiss National Science Foundation.
I hold a PhD in machine learning, completed under the supervision of Andreas Krause at ETH Zurich. In 2020, I did a research internship at DeepMind. Prior to that, I received Bachelor's and Master's degrees in Mathematics from ETH Zurich.
My research interests include reinforcement learning, adaptive experimental design, Bayesian optimization, robustness, and safety. I am excited about proving strong theoretical guarantees for algorithms and understanding the fundamental limits of what is possible. At the same time, my research is often inspired by challenging real-world applications. See below for some of my research highlights.
News
| Date | News |
| --- | --- |
| Jan 20, 2023 | Two papers accepted |
| Feb 1, 2022 | I am serving as Associate Chair at ICML 2022 |
| Aug 1, 2021 | Started my PostDoc at the University of Alberta |
| May 17, 2021 | Successfully defended my PhD thesis! |
| Dec 15, 2020 | I received an SNF Early Postdoc.Mobility Fellowship |
Research Highlights
Safe Bayesian Optimization for Particle Accelerators
Together with collaborators at PSI and ETH Zurich, we developed data-driven, safe tuning algorithms for particle accelerators. Manually adjusting machine parameters is a recurring and time-consuming task on many accelerators, and it cuts into valuable experiment time. A main difficulty is that all adjustments need to respect safety constraints to avoid damaging the machines (or triggering automated shutdown procedures). We successfully deployed our methods on two major experimental facilities at PSI: the High Intensity Proton Accelerator (HIPA) and the Swiss Free Electron Laser (SwissFEL).
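To give a flavor of the approach, here is a minimal sketch of generic safe Bayesian optimization (this is an illustration of the general idea, not the exact algorithm we deployed at PSI; the toy objective, the safety measure, and the confidence parameter `beta` are all hypothetical). One Gaussian process models the tuning objective and another models the safety measure; a candidate only counts as safe if the pessimistic (lower) confidence bound of the safety model clears the threshold, and among the safe candidates the optimistically best one is evaluated next.

```python
# Minimal safe Bayesian optimization sketch (illustrative only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):   # hypothetical tuning objective (e.g. a beam quality metric)
    return -(x - 0.6) ** 2

def safety(x):      # hypothetical safety measure; must stay >= 0
    return 1.0 - 2.0 * abs(x - 0.5)

candidates = np.linspace(0, 1, 200).reshape(-1, 1)
beta = 2.0          # confidence-bound width (assumed constant for the sketch)

# Start from a known safe parameter setting.
X = np.array([[0.5]])
y_obj = np.array([objective(0.5)])
y_safe = np.array([safety(0.5)])

for _ in range(15):
    gp_obj = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y_obj)
    gp_safe = GaussianProcessRegressor(kernel=RBF(0.2), alpha=1e-6).fit(X, y_safe)

    # Pessimistic safety check: lower confidence bound must clear the threshold.
    mu_s, std_s = gp_safe.predict(candidates, return_std=True)
    safe = mu_s - beta * std_s >= 0.0
    if not safe.any():
        break

    # Optimistic objective pick among the safe candidates.
    mu_f, std_f = gp_obj.predict(candidates, return_std=True)
    ucb = np.where(safe, mu_f + beta * std_f, -np.inf)
    x_next = candidates[int(np.argmax(ucb))]

    X = np.vstack([X, [x_next]])
    y_obj = np.append(y_obj, objective(x_next[0]))
    y_safe = np.append(y_safe, safety(x_next[0]))

print("best safe parameter found:", X[np.argmax(y_obj)][0])
```

Starting from a known safe setting matters: the safe set can only grow outward from parameters the model is already confident about, which is what keeps the machine out of shutdown territory during tuning.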
Frequentist Analysis of Information-Directed Sampling
In my thesis, I introduced a novel (frequentist) analysis of information-directed sampling (IDS), an algorithm first proposed by Daniel Russo and Benjamin Van Roy at NeurIPS 2014. Together with Tor Lattimore and Andreas Krause, we showed that the algorithm applies much more broadly to linear partial monitoring (and in fact is near-optimal in many cases). More recently, we proved that IDS is also asymptotically optimal (together with Claire Vernade, Tor Lattimore and Csaba Szepesvári). This is a very surprising result, because IDS was never explicitly designed for this regime.
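For context, IDS is easiest to state through its information ratio (shown here in the standard notation of Russo and Van Roy, as a sketch rather than the exact form used in our papers): at time $t$, the algorithm samples its action from a distribution $\pi_t$ that minimizes the ratio of squared expected regret to expected information gain,

$$\pi_t \in \operatorname*{arg\,min}_{\pi} \Psi_t(\pi), \qquad \Psi_t(\pi) = \frac{\Delta_t(\pi)^2}{I_t(\pi)},$$

where $\Delta_t(\pi)$ is the expected instantaneous regret of sampling from $\pi$ and $I_t(\pi)$ is the information gained about the unknown environment. Keeping this ratio small forces the algorithm to either incur little regret or learn a lot in every round, which is the trade-off the regret analyses exploit.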