I am a Senior Data Scientist at the Swiss Data Science Center (SDSC). I work on developing principled and practical algorithms for sequential decision-making, and deploying state-of-the-art machine learning techniques in challenging real-world applications. See below for some of my research highlights.

Before joining the SDSC, I completed a PostDoc with Csaba Szepesvári at the University of Alberta (supported by an SNF Early Postdoc.Mobility fellowship). I hold a PhD in machine learning from ETH Zurich, completed under the supervision of Andreas Krause.

If you are a student looking for a thesis project, I have several projects available here and here.

News

Nov 1, 2023 Two papers accepted at NeurIPS!
Aug 21, 2023 I am starting a new position as a Sr. Data Scientist at the Swiss Data Science Center in Zurich. Ping me if you'd like to catch up!
Aug 13, 2023 Our paper on Linear Partial Monitoring for Sequential Decision-Making: Algorithms, Regret Bounds and Applications, together with Tor Lattimore and Andreas Krause, was accepted for publication in JMLR.
May 1, 2023 We are restarting the reinforcement learning online seminars. Join us for exciting talks and engaging discussions!
Jan 20, 2023 Two papers accepted:
  • ICLR 2023 as notable-top-5%: “Near-optimal Policy Identification in Active Reinforcement Learning”. Together with Xiang Li, Viraj Mehta, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, and Ilija Bogunovic.
  • AISTATS 2023: “Efficient Planning in Combinatorial Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning”. Together with Volodymyr Tkachuk, Seyed Alireza Bakhtiari, Matej Jusup, Ilija Bogunovic, and Csaba Szepesvári.
Feb 1, 2022 I am serving as Associate Chair at ICML 2022.
Aug 1, 2021 Started my PostDoc at the University of Alberta.
May 17, 2021 Successfully defended my PhD thesis!
Dec 15, 2020 I received an SNF Early Postdoc.Mobility Fellowship.

Research Highlights

Safe Bayesian Optimization for Particle Accelerators

Together with collaborators at PSI and ETH Zurich, I developed safe data-driven tuning algorithms for particle accelerators. Manually adjusting machine parameters is a recurring and time-consuming task on many accelerators that cuts into valuable experiment time. A main difficulty is that all adjustments must respect safety constraints to avoid damaging the machines (or triggering automated shutdown procedures). We successfully deployed our methods on two major experimental facilities at PSI, the High Intensity Proton Accelerator (HIPA) and the Swiss Free Electron Laser (SwissFEL).
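The underlying loop can be sketched as safe Bayesian optimization: model both the objective and the safety measure with Gaussian processes, and only evaluate parameters whose pessimistic safety estimate is acceptable. Below is a minimal toy sketch, not our deployed system; the objective `f`, constraint `g`, kernel, and all hyperparameters are illustrative stand-ins.

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    """GP posterior mean and stddev at test points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: -(x - 0.6) ** 2           # toy objective to maximize
g = lambda x: 0.25 - np.abs(x - 0.5)    # toy constraint: safe iff g(x) >= 0

X = np.array([0.5])                     # known-safe seed point
yf, yg = f(X), g(X)
grid = np.linspace(0.0, 1.0, 200)
beta = 2.0                              # confidence multiplier

for _ in range(15):
    mu_f, sd_f = gp_posterior(X, yf, grid)
    mu_g, sd_g = gp_posterior(X, yg, grid)
    safe = mu_g - beta * sd_g >= 0      # pessimistic safety check
    ucb = np.where(safe, mu_f + beta * sd_f, -np.inf)
    x_next = grid[np.argmax(ucb)]       # optimistic pick among certified-safe points
    X = np.append(X, x_next)
    yf = np.append(yf, f(x_next))
    yg = np.append(yg, g(x_next))

best = X[np.argmax(yf)]                 # best safely evaluated parameter
```

The key design choice is the asymmetry: the safety model is used pessimistically (lower confidence bound must be non-negative), while the objective is used optimistically (upper confidence bound), so the safe region grows gradually outward from the known-safe seed.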

Frequentist Analysis of Information-Directed Sampling

In my PhD thesis, I pioneered the mathematical foundations of information-directed sampling (IDS), an algorithm design principle proposed by Daniel Russo and Benjamin Van Roy. Together with Tor Lattimore and Andreas Krause, I showed that the algorithm applies much more broadly to linear partial monitoring (and is provably near-optimal in all finite-action settings). More recently, together with Claire Vernade, Tor Lattimore, and Csaba Szepesvári, I showed that IDS is also asymptotically optimal. This resolves an open problem in the literature, and is all the more remarkable because IDS was never explicitly designed for this regime.
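The IDS principle itself is simple to state: at each round, play the action distribution that minimizes the information ratio, (expected regret)² / (expected information gain). Since a minimizer supported on at most two actions always exists, a pairwise search over mixing weights suffices. Below is a minimal sketch with made-up per-action gap and information-gain estimates; in a real algorithm these would come from confidence sets or a model of the environment.

```python
import numpy as np

def ids_distribution(delta, info, n_grid=1001):
    """Minimize the information ratio (expected gap)^2 / (expected
    information gain) over action distributions, via a grid search
    over two-action mixtures (two-point support is sufficient)."""
    K = len(delta)
    p = np.linspace(0.0, 1.0, n_grid)
    best_ratio, best_choice = np.inf, (0, 0, 1.0)
    for a in range(K):
        for b in range(K):
            gap = p * delta[a] + (1 - p) * delta[b]    # expected regret
            gain = p * info[a] + (1 - p) * info[b]     # expected information
            ratio = gap ** 2 / gain
            i = np.argmin(ratio)
            if ratio[i] < best_ratio:
                best_ratio, best_choice = ratio[i], (a, b, p[i])
    return best_ratio, best_choice

# hypothetical per-action estimates (purely illustrative)
delta = np.array([0.1, 0.3, 0.5])   # estimated regret (gap) per action
info = np.array([0.05, 0.2, 0.6])   # estimated information gain per action

ratio, (a, b, p) = ids_distribution(delta, info)
```

In this example, randomizing between the low-regret action and the high-information action achieves a strictly smaller ratio than any deterministic choice, which is exactly the exploration-exploitation trade-off IDS formalizes.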

Publications

2023

  1. Regret Minimization via Saddle Point Optimization
Johannes Kirschner, Alireza Bakhtiari, Kushagra Chandak, Volodymyr Tkachuk, and Csaba Szepesvári
    In Proc. Neural Information Processing Systems (NeurIPS) Dec 2023
  2. Linear Partial Monitoring for Sequential Decision-Making: Algorithms, Regret Bounds and Applications
Johannes Kirschner, Tor Lattimore, and Andreas Krause
    Journal of Machine Learning Research (JMLR) Dec 2023
  3. Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off
    Zichen Zhang, Johannes Kirschner, Junxi Zhang, Francesco Zanini, Alex Ayoub, Masood Dehghan, and Dale Schuurmans
    In Proc. Neural Information Processing Systems (NeurIPS) Dec 2023
  4. Near-optimal Policy Identification in Active Reinforcement Learning
    Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, and Ilija Bogunovic
    Accepted at ICLR (notable-top-5%) May 2023
  5. Efficient Planning in Combinatorial Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning
Volodymyr Tkachuk, Seyed Alireza Bakhtiari, Johannes Kirschner, Matej Jusup, Ilija Bogunovic, and Csaba Szepesvári
    Accepted at AISTATS Apr 2023

2022

  1. Tuning particle accelerators with safety constraints using Bayesian optimization
Johannes Kirschner, Mojmír Mutný, Andreas Krause, Jaime Portugal, Nicole Hiller, and Jochem Snuverink
    Phys. Rev. Accel. Beams Jun 2022

2021

  1. Information-Directed Sampling — Frequentist Analysis and Applications
    Johannes Kirschner
PhD Thesis, ETH Zurich Jun 2021
  2. Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback
    Marc Jourdan, Mojmír Mutný, Johannes Kirschner, and Andreas Krause
    In Algorithmic Learning Theory Jun 2021
  3. Bias-Robust Bayesian Optimization via Dueling Bandits
    Johannes Kirschner, and Andreas Krause
    In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS) Jul 2021
  4. Asymptotically Optimal Information-Directed Sampling
Johannes Kirschner, Tor Lattimore, Claire Vernade, and Csaba Szepesvári
    In Proc. International Conference on Learning Theory (COLT) Aug 2021

2020

  1. Distributionally Robust Bayesian Optimization
Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, and Andreas Krause
    In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS) Aug 2020
  2. Information Directed Sampling for Linear Partial Monitoring
Johannes Kirschner, Tor Lattimore, and Andreas Krause
    In Proc. International Conference on Learning Theory (COLT) Jul 2020
  3. Experimental Design for Optimization of Orthogonal Projection Pursuit Models
Mojmír Mutný, Johannes Kirschner, and Andreas Krause
    In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI) Feb 2020

2019

  1. Bayesian Optimization for Fast and Safe Parameter Tuning of SwissFEL
Johannes Kirschner, Manuel Nonnenmacher, Mojmír Mutný, Nicole Hiller, Andreas Adelmann, Rasmus Ischebeck, and Andreas Krause
    In Proc. International Free-Electron Laser Conference (FEL2019) Jun 2019
  2. Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces
Johannes Kirschner, Mojmír Mutný, Nicole Hiller, Rasmus Ischebeck, and Andreas Krause
In Proc. International Conference on Machine Learning (ICML) Jun 2019
  3. Information-Directed Exploration for Deep Reinforcement Learning
Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, and Andreas Krause
    In Proc. International Conference on Learning Representations (ICLR) May 2019
  4. Stochastic Bandits with Context Distributions
    Johannes Kirschner, and Andreas Krause
    In Proc. Neural Information Processing Systems (NeurIPS) Dec 2019

2018

  1. Information Directed Sampling and Bandits with Heteroscedastic Noise
    Johannes Kirschner, and Andreas Krause
    In Proc. International Conference on Learning Theory (COLT) Jul 2018