I am a PostDoc working with Csaba Szepesvári at the University of Alberta. In my research, I focus on developing principled and practical algorithms for sequential decision making. My PostDoc is supported by an Early Postdoc.Mobility fellowship of the Swiss National Science Foundation.

I hold a PhD in machine learning, which I completed under the supervision of Andreas Krause at ETH Zurich. In 2020, I completed a research internship at DeepMind. Prior to that, I received Bachelor's and Master's degrees in Mathematics from ETH Zurich.

My research interests include topics related to reinforcement learning, adaptive experimental design, Bayesian optimization, robustness and safety. I am excited about proving strong theoretical guarantees for algorithms and understanding the fundamental limits of what is possible. At the same time, my research is often inspired by challenging real-world applications. See below for some of my research highlights.

News
Jan 20, 2023 Two papers accepted:
  • ICLR 2023 as notable-top-5%: "Near-optimal Policy Identification in Active Reinforcement Learning". Together with Xiang Li, Viraj Mehta, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, and Ilija Bogunovic.
  • AISTATS 2023: "Efficient Planning in Combinatorial Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning". Together with Volodymyr Tkachuk, Seyed Alireza Bakhtiari, Matej Jusup, Ilija Bogunovic, and Csaba Szepesvári.
Jan 1, 2023 ✨ I am on the job market in 2023. ✨
Feb 1, 2022 I am serving as Associate Chair at ICML 2022.
Aug 1, 2021 Started my PostDoc at the University of Alberta.
May 17, 2021 Successfully defended my PhD thesis!
Dec 15, 2020 I received an SNF Early Postdoc.Mobility Fellowship.

Research Highlights

Safe Bayesian Optimization for Particle Accelerators

Together with collaborators at PSI and ETH Zurich, we developed data-driven and safe tuning algorithms for particle accelerators. Manually adjusting machine parameters is a recurring and time-consuming task that is required on many accelerators and cuts into valuable time for experiments. A main difficulty is that all adjustments need to respect safety constraints to avoid damaging the machines (or triggering automated shutdown procedures). We successfully deployed our methods on two major experimental facilities at PSI, the High Intensity Proton Accelerator (HIPA) and the Swiss Free Electron Laser (SwissFEL).
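
To illustrate the core idea of safety-constrained tuning (not the exact algorithm from our papers), here is a toy sketch: a Gaussian process models the safety metric, and only candidate parameters whose pessimistic (lower confidence bound) prediction stays above a safety threshold are considered for evaluation. All names, kernel choices, and constants below are illustrative assumptions.

```python
# Toy sketch of a safe candidate selection step in safety-constrained
# Bayesian optimization. A GP posterior over the safety metric gives a
# lower confidence bound (LCB); only points with LCB above the safety
# threshold are eligible for the next evaluation.
import numpy as np

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel between 1-D input arrays a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    # Standard GP regression posterior mean and (marginal) std deviation.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_query)
    Kss = rbf(x_query, x_query)
    alpha = np.linalg.solve(K, y_train)
    mu = Ks.T @ alpha
    cov = Kss - Ks.T @ np.linalg.solve(K, Ks)
    return mu, np.sqrt(np.clip(np.diag(cov), 0.0, None))

def safe_candidates(x_train, y_safety, x_grid, threshold, beta=2.0):
    # Keep only grid points whose pessimistic safety estimate
    # (mu - beta * sigma) exceeds the threshold.
    mu, sd = gp_posterior(x_train, y_safety, x_grid)
    return x_grid[mu - beta * sd >= threshold]
```

An optimizer would then pick the next evaluation from this safe set (e.g. by maximizing an acquisition function over it), so the machine is never driven into parameter regions the model cannot certify as safe.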

Frequentist Analysis of Information-Directed Sampling

In my thesis, I introduced a novel (frequentist) analysis of information-directed sampling (IDS), an algorithm first proposed by Daniel Russo and Benjamin Van Roy at NeurIPS 2014. Together with Tor Lattimore and Andreas Krause, we showed that the algorithm applies much more broadly to linear partial monitoring (and in fact is near-optimal in many cases). More recently, we proved that IDS is also asymptotically optimal (together with Claire Vernade, Tor Lattimore and Csaba Szepesvári). This is a very surprising result, because IDS was never explicitly designed for this regime.
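
The principle behind IDS can be conveyed with a toy variant (a simplification for illustration, not the algorithm analyzed in the papers): among all arms, pick the one minimizing the ratio of squared estimated regret to an information proxy, here the posterior variance of the arm's mean.

```python
# Toy, deterministic IDS-style action selection from posterior samples.
# The information ratio trades off expected regret against information:
# arms with low regret or high remaining uncertainty are favored.
import numpy as np

def ids_action(samples):
    """samples: (n_samples, n_arms) posterior draws of arm mean rewards."""
    best = samples.max(axis=1)                   # per-draw optimal value
    delta = best.mean() - samples.mean(axis=0)   # estimated regret per arm
    info = samples.var(axis=0) + 1e-12           # crude information proxy
    ratio = delta ** 2 / info                    # information ratio
    return int(np.argmin(ratio))
```

The actual IDS algorithm minimizes the information ratio over randomized action distributions and uses a proper mutual-information term, but the trade-off it encodes is the one shown here: never pay much regret for little information.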

Publications

2023
  1. Linear Partial Monitoring for Sequential Decision-Making: Algorithms, Regret Bounds and Applications
    Johannes Kirschner, Tor Lattimore, and Andreas Krause
    arXiv preprint 2023
  2. Efficient Planning in Combinatorial Action Spaces with Applications to Cooperative Multi-Agent Reinforcement Learning
    Volodymyr Tkachuk, Seyed Alireza Bakhtiari, Johannes Kirschner, Matej Jusup, Ilija Bogunovic, and Csaba Szepesvári
    Accepted at AISTATS 2023
  3. Near-optimal Policy Identification in Active Reinforcement Learning
    Xiang Li, Viraj Mehta, Johannes Kirschner, Ian Char, Willie Neiswanger, Jeff Schneider, Andreas Krause, and Ilija Bogunovic
    Accepted at ICLR (notable-top-5%) 2023

2022
  1. Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off
    Zichen Zhang, Johannes Kirschner, Junxi Zhang, Francesco Zanini, Alex Ayoub, Masood Dehghan, and Dale Schuurmans
    arXiv preprint 2022
  2. Tuning particle accelerators with safety constraints using Bayesian optimization
    Johannes Kirschner, Mojmir Mutný, Andreas Krause, Jaime Portugal, Nicole Hiller, and Jochem Snuverink
    Phys. Rev. Accel. Beams Jun 2022

2021
  1. Information-Directed Sampling — Frequentist Analysis and Applications
    Johannes Kirschner
    PhD Thesis, ETH Zurich, Jun 2021
  2. Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback
    Marc Jourdan, Mojmír Mutný, Johannes Kirschner, and Andreas Krause
    In Algorithmic Learning Theory Jun 2021
  3. Bias-Robust Bayesian Optimization via Dueling Bandits
    Johannes Kirschner, and Andreas Krause
    In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS) Jul 2021
  4. Asymptotically Optimal Information-Directed Sampling
    Johannes Kirschner, Tor Lattimore, Claire Vernade, and Csaba Szepesvári
    In Proc. International Conference on Learning Theory (COLT) Aug 2021

2020
  1. Distributionally Robust Bayesian Optimization
    Johannes Kirschner, Ilija Bogunovic, Stefanie Jegelka, and Andreas Krause
    In Proc. International Conference on Artificial Intelligence and Statistics (AISTATS) Aug 2020
  2. Information Directed Sampling for Linear Partial Monitoring
    Johannes Kirschner, Tor Lattimore, and Andreas Krause
    In Proc. International Conference on Learning Theory (COLT) Jul 2020
  3. Experimental Design for Optimization of Orthogonal Projection Pursuit Models
    Mojmir Mutný, Johannes Kirschner, and Andreas Krause
    In Proceedings of the 34th AAAI Conference on Artificial Intelligence (AAAI) Feb 2020

2019
  1. Bayesian Optimization for Fast and Safe Parameter Tuning of SwissFEL
    Johannes Kirschner, Manuel Nonnenmacher, Mojmir Mutný, Nicole Hiller, Andreas Adelmann, Rasmus Ischebeck, and Andreas Krause
    In Proc. International Free-Electron Laser Conference (FEL2019) Jun 2019
  2. Adaptive and Safe Bayesian Optimization in High Dimensions via One-Dimensional Subspaces
    Johannes Kirschner, Mojmir Mutný, Nicole Hiller, Rasmus Ischebeck, and Andreas Krause
    In Proc. International Conference on Machine Learning (ICML) Jun 2019
  3. Information-Directed Exploration for Deep Reinforcement Learning
    Nikolay Nikolov, Johannes Kirschner, Felix Berkenkamp, and Andreas Krause
    In Proc. International Conference on Learning Representations (ICLR) May 2019
  4. Stochastic Bandits with Context Distributions
    Johannes Kirschner, and Andreas Krause
    In Proc. Neural Information Processing Systems (NeurIPS) Dec 2019

2018
  1. Information Directed Sampling and Bandits with Heteroscedastic Noise
    Johannes Kirschner, and Andreas Krause
    In Proc. International Conference on Learning Theory (COLT) Jul 2018