
Abstract: Stochastic Model Predictive Control (SMPC) uses stochastic dynamical models to formulate an optimization problem, solving it at each time step to determine the optimal action via the standard receding horizon approach. In practice, however, the available stochastic model and associated probability distribution are not perfect. Discrepancies between this estimated distribution and the "true" distribution, referred to as distributional uncertainty, are inevitable and can degrade controller performance. In this talk, we will discuss two approaches to address distributional uncertainty: (i) ensuring that SMPC is inherently robust to such uncertainty, and (ii) explicitly incorporating distributional uncertainty into the algorithm formulation via Distributionally Robust Model Predictive Control (DRMPC). For both approaches, I will present closed-loop performance guarantees and simple examples to illustrate these guarantees.
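The receding-horizon loop described above can be sketched in a few lines. The following is a minimal illustration, not the speaker's method: a certainty-equivalent SMPC for a scalar linear system x⁺ = a·x + b·u + w, where the finite-horizon problem is solved on the noise-free mean dynamics via a backward Riccati recursion, only the first input is applied, and the problem is re-solved at the next measured state. All system and cost parameters are invented for illustration.

```python
import numpy as np

# Illustrative parameters (not from the talk): unstable scalar plant,
# quadratic stage cost q*x^2 + r*u^2, prediction horizon N.
a, b = 1.2, 1.0
q, r = 1.0, 0.1
N = 10

def riccati_gain(a, b, q, r, N):
    """Backward Riccati recursion on the mean (noise-free) dynamics;
    returns the feedback gain for the first stage of the horizon."""
    p = q
    for _ in range(N):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * (a - b * k)
    return k

def smpc_step(x):
    """Solve the horizon-N certainty-equivalent problem at the measured
    state x and return only the first optimal input."""
    k = riccati_gain(a, b, q, r, N)
    return -k * x

rng = np.random.default_rng(0)
x = 5.0
for _ in range(50):
    u = smpc_step(x)                                  # re-plan every step
    x = a * x + b * u + 0.1 * rng.standard_normal()   # true stochastic plant
# The closed loop contracts the state down to the noise level.
```

The point of the sketch is the structure: the optimization sees only the estimated (here, certainty-equivalent) model, while the plant injects the true noise, which is exactly where the distributional mismatch discussed in the talk enters.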

Abstract: We consider the problem of enforcing safety-critical constraints with a desired probability for discrete-time stochastic systems subject to possibly unbounded noise. To this end, we introduce a framework to design computationally tractable stochastic model predictive control (MPC) schemes that provide strong closed-loop guarantees. Probabilistic constraints are enforced with a deterministic prediction of a probabilistic reachable set (PRS). Performance guarantees are obtained by optimizing the expected cost (or a certainty-equivalent approximation thereof) conditioned on the current measured state. Recursive feasibility is ensured deterministically, independently of the possibly large noise realization, through the indirect-feedback paradigm. The resulting MPC formulations can be implemented with a computational demand comparable to a nominal (deterministic) MPC. We also discuss the computation of probabilistic reachable sets for different dynamics (linear, uncertain linear, nonlinear) and noise distributions (known distribution, sub-Gaussian, known covariance), and provide different numerical examples.
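To make the PRS-based constraint tightening concrete, here is a hedged, standard textbook sketch (not necessarily the formulation of the talk): for a halfspace constraint hᵀx ≤ c on the true state x = z + e, where z is the nominal prediction and the error e is Gaussian with covariance Σ, the chance constraint with level p reduces to a deterministic tightened constraint on z. All numbers are illustrative.

```python
import numpy as np
from statistics import NormalDist

def tightened_bound(h, c, Sigma, p):
    """Deterministic bound on the nominal prediction z such that
    h^T z <= c - margin  implies  P(h^T x <= c) >= p
    when the error e = x - z is N(0, Sigma)."""
    margin = NormalDist().inv_cdf(p) * np.sqrt(h @ Sigma @ h)
    return c - margin

h = np.array([1.0, 0.0])                       # constraint direction
Sigma = np.array([[0.04, 0.0], [0.0, 0.04]])   # error covariance (illustrative)
c, p = 1.0, 0.95

c_tight = tightened_bound(h, c, Sigma, p)
# The nominal MPC then enforces h^T z <= c_tight deterministically.
```

When only the covariance of the error is known (no Gaussian assumption), the same structure applies with the Gaussian quantile replaced by a distribution-free (e.g. one-sided Chebyshev) bound, at the price of a larger margin.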

Abstract: Gaussian Processes (GPs) have been shown to provide substantial performance gains in Model Predictive Control (MPC) by improving prediction accuracy. Despite promising practical demonstrations, their broader adoption has been hindered by high computational demands and the absence of rigorous safety guarantees. This talk will first highlight some of our recent advances in developing tailored solvers for GP-MPC. The second part will present formulations that ensure (probabilistic) constraint satisfaction while remaining practical in terms of both computational effort and conservativeness. The results will be illustrated through experiments in miniature autonomous racing.
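The core GP ingredient behind such GP-MPC schemes can be sketched compactly. The following is a generic illustration, not the speaker's solver: a GP with an RBF kernel learns a residual (e.g. the mismatch between a nominal model and observed dynamics), and its posterior mean corrects the MPC prediction while the posterior variance can feed a constraint-tightening term. Kernel hyperparameters and data are invented.

```python
import numpy as np

def rbf(A, B, ell=1.0, sf=1.0):
    """Squared-exponential kernel between row-wise input sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(X, y, Xs, noise=1e-2):
    """Exact GP regression: posterior mean and variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(Xs, X)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha                      # corrects the nominal prediction
    v = np.linalg.solve(L, Ks.T)
    var = rbf(Xs, Xs).diagonal() - (v**2).sum(0)   # fuels tightening
    return mean, var

# Illustrative residual data: samples of an unknown model error.
X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.sin(X[:, 0])
mean, var = gp_posterior(X, y, np.array([[0.75]]))
```

The O(n³) Cholesky factorization in this exact formulation is precisely the computational bottleneck that motivates the tailored solvers mentioned in the talk.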

Abstract: Reinforcement Learning–based Model Predictive Control (RLMPC) combines the data-driven policy learning capabilities of reinforcement learning with the structured repeated-planning policy optimization of MPC, enabling high-performance decision-making for stochastic systems. The RLMPC framework builds on a strong theoretical foundation and introduces a decision-oriented approach to modeling Markov decision processes, in contrast to the classical prediction-oriented approach. This perspective has broad implications for sequential decision-making and is especially relevant for AI-based decision-making. Recent algorithmic advances and software developments have made RLMPC applicable to real-world problems. However, most existing approaches rely on online RL, which is unsuitable for safety-critical applications. An important future direction is offline RLMPC, where the MPC policy is trained entirely on previously collected data and periodically updated as new batches of data become available. While the theoretical foundations for offline RLMPC are largely in place, algorithm development is lacking. Another promising direction is in-context RLMPC, where contextual information is integrated into MPC, enabling an easier transfer of MPC policies across problems. High-dimensional context information can be mapped into a compact latent representation that parametrizes the MPC scheme, making it applicable to decision-making in problems with complex contexts. Risk handling is another aspect of practical interest. Probabilistic risk constraints can be handled through direct gradient-based optimization using policy-gradient methods despite their non-separable structure. This can be used to develop RLMPC schemes for learning risk-aware policies. The talk will outline these developments and highlight exciting future research directions.

Abstract: In this talk, we introduce and study different dissipativity notions and different turnpike properties for discrete-time stochastic nonlinear optimal control problems. The proposed stochastic dissipativity notions extend the classic notion of Jan C. Willems to L^r random variables and to probability measures. Our stochastic turnpike properties range from a formulation for random variables via turnpike phenomena in probability and in probability measures to the turnpike property for the moments. Moreover, we investigate how different metrics (such as Wasserstein or Lévy-Prokhorov) can be leveraged in the analysis. Our results are built upon stationarity concepts in distribution and in random variables and on the formulation of the stochastic optimal control problem as a finite-horizon Markov decision process. We investigate how the proposed dissipativity notions connect to the various stochastic turnpike properties and we work out the link between different forms of dissipativity.
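For orientation, a hedged sketch of how the classic Willems-type dissipation inequality generalizes from the deterministic to the stochastic setting; the notation below is assumed for illustration and is not taken from the talk.

```latex
% Deterministic (Willems-type) dissipativity with storage function \lambda,
% stage cost \ell, and optimal steady state (x^s, u^s):
%   \lambda(f(x,u)) - \lambda(x) \;\le\; \ell(x,u) - \ell(x^s, u^s).
%
% One natural analogue for L^r random variables X_k, U_k, with expectation
% replacing pointwise evaluation and an optimal stationary process
% (X^\star, U^\star) replacing the steady state:
\mathbb{E}\bigl[\lambda(X_{k+1})\bigr] - \mathbb{E}\bigl[\lambda(X_k)\bigr]
  \;\le\;
\mathbb{E}\bigl[\ell(X_k, U_k)\bigr] - \mathbb{E}\bigl[\ell(X^\star, U^\star)\bigr].
```

Which object plays the role of the stationary reference (a random variable, a probability measure, or its moments) is exactly what distinguishes the different dissipativity notions and the corresponding turnpike properties discussed in the talk.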

Abstract: Model Predictive Control (MPC) is well understood in the deterministic setting, yet rigorous stability and performance guarantees for stochastic MPC remain largely limited to formulations with terminal constraints and penalties. In contrast, in this talk we analyze stochastic economic MPC with an expected cost criterion and establish closed-loop guarantees without terminal conditions. Relying on stochastic dissipativity and turnpike properties, we construct closed-loop Lyapunov functions that ensure P-practical asymptotic stability of a particular optimal stationary process under different notions of stochastic convergence, such as in distribution or in the p-th mean. In addition, we derive tight near-optimal bounds for both averaged and non-averaged performance, thereby extending classical deterministic results to the stochastic domain. Finally, we show that the abstract stochastic MPC scheme requiring distributional knowledge shares the same closed-loop properties as a practically implementable algorithm based only on sampled state information, ensuring the applicability of our findings.