Active Learning for Systems and Control (ALeSCo) - Data Informativity, Uncertainty, and Guarantees

In the ALeSCo Research Unit (FOR 5785) the ICS works on the following projects:

Project P2: Neural ODE training via stochastic control and uncertainty quantification

Principal Investigator: Prof. Dr.-Ing. Timm Faulwasser

Data-driven and learning-based approaches for modelling dynamic systems and for designing control laws have gained prominence in recent years. Due to their universal approximation properties, neural networks in different variants and architectures are among the most frequently considered learning methods in systems and control. In contrast to this trend, this project does not ask what machine learning can do for control. Rather, we explore how systems and control methods can benefit the design and analysis of training formulations for neural networks.

Specifically, Project P2 considers Neural Ordinary Differential Equations (NODEs) and their explicit discretizations, which take the form of Residual Networks (ResNets). We explore how generalization properties of neural networks can be considered directly in the training problems and how system-theoretic dissipativity notions for optimal control problems allow for performance-preserving pruning of trained networks. To this end, we investigate novel data informativity notions tailored to neural networks. Finally, we explore how stochastic control concepts, i.e., feedback policies, can be leveraged to design neural networks with quantifiable generalization properties. The investigated methods are evaluated on benchmark problems from the machine learning literature and on systems-and-control-specific benchmarks developed within the research unit ALeSCo.
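To illustrate the connection between NODEs and ResNets mentioned above, the following minimal sketch shows how an explicit Euler discretization of the NODE dynamics dx/dt = f(x(t), theta(t)) yields the residual update x_{k+1} = x_k + h f(x_k, theta_k), i.e., a ResNet block with an identity skip connection. The particular vector field f and its parameterization are illustrative assumptions, not the specific architecture studied in the project.

    import numpy as np

    def f(x, theta):
        # Illustrative single-hidden-layer vector field: f(x, theta) = W2 @ tanh(W1 @ x + b)
        W1, b, W2 = theta
        return W2 @ np.tanh(W1 @ x + b)

    def node_euler_forward(x0, thetas, h=0.1):
        # Explicit Euler discretization of dx/dt = f(x, theta(t)):
        # each step x_{k+1} = x_k + h * f(x_k, theta_k) is exactly a ResNet block
        # (identity skip connection plus a scaled residual mapping).
        x = x0
        for theta_k in thetas:
            x = x + h * f(x, theta_k)
        return x

In this view, the depth of the resulting ResNet corresponds to the number of Euler steps, and the step size h scales the residual branch.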

Project P6: Benchmarks for active learning in systems and control

Co-Principal Investigator: Prof. Dr.-Ing. Timm Faulwasser

Co-Principal Investigator: Prof. Dr.-Ing. Sandra Hirche

Evaluation and benchmarks are crucial for transparently comparing methods across various fields, including machine learning, optimization, and control systems. Datasets and test problems exist for supervised learning, different branches of control, and robotics. However, a notable gap remains in benchmarks focused on active learning for the control of dynamic systems. This project aims to close this gap by developing a Python package containing representative and challenging scenarios to evaluate and compare active learning techniques for systems and control. We propose evaluation procedures equipped with tailored assessment metrics. To ensure that relevant problem settings are covered, our benchmark focuses on two application domains: multi-energy systems and robotics.
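To give a rough idea of what such an evaluation procedure could look like, the following minimal sketch pairs a toy dynamic system with a baseline excitation policy and a simple identification-error metric. All names (BenchmarkEnv, random_excitation, evaluate) and the chosen metric are hypothetical illustrations, not the interface or assessment metrics of the planned package.

    import numpy as np

    # Hypothetical sketch; class and function names are illustrative,
    # not the actual API of the planned benchmark package.
    class BenchmarkEnv:
        # Toy linear system x_{k+1} = A x_k + B u_k + noise, with dynamics
        # treated as unknown by the learner.
        def __init__(self, seed=0):
            self.A = np.array([[0.9, 0.2], [0.0, 0.8]])
            self.B = np.array([[0.0], [1.0]])
            self.rng = np.random.default_rng(seed)
            self.x = np.zeros(2)

        def step(self, u):
            self.x = self.A @ self.x + self.B @ u + 0.01 * self.rng.standard_normal(2)
            return self.x.copy()

    def random_excitation(history, rng):
        # Baseline "active learner": random inputs; an actual method would choose
        # inputs that are maximally informative about the unknown dynamics.
        return rng.standard_normal(1)

    def evaluate(env, policy, n_steps=50, seed=1):
        # Illustrative assessment metric: one-step prediction error of a
        # least-squares model fitted to the data gathered by the policy.
        rng = np.random.default_rng(seed)
        data, x = [], env.x.copy()
        for _ in range(n_steps):
            u = policy(data, rng)
            x_next = env.step(u)
            data.append((x, u, x_next))
            x = x_next
        X = np.array([np.concatenate([xk, uk]) for xk, uk, _ in data])
        Y = np.array([xn for _, _, xn in data])
        theta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return np.linalg.norm(X @ theta - Y) / np.sqrt(len(data))

    print(evaluate(BenchmarkEnv(), random_excitation))

An actual benchmark scenario would replace the toy system with the domain-specific models described below and the random excitation with the active learning method under evaluation.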

In the multi-energy domain, we create scalable test problems for energy distribution systems, integrating real-world datasets, realistic power profiles, and weather data. The benchmark will cover varying levels of complexity, ranging from individual nodes with known or partially unknown dynamics to their interaction in constrained network settings. This setting covers stochastic disturbances, parametric drifts, and uncertain or unknown system topologies.

In the robotics domain, we develop benchmarks for autonomous navigation and manipulation tasks for unmanned underwater vehicles and soft robotic arm models. These benchmarks will feature adjustable levels of uncertainty in the available information about model parameters, system states, and disturbances. Experimentally derived sensor noise and external disturbance models will help to reduce the gap between simulation and reality.