Conference on
Mathematics of Machine Learning 2025

September 22nd - 25th, 2025

Hamburg University of Technology (TU Hamburg)

Audimax II
Denickestraße 22
21073 Hamburg
Germany


    Conference Goal

    In recent years, the field of Machine Learning has made significant progress in theory and applications. This success is rooted in the mutual stimulation of mathematical insight and experimental studies. On the one hand, mathematics allows us to conceptualize and formalize core problems within learning theory, leading, for instance, to performance bounds for learning algorithms. On the other hand, experimental studies confirm theoretical predictions and instigate new directions in theoretical research. This meeting aims to discuss the interaction between theory and practice, with a focus on the current gaps between the two. The talks will be centered around themes including the following.

    • Gradient Methods (gradient optimization, stochastic gradients, natural gradients, gradient methods applied to deep networks, ...)
    • Natural Geometric Structures (Information Geometry, optimal transport geometry, ...)
    • Generalisation Theory (statistical learning theory, complexity measures, ill-posed inverse problems, regularization, implicit bias, ...)
    • Functional analytical tools (approximation theory, harmonic analysis, ...)
    • Overparametrization and random matrix theory (neural tangent kernel, lazy training, convergence of gradient descent, generalization bounds, ...)

    Confirmed Keynote Speakers

    * Jürgen Jost will deliver a speech dedicated to our keynote speaker Sayan Mukherjee, who sadly passed away on March 31, 2025, in Leipzig. He was a Humboldt Professor and played a key role in AI research at Leipzig University and the Max Planck Institute for Mathematics in the Sciences. His death is a great loss. More information is available from the MPI MiS.

    Organisers

    • Nihat Ay   Hamburg University of Technology, Germany, and Santa Fe Institute, USA
    • Martin Burger   DESY and University of Hamburg, Germany
    • Benjamin Gess   TU Berlin and MPI for Mathematics in the Sciences, Germany
    • Guido Montúfar   UCLA, USA, and MPI for Mathematics in the Sciences, Germany

    Scientific Committee

    Conference Program

    Monday, Sept 22, 2025
    09:30 - 09:50 Welcome address
    09:50 - 10:40 Gabriele Steidl (TU Berlin, Germany)

    Telegrapher’s Generative Model via Kac Flows

    We propose a new generative model based on the damped wave equation, also known as telegrapher’s equation. Similar to the diffusion equation and Brownian motion, there is a Feynman-Kac type relation between the telegrapher’s equation and the stochastic Kac process in 1D. The Kac flow evolves stepwise linearly in time, so that the probability flow is Lipschitz continuous in the Wasserstein distance and, in contrast to diffusion flows, the norm of the velocity is globally bounded. Furthermore, the Kac model has the diffusion model as its asymptotic limit. We extend these considerations to a multi-dimensional stochastic process which consists of independent 1D Kac processes in each spatial component. We show that this process gives rise to an absolutely continuous curve in the Wasserstein space and compute the conditional velocity field starting in a Dirac point analytically. Using the framework of flow matching, we train a neural network that approximates the velocity field and use it for sample generation. Our numerical experiments demonstrate the scalability of our approach, and show its advantages over diffusion models. This is joint work with Richard Duong, Jannis Chemseddine and Peter K. Friz.
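
    For orientation, the damped wave (telegrapher's) equation referred to above reads, in one spatial dimension,

        \partial_{tt} p + 2a\,\partial_t p = c^2\,\partial_{xx} p,

    where the damping rate a > 0 and the speed c > 0 are generic symbols chosen here for illustration rather than notation from the talk; the associated Kac process moves with speed c and reverses direction at the jump times of a Poisson process with rate a, and its density solves this equation.
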
    10:40 - 11:10 Coffee Break
    11:10 - 11:35 Christoph Lampert (Institute of Science and Technology, Austria)

    Generalization Guarantees for Multi-task and Meta-learning

    tba
    11:35 - 12:00 Simon Weissmann (University of Mannheim, Germany)

    Almost sure convergence rates for stochastic gradient methods

    In this talk, we present recent advances in establishing almost sure convergence rates for stochastic gradient methods. Stochastic gradient methods are among the most important algorithms for training machine learning models. While classical assumptions such as strong convexity allow a simple analysis, they are rarely satisfied in applications. In recent years, global and local gradient domination properties have been shown to be a more realistic replacement for strong convexity. They were proved to hold in diverse settings such as (simple) policy gradient methods in reinforcement learning and the training of deep neural networks with analytic activation functions. We prove almost sure convergence rates of the last iterate for stochastic gradient descent (with and without momentum) under global and local gradient domination assumptions. The almost sure rates get arbitrarily close to recent rates in expectation. Finally, we demonstrate how to apply our results to the training task in both supervised and reinforcement learning. This is joint work with Waiss Azizian, Leif Döring and Sara Klein.
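
    For reference, a standard form of the (global) gradient domination condition used in this context bounds the optimality gap by a power of the gradient norm,

        f(x) - \inf f \le c\,\|\nabla f(x)\|^{\beta},

    for generic constants c > 0 and \beta \in (1, 2] (not taken from the talk), while stochastic gradient descent iterates x_{k+1} = x_k - \gamma_k g_k with step sizes \gamma_k and unbiased stochastic gradients \mathbb{E}[g_k \mid x_k] = \nabla f(x_k). Strong convexity with parameter \mu corresponds to the special case \beta = 2, c = 1/(2\mu).
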
    12:00 - 13:00 Lunch (Building N)
    13:00 - 13:50 Lénaïc Chizat (EPFL, Switzerland)

    Title: tba

    tba
    13:50 - 14:15 Viktor Stein (TU Berlin, Germany)

    Wasserstein Gradient Flows for Moreau Envelopes of f-Divergences in Reproducing Kernel Hilbert Spaces

    Commonly used f-divergences between measures, e.g., the Kullback-Leibler divergence, are subject to limitations regarding the support of the involved measures. A remedy is regularizing the f-divergence by a squared maximum mean discrepancy (MMD) associated with a characteristic kernel. We use the kernel mean embedding to show that this regularization can be rewritten as the Moreau envelope of some function on the associated reproducing kernel Hilbert space. Then, we exploit well-known results on Moreau envelopes in Hilbert spaces to analyze the MMD-regularized f-divergences, particularly their gradients. Subsequently, we use our findings to analyze Wasserstein gradient flows of MMD-regularized f-divergences. We provide proof-of-concept numerical examples for flows starting from empirical measures. Here, we cover f-divergences with infinite and finite recession constants. Lastly, we extend our results to the tight variational formulation of f-divergences and numerically compare the resulting flows. This is joint work with Sebastian Neumayer, Nicolaj Rux, and Gabriele Steidl.
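
    As a reminder, the Moreau envelope of a proper, convex, lower semicontinuous function G on a Hilbert space H with parameter \lambda > 0 is

        G^{\lambda}(x) = \inf_{y \in H} \Big\{ G(y) + \tfrac{1}{2\lambda}\,\|x - y\|_H^2 \Big\},

    and the MMD-regularized f-divergence discussed above is, schematically, of the analogous form \inf_{\sigma} \{ D_f(\sigma \mid \nu) + \tfrac{1}{2\lambda}\,\mathrm{MMD}^2(\sigma, \mu) \}; the precise setting and notation are those of the underlying paper.
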
    14:15 - 14:40 Michael Murray (University of Bath, United Kingdom)

    Title: tba

    tba
    14:40 - 15:10 Coffee Break
    15:10 - 16:00 Misha Belkin (University of California San Diego, USA)

    Feature learning and "the linear representation hypothesis" for monitoring and steering LLMs

    A trained Large Language Model (LLM) contains much of human knowledge. Yet, it is difficult to gauge the extent or accuracy of that knowledge, as LLMs do not always "know what they know" and may even be unintentionally or actively misleading. In this talk I will discuss feature learning, introducing Recursive Feature Machines, a powerful method originally designed for extracting relevant features from tabular data. I will demonstrate how this technique enables us to detect and precisely guide LLM behaviors toward almost any desired concept by manipulating a single fixed vector in the LLM activation space.
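
    The steering mechanism alluded to in the last sentence, namely adding a fixed "concept" direction to a model's hidden activations, can be sketched generically as follows; the toy network, the steering vector and the scale alpha below are illustrative stand-ins and not the speaker's Recursive Feature Machine implementation:

        import torch
        import torch.nn as nn

        # Toy stand-in for one hidden layer; in the LLM setting the hook would
        # sit on a transformer block's residual stream instead.
        model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 16))

        steering_vector = torch.randn(64)  # hypothetical fixed "concept" direction
        alpha = 2.0                        # illustrative steering strength

        def steer(module, inputs, output):
            # Shift the layer's activations along the fixed direction;
            # returning a tensor replaces the layer's original output.
            return output + alpha * steering_vector

        handle = model[0].register_forward_hook(steer)  # hook the hidden layer
        outputs = model(torch.randn(4, 16))             # this forward pass is "steered"
        handle.remove()                                 # restore the original behaviour
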
    16:00 - 16:25 Armin Iske (University of Hamburg, Germany)

    On the Convergence of Multiscale Kernel Regression under Minimalistic Assumptions

    We analyse the convergence of data regression in reproducing kernel Hilbert spaces (RKHS). This is done under minimalistic (i.e., as mild as possible) assumptions on the data and on the kernel. To this end, we first prove convergence in the RKHS norm for just one fixed kernel. Our results are then transferred to a sequence of multiple scaled kernels, whereby we obtain convergence rates for multiscale kernel regression with respect to both the RKHS norm and the maximum norm. Supporting numerical results are finally discussed.
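
    As background, the classical single-kernel estimator underlying such results is kernel ridge regression: given data (x_i, y_i), i = 1, ..., n, and a kernel K with RKHS \mathcal{H}_K, one solves

        f_{\lambda} = \arg\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} \big(f(x_i) - y_i\big)^2 + \lambda\,\|f\|_{\mathcal{H}_K}^2,

    whose solution is f_{\lambda}(x) = \sum_{j=1}^{n} \alpha_j K(x, x_j) with \alpha = (\mathbf{K} + \lambda I)^{-1} y and \mathbf{K}_{ij} = K(x_i, x_j). The multiscale scheme, the scaling of the kernels and the exact assumptions are as presented in the talk; this formula is only the standard starting point.
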
    16:25 - 16:50 Christoph Brune (University of Twente, Netherlands)

    Deep Networks are Reproducing Kernel Chains

    tba
    16:50 - 17:20 Coffee Break
    17:20 - 17:45 Marcello Carioni (University of Twente, Netherlands)

    Atomic Gradient Descents

    tba
    17:45 - 18:10 Nisha Chandramoorthy (University of Chicago, USA)

    When, why and how are some generative models robust?

    tba
    Tuesday, Sept 23, 2025
    09:00 - 09:50 Gitta Kutyniok (LMU Munich, Germany)

    Reliable and Sustainable AI: From Mathematical Foundations to Next Generation AI Computing

    The current wave of artificial intelligence is transforming industry, society, and the sciences at an unprecedented pace. Yet, despite its remarkable progress, today’s AI still suffers from two major limitations: a lack of reliability and excessive energy consumption. This lecture will begin with an overview of this dynamic field, focusing first on reliability. We will present recent theoretical advances in the areas of generalization and explainability -- core aspects of trustworthy AI that also intersect with regulatory frameworks such as the EU AI Act. From there, we will explore fundamental limitations of existing AI systems, including challenges related to computability and the energy inefficiency of current digital hardware. These challenges highlight the pressing need to rethink the foundations of AI computing. In the second part of the talk, we will turn to neuromorphic computing -- a promising and rapidly evolving paradigm that emulates biological neural systems using analog hardware. We will introduce spiking neural networks, a key model in this area, and share some of our recent mathematical findings. These results point toward a new generation of AI systems that are not only provably reliable but also sustainable.
    09:50 - 10:15 Parvaneh Joharinad (Leipzig University and MPI for Mathematics in the Sciences, Germany)

    Title: tba

    tba
    10:15 - 10:40 Diaaeldin Taha (MPI for Mathematics in the Sciences, Germany)

    Title: tba

    tba
    10:40 - 11:10 Coffee Break
    11:10 - 11:35 Amanjit Singh (University of Toronto, Canada)

    Bregman-Wasserstein gradient flows

    tba
    11:35 - 12:00 Adwait Datar (Hamburg University of Technology, Germany)

    Does the Natural Gradient Really Outperform the Euclidean Gradient?

    tba
    12:00 - 13:00 Lunch (Building N)
    13:00 - 14:00 Poster Session 1 (Building N)
    14:00 - 14:25 Semih Cayci (RWTH Aachen University, Germany)

    Convergence of Gauss-Newton in the Lazy Training Regime: A Riemannian Optimization Perspective

    tba
    14:25 - 14:50 Johannes Müller (TU Berlin, Germany)

    Functional Neural Wavefunction Optimization

    We propose a framework for the design and analysis of optimization algorithms in variational quantum Monte Carlo, drawing on geometric insights into the corresponding function space. The framework translates infinite-dimensional optimization dynamics into tractable parameter-space algorithms through a Galerkin projection onto the tangent space of the variational ansatz. This perspective unifies existing methods such as stochastic reconfiguration and Rayleigh-Gauss-Newton, provides connections to classic function-space algorithms, and motivates the derivation of novel algorithms with geometrically principled hyperparameter choices. We validate our framework with numerical experiments demonstrating its practical relevance through the accurate estimation of ground-state energies for several prototypical models in condensed matter physics modeled with neural network wavefunctions. This is joint work with Victor Armegioiu, Juan Carrasquilla, Siddhartha Mishra, Jannes Nys, Marius Zeinhofer, and Hang Zhang.
    14:50 - 15:15 Alexander Friedrich (Umeå University, Sweden)

    A First Construction of Neural ODEs on M-Polyfolds

    tba
    15:15 - 15:40 Thomas Martinetz (University of Lübeck, Germany)

    Good by Default? Generalization in Highly Overparameterized Networks

    tba
    15:40 - 16:10 Coffee Break
    16:10 - 17:00 Francis Bach (INRIA Paris Centre, France)

    Denoising diffusion models without diffusions

    Denoising diffusion models have enabled remarkable advances in generative modeling across various domains. These methods rely on a two-step process: first, sampling a noisy version of the data—an easier computational task—and then denoising it, either in a single step or through a sequential procedure. Both stages hinge on the same key component: the score function, which is closely tied to the optimal denoiser mapping noisy inputs back to clean data. In this talk, I will introduce an alternative perspective on denoising-based sampling that bypasses the need for continuous-time diffusion processes. This framework not only offers a fresh conceptual angle but also naturally extends to discrete settings, such as binary data. Joint work with Saeed Saremi and Ji-Won Park (https://arxiv.org/abs/2305.19473, https://arxiv.org/abs/2502.00557).
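
    The tie between the score function and the optimal denoiser mentioned above is, in its simplest Gaussian form, Tweedie's formula: if y = x + \sigma\varepsilon with \varepsilon \sim \mathcal{N}(0, I) and p_{\sigma} denotes the density of y, then

        \mathbb{E}[x \mid y] = y + \sigma^2\,\nabla_y \log p_{\sigma}(y).

    The notation here is generic; the diffusion-free framework and its discrete extension are as described in the talk and the cited preprints.
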
    19:00 - 22:00 Dinner
    Wednesday, Sept 24, 2025
    09:00 - 09:50 Stefanie Jegelka (MIT, USA, and TU Munich, Germany)

    Title: tba

    tba
    09:50 - 10:15 Marco Mondelli (Institute of Science and Technology, Austria)

    Learning in the Age of LLMs: Theoretical Insights into Knowledge Distillation and Test-Time-Training

    The availability of powerful models pre-trained on a vast corpus of data has spurred research on alternative training methods, and the overall goal of this talk is to give theoretical insights through the lens of high-dimensional regression. Most of the talk will focus on knowledge distillation where one uses the output of a surrogate model as labels to supervise the training of a target model and, specifically, the phenomenon of weak-to-strong generalization in which a strong student outperforms the weak teacher from which the task is learned. We provide a sharp characterization of the risk of the target model for ridgeless, high-dimensional regression, under two settings: (i) model shift, where the surrogate model is arbitrary, and (ii) distribution shift, where the surrogate model is the solution of empirical risk minimization with out-of-distribution data. As a consequence, we identify the form of the optimal surrogate model, which reveals the benefits and limitations of discarding weak features in a data-dependent fashion. This has the interpretation that weak-to-strong training, with the surrogate as the weak model, can provably outperform training with strong labels under the same data budget, but it is unable to improve the data scaling law. Finally, if time permits, I will briefly discuss test-time training (TTT) where one explicitly updates the weights of a model to adapt to the specific test instance. By focusing on linear transformers when the update rule is a single gradient step, our theory delineates the role of alignment between pre-training distribution and target task, and it quantifies the sample complexity of TTT including how it can significantly reduce the sample size required for in-context learning.
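
    For context, the ridgeless estimator referred to above is the minimum-norm interpolant obtained as the ridge penalty vanishes: for a data matrix X and labels y,

        \hat{\beta} = \lim_{\lambda \to 0^{+}} (X^{\top} X + \lambda I)^{-1} X^{\top} y = X^{+} y,

    with X^{+} the Moore-Penrose pseudoinverse. In the distillation setting sketched above, the labels y are replaced by the surrogate model's predictions on the same inputs; the precise model-shift and distribution-shift assumptions are those of the talk.
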
    10:15 - 10:40 Yury Korolev (University of Bath, United Kingdom)

    Large-time dynamics in transformer architectures with layer normalisation

    tba
    10:40 - 11:10 Coffee Break
    11:10 - 11:35 Leon Bungert (University of Würzburg, Germany)

    Robustness on the interface of geometry and probability

    In this talk I will present the latest developments in the analysis of adversarial machine learning. For this I will build on the geometric interpretation of adversarial training as a regularization problem for a nonlocal perimeter of the decision boundary. This perspective allows one to use tools from the calculus of variations to derive the asymptotics of adversarial training for small adversarial budgets as well as to rigorously connect it to a mean curvature flow of the decision boundary. We also show that adversarial training is embedded in a larger family of probabilistically robust problems. This is joint work with N. García Trillos, R. Murray, K. Stinson, T. Laux, and others.
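
    For reference, the adversarial training problem behind this geometric picture is the min-max objective

        \min_{\theta}\; \mathbb{E}_{(x,y)} \Big[ \max_{\|\delta\| \le \varepsilon} \ell\big(f_{\theta}(x + \delta), y\big) \Big]

    with adversarial budget \varepsilon; the nonlocal-perimeter and mean curvature flow interpretations described above concern the small-\varepsilon asymptotics. The notation is generic and not taken from the talk.
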
    11:35 - 12:00 Martin Lazar (University of Dubrovnik, Croatia)

    Be greedy and learn: efficient and certified algorithms for parametrized optimal control problems

    We consider parametrized linear-quadratic optimal control problems and provide their online-efficient solutions by combining greedy reduced basis methods and machine learning algorithms. To this end, we first run the offline part of the greedy control algorithm, which builds a reduced basis for the manifold of solutions. Afterwards, we apply machine learning surrogates to accelerate the online evaluation of the reduced model. The error estimates proven for the greedy procedure are further transferred to the machine learning models and thus allow for efficient a posteriori error certification. We discuss the computational costs of all considered methods in detail and show by means of two numerical examples the tremendous potential of the proposed methodology.
    12:00 - 13:00 Lunch (Building N)
    13:00 - 14:00 Poster Session 2 (Building N)
    14:00 - 14:50 Frank Nielsen (Sony Computer Science Laboratories Inc., Japan)

    Recent perspectives on Bregman divergences

    Bregman divergences (BDs) play a prominent role in information sciences and engineering. In this talk, we will introduce some recent extensions and generalizations of BDs with applications: First, we extend the BDs to two comparable generators to define duo Bregman pseudo-divergences and show applications on truncated exponential families including the truncated normal family. Second, we show how to reconstruct statistical divergences from integral-based Bregman generators. Notably, by selecting the partition function rather than the cumulant function of an exponential family, we recover the extended Kullback-Leibler divergence on unnormalized densities. Third, by constraining the parameter space of BDs to submanifolds, we discuss some properties of curved BDs and Hessian submanifolds. Fourth, we show that the generalized Legendre transforms of Artstein-Avidan--Milman, axiomatized as inverse-ordering invertible transforms, find their root in the dual coordinatization of dual Hessian structures of information geometry. Finally, we conclude with the concept of maximal invariants which provides a valuable perspective for analyzing and understanding the structural forms of statistical divergences.
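
    As a reminder, the Bregman divergence generated by a strictly convex, differentiable function F is

        B_F(x, y) = F(x) - F(y) - \langle \nabla F(y),\, x - y \rangle,

    which recovers the squared Euclidean distance for F(x) = \|x\|^2 and the (generalized) Kullback-Leibler divergence for the negative entropy F(x) = \sum_i x_i \log x_i. The extensions listed above (duo Bregman pseudo-divergences, curved Bregman divergences, etc.) are as defined in the talk.
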
    14:50 - 15:15 Vahid Shahverdi (KTH, Sweden)

    Mapping the Shape of Learning: An Algebraic Perspective on Neural Networks

    In this talk, I will explore the hidden geometry of neural networks through the lens of algebraic geometry. Central to this perspective is the neuromanifold---the set of all functions that can be realized by a given architecture. I will show how key invariants of this space, such as its dimension and algebraic degree, provide meaningful insights into a network’s expressivity and sample complexity. Singularities within the neuromanifold, often corresponding to simpler subnetworks, arise naturally and tend to influence the trajectories of training dynamics, contributing to the network’s implicit bias. To conclude, I will discuss how studying the fibers of the parameterization map reveals structural symmetries and sheds light on issues of identifiability. Together, these ideas offer a geometric foundation for understanding the behavior and limitations of neural models.
    15:15 - 15:40 Mariia Seleznova (LMU Munich, Germany)

    Neural Tangent Kernel Alignment as a Lens on Trained Neural Networks

    The Neural Tangent Kernel (NTK) has become central to the theoretical analysis of deep learning, particularly in the infinite-width limit where training dynamics are linearized. In this regime, the NTK remains fixed and label-agnostic, capturing only the input space geometry rather than task-specific structure. In contrast, practical neural networks exhibit NTK alignment, where the NTK evolves during training and aligns its dominant eigenfunctions with task-relevant directions. This phenomenon is closely linked to feature learning and generalization, suggesting a deeper role in representation learning. This talk presents two studies on the implications of NTK alignment. First, we examine its connection to Neural Collapse (NC)—a geometric structure observed in the last-layer features of classifiers near the end of training—and show that NC emerges naturally from perfect NTK alignment under suitable conditions. Second, we explore how the low-rank gradient structure induced by NTK alignment enables spectral methods like PCA to be effective even in the high-dimensional setting of modern neural networks. Building on this, we introduce GradPCA, a gradient-based method for out-of-distribution (OOD) detection, which achieves robust performance across image benchmarks. Together, these findings highlight NTK alignment as a fundamental aspect of deep learning theory and practice.
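
    For orientation, the neural tangent kernel of a network f_{\theta} is

        \Theta_{\theta}(x, x') = \big\langle \nabla_{\theta} f_{\theta}(x),\, \nabla_{\theta} f_{\theta}(x') \big\rangle,

    and its alignment with a task is commonly quantified by the kernel-target alignment \langle \mathbf{K}, yy^{\top} \rangle_F / (\|\mathbf{K}\|_F\, \|yy^{\top}\|_F) between the NTK Gram matrix \mathbf{K} on the training data and the labels y. This is a generic definition for orientation; the specific alignment measures and conditions of the two studies are those of the speaker.
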
    15:40 - 16:10 Coffee Break
    16:10 - 17:00 Jürgen Jost (MPI for Mathematics in the Sciences, Germany)

    Geometric and statistical methods of data analysis. In memoriam Sayan Mukherjee

    tba
    17:00 - 17:25 Marzieh Eidi (MPI for Mathematics in the Sciences/ScaDS AI Institute, Germany)

    Geometric learning in complex networks

    tba
    17:25 - 17:50 Sebastian Kassing (TU Berlin, Germany)

    On the effect of acceleration and regularization in machine learning

    tba
    Thursday, Sept 25, 2025
    09:00 - 09:50 Markos Katsoulakis (University of Massachusetts Amherst, USA)

    Hamilton-Jacobi Equations, Mean-Field Games, and Uncertainty Quantification for Robust Machine Learning

    Hamilton-Jacobi (HJ) equations and Mean-Field Games (MFGs) provide a natural mathematical language that unites ideas from stochastic control, optimal transport, and information theory for analyzing, designing, and improving the robustness of many modern machine learning (ML) models. We show how fundamental classes of generative models, including continuous-time normalizing flows and score-based diffusion models, emerge intrinsically from MFG formulations under different particle dynamics, cost functionals, information-theoretic divergences, and probability metrics, with analogies and connections to Wasserstein gradient-flows. The forward-backward PDE structure of MFGs offers both analytical insights and informs the development of faster, data-efficient and robust algorithms. In particular, the regularity theory of HJ equations, combined with model-uncertainty quantification, provides provable performance and robustness guarantees for generative models and complex neural architectures such as transformers. Our theoretical analysis is complemented by extensive numerical validations and applications, with examples from applied mathematics and widely used ML benchmarks.
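
    One common form of the mean-field game system alluded to above couples a backward Hamilton-Jacobi-Bellman equation for a value function u with a forward Fokker-Planck equation for the population density m,

        -\partial_t u - \nu \Delta u + H(x, \nabla u) = f(x, m), \qquad \partial_t m - \nu \Delta m - \mathrm{div}\big(m\,\nabla_p H(x, \nabla u)\big) = 0,

    with terminal data for u and initial data for m. This generic formulation is given only for orientation; the particular dynamics, cost functionals and divergences used in the talk are those of the speaker.
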
    09:50 - 10:15 Pavel Gurikov (Hamburg University of Technology, Germany)

    Physics-Informed Machine Learning for Sustainable Process Design: Predicting Solubility in Green Solvents

    We present a domain-aware machine learning framework for predicting solubility in supercritical CO₂, a green solvent used in sustainable chemical processing. Our approach combines robust regression for outlier detection with supervised models (CatBoost, graph neural networks) informed by thermodynamic descriptors. This integration of physics-based features and data curation significantly improves model accuracy and generalizability. The methodology demonstrates how physics-informed ML and systematic data validation can support reliable modeling in scientific domains with noisy or inconsistent data.
    10:15 - 10:40 Sebastian Götschel (Hamburg University of Technology, Germany)

    Hard-constraining Boundary Conditions for Physics-Informed Neural Operators

    Machine learning-based techniques, such as physics-informed neural networks and physics-informed neural operators, are increasingly effective at solving complex systems of partial differential equations. Boundary conditions in these models can be enforced weakly by penalizing deviations in the loss function or strongly by training a solution structure that inherently matches the prescribed values and derivatives. While the former approach is straightforward to implement, the latter can significantly enhance accuracy and reduce training times. However, previous approaches to strongly enforcing Neumann or Robin boundary conditions require a domain with a fully C1 boundary and, as we demonstrate, can lead to instability if those boundary conditions are posed on a segment of the boundary that is piecewise C1 but only C0 globally. We introduce a generalization of the approach by Sukumar & Srivastava (https://doi.org/10.1016/j.cma.2021.114333) and a new approach based on orthogonal projections that overcome this limitation. The performance of these new techniques is compared against weakly and semi-weakly enforced boundary conditions for the scalar Darcy flow equation and the stationary Navier-Stokes equations. This is joint work with Niklas Göschel and Daniel Ruprecht (Hamburg University of Technology, Germany).
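
    In its simplest (Dirichlet) form, strong enforcement of boundary conditions builds them into the solution structure rather than the loss: writing the prediction as

        u_{\theta}(x) = g(x) + \phi(x)\, N_{\theta}(x),

    where g extends the prescribed boundary values, \phi is an (approximate) distance function vanishing on the boundary, and N_{\theta} is the trainable network, guarantees u_{\theta} = g on the boundary by construction. This generic ansatz is shown only for orientation; the Neumann/Robin constructions, their generalization and the orthogonal-projection approach are as presented in the talk.
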
    10:40 - 11:10 Coffee Break
    11:10 - 11:35 Jan Gerken (Chalmers University of Technology, Sweden)

    Emergent Equivariance in Deep Ensembles

    tba
    11:35 - 12:00 Timm Faulwasser (Hamburg University of Technology, Germany)

    The Optimal Control Perspective on Deep Neural Networks – Early Exits, Insights, and Open Problems

    tba
    12:00 - 13:00 Lunch (Building N)
    13:00 - 13:50 Matus Telgarsky (New York University, USA)

    Mathematical and sociological questions in deep learning and large language models

    This talk will cover open problems in deep learning and large language models. They will be both mathematical (e.g., regarding the analysis of first-order methods and characterizing the power of chain-of-thought methods), and sociological (e.g., regarding career dilemmas facing junior researchers, difficulties teaching, and difficulties finding tractable research paths).
    13:50 - 14:15 Ahmed Abdeljawad (Radon Institute for Computational and Applied Mathematics, Austria)

    Approximation Theory of Shallow Neural Networks

    tba
    14:15 - 14:40 Jethro Warnett (University of Oxford, United Kingdom)

    Stein Variational Gradient Descent

    tba
    14:40 - 15:10 Coffee Break
    15:10 - 16:00 Kenji Fukumizu (Institute of Statistical Mathematics, Japan)

    Pairwise Optimal Transports and All-to-All Flow-Based Condition Transfer

    In this work, we propose a flow-based method for learning all-to-all transfer maps among conditional distributions that approximates pairwise optimal transport. The proposed method addresses the challenge of handling the case of continuous conditions, which often involve a large set of conditions with sparse empirical observations per condition. We introduce a novel cost function that enables simultaneous learning of optimal transports for all pairs of conditional distributions. Our method is supported by a theoretical guarantee that, in the limit, it converges to the pairwise optimal transports among infinite pairs of conditional distributions. The learned transport maps are subsequently used to couple data points in conditional flow matching. We demonstrate the effectiveness of this method on synthetic and benchmark datasets, as well as on chemical datasets in which continuous physical properties are defined as conditions.
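
    For reference, the standard conditional flow matching objective with straight-line interpolants, in which the learned transport maps above provide the coupling, reads

        \mathcal{L}(\theta) = \mathbb{E}_{t \sim U[0,1],\, (x_0, x_1) \sim \pi} \big\| v_{\theta}(x_t, t) - (x_1 - x_0) \big\|^2, \qquad x_t = (1 - t)\,x_0 + t\,x_1,

    where \pi is a coupling of source and target samples; in the proposed method this coupling is induced by the learned pairwise transport maps between conditional distributions. The notation here is generic and not the speaker's.
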
    16:00 - 16:25 Vitalii Konarovskyi (University of Hamburg, Germany)

    Stochastic Modified Flows, Mean-Field Limits and Dynamics of Stochastic Gradient Descent

    We introduce a new class of limiting dynamics for stochastic gradient descent in the small learning rate regime, called stochastic modified flows. These dynamics are described by stochastic differential equations driven by infinite-dimensional noise, offering two key improvements over the so-called stochastic modified equations: they feature regular diffusion coefficients and capture multi-point statistics. Additionally, we present distribution-dependent stochastic modified flows, which we prove describe the fluctuating limiting behaviour of stochastic gradient descent in the combined small learning rate and infinite-width scaling regime. This is joint work with Benjamin Gess, Rishabh Gvalani, and Sebastian Kassing.
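
    For comparison, the classical (first-order) stochastic modified equation refined by the stochastic modified flows above approximates SGD with learning rate \eta, \theta_{k+1} = \theta_k - \eta\,\nabla \ell(\theta_k, \xi_k), by the diffusion

        d\Theta_t = -\nabla L(\Theta_t)\, dt + \sqrt{\eta}\,\Sigma(\Theta_t)^{1/2}\, dW_t,

    where L is the expected loss and \Sigma the covariance of the gradient noise. This is the standard finite-dimensional formulation from the literature, given only for orientation; the infinite-dimensional-noise dynamics of the talk are as defined by the authors.
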
    16:25 - 16:50 Tim Jahn (TU Berlin, Germany)

    Learning Jump–Diffusion Dynamics from Irregularly-Sampled Data via Trajectory Generator Matching

    tba
    16:50 - 17:20 Coffee Break
    17:20 - 17:45 Gianluca Finocchio (University of Vienna, Austria)

    Model-Free Identification in Ill-Posed Regression

    tba
    17:45 - 18:10 Kainth Rishi Sonthalia (Boston College, USA)

    Generalization with Non-Standard Spectra

    tba

    Application for Participation and Presentation

    If you would like to participate in the conference, you are kindly invited to apply here, where you can also submit a title and an abstract (up to 2 pages, PDF) in case you would like to contribute a talk and/or a poster. The application deadline is May 4th, 2025. A notification email about the outcome of your application will be sent out in early June 2025. The registration fee for accepted participants is 60 Euros.

    We have a limited number of travel grants for student participants. If you are interested in applying for travel support, please indicate this in your application statement and provide the contact information of a reference person.

    For registration and submission issues, contact Hojat Naghavi: dsf-it(at)tuhh.de

    Administrative Contact

    Imke Christiane Bartscher and Daniela Gascon Bosqued: dsf-conference(at)tuhh.de

     

    Please be wary if a company directly contacts you regarding payment for accommodation in Hamburg: this is fraud/phishing.

    Conference Poster

    You can download the conference poster from here.