GROUP SEMINAR

Our research colloquium takes place every week on Thursday at 2:00 PM (German time). Each meeting features a talk on a research topic, given by a member of our group or by an external guest.

UPCOMING TALKS

Title: Variational integrators for multirate systems

Abstract: Efficiently solving multirate systems – systems whose dynamics evolve on different time scales – remains a challenging question for numerical integration schemes. For the slow motion a low resolution is sufficient, whereas the fast motion dictates a high resolution, which leads to inefficiently high computational costs.
Two strategies already known from the literature, multiple time grids and multiple discretization approaches for the split systems, are each embedded in the variational framework to derive integration schemes that inherit the underlying geometric structure of the continuous multirate system.
Variational integrators of mixed order are derived, where polynomials of different degrees are used to approximate the components acting on different time scales, together with quadrature rules of different orders to approximate the contributions of the split action. Their numerical properties are analysed and demonstrated numerically. Numerical investigations reveal that this approach achieves run-time savings while the accuracy stays nearly the same.
Using micro and macro time steps for the fast and slow motion, together with appropriate quadrature rules to approximate the split action, yields the so-called multirate variational integrators.
Some of the presented integrators are reformulated as modified trigonometric integrators, allowing the use of the modulated Fourier expansion to explain their excellent numerical performance when applied to highly oscillatory systems.
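Schematically, and with notation chosen here purely for illustration (not taken from the talk), the split discrete action combines a macro step \(\Delta T\) for the slow part with \(p\) micro steps \(\delta t = \Delta T / p\) for the fast part:

```latex
L = L^{\mathrm{slow}} + L^{\mathrm{fast}}, \qquad
S_d = \sum_{k=0}^{K-1} \left[ \Delta T \, L_d^{\mathrm{slow}}\big(q^s_k, q^s_{k+1}\big)
    + \sum_{m=0}^{p-1} \delta t \, L_d^{\mathrm{fast}}\big(q^f_{k,m}, q^f_{k,m+1}\big) \right],
\qquad \delta t = \frac{\Delta T}{p}.
```

Requiring stationarity of such a discrete action yields discrete Euler–Lagrange equations, and hence the multirate variational integrator.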
 

Title: Automated Construction of Runge–Kutta Methods Using Machine Learning Techniques

Abstract: The numerical solution of ordinary differential equations is a central tool in the mathematical modelling of dynamical systems. In numerous applications, particularly in classical mechanics, electrodynamics, and molecular dynamics, one encounters systems whose dynamics are governed by a Hamiltonian structure. Such systems possess characteristic geometric properties and conserved quantities, in particular conservation of total energy. Classical numerical integration methods are generally not designed to preserve these structural properties exactly. In long-term simulations this can lead to qualitatively wrong solutions, manifesting, for example, as systematic energy drift or as violation of physical invariants. Over the past decades, structure-preserving numerical methods have therefore been developed that explicitly account for the geometric properties of Hamiltonian systems, in particular symplectic Runge–Kutta methods and variational integrators. The construction of such methods has traditionally been analytical, requiring a laborious derivation of suitable Butcher coefficients subject to nonlinear algebraic constraints. This approach limits both the flexibility and the adaptability of the resulting integrators to specific problem classes.
More recently, data-driven approaches to the automated construction of numerical algorithms have been investigated. This allows algorithms themselves to be modelled as parametric objects and their parameters to be optimized with respect to suitable objective functions. First results show that numerical integrators can be generated in this way that exhibit improved accuracy or stability for specific classes of ordinary differential equations. A prominent example is the work of Guo [3]. There, a Runge–Kutta-type integrator is formulated as a parameterized one-step map in the form of a neural network, and the nodes are optimized in a data-driven fashion. The parameters are fitted using a stochastic gradient method, namely the Adam method. The focus lies on optimizing an accuracy criterion in the sense of a prescribed order of convergence. Classical Runge–Kutta methods of the desired order serve as a reference, and the loss function measures the error relative to this reference. Structure-preserving properties such as energy conservation, symplecticity, or symmetry conditions, however, are not enforced as separate optimization objectives in [3].
The present work builds on this approach but extends it substantially. In addition to a scaled accuracy criterion, further terms are incorporated into the objective function in order to account for physically or geometrically motivated structural properties. In particular, terms for the approximate preservation of conserved quantities and for symmetries such as rotational symmetry and time reversibility are investigated. Moreover, the extrapolation behaviour of the learned integrators is analysed, both outside the training range of the step sizes and on related systems. The focus is thus not solely on optimizing an order of convergence, but on the question of the extent to which data-driven explicit integrators can reproduce the qualitative long-term properties of structure-preserving methods on selected problem classes.
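As a small illustration of the nonlinear algebraic order conditions mentioned above (the tableau and checks are chosen here for exposition, not taken from the thesis), one can verify mechanically that Heun's method satisfies the Butcher order conditions up to order 2:

```python
import numpy as np

# Butcher tableau of Heun's method (the explicit trapezoidal rule).
# A learned Runge-Kutta-type integrator would have to satisfy (or
# approximately satisfy) these same algebraic constraints to reach order 2.
A = np.array([[0.0, 0.0],
              [1.0, 0.0]])
b = np.array([0.5, 0.5])
c = A.sum(axis=1)  # row-sum convention: c_i = sum_j a_ij

assert abs(b.sum() - 1.0) < 1e-15   # order 1: sum_i b_i = 1
assert abs(b @ c - 0.5) < 1e-15     # order 2: sum_i b_i c_i = 1/2
```

In a data-driven construction, such conditions (or structural properties like symmetry) can be softly enforced by adding their residuals as penalty terms to the training loss.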

Title: Decomposition methods for Mixed-Integer Optimal Control Problems with Dwell Time Constraints

Abstract: This project tackles the challenge of controlling dynamical systems with discrete-valued inputs under minimum dwell-time (MDT) constraints, a problem modeled as Mixed-Integer Optimal Control Problems (MIOCPs), which are computationally complex due to their combinatorial nature. To enable real-time solutions, it develops efficient decomposition-based methods: one approximates the continuous control via fast switching (CIA), and another separates the problem into sequence optimization (SO) and switching time optimization (STO). Three approaches are proposed: a Dynamic Programming-inspired method for improved approximations, Mixed-Integer Sequence Decomposition for optimal switching without exhaustive search, and Iterative Switching Time Optimization for near-optimal solutions at lower cost. Extensive benchmarks demonstrate these methods’ ability to reduce computation while respecting MDT constraints, advancing practical control for hybrid and multi-mode systems.

PREVIOUS TALKS

Title: Machine Learning Enhanced Step Size Selection: Improving Controllers with Supervised Learning

Abstract: Adaptive step size control is a key ingredient in modern ODE solvers, aiming to balance accuracy requirements with computational efficiency based on local error information. Standard methods achieve this with embedded error estimates and carefully designed update rules that work well across a wide range of problems, including stiff regimes, but can be conservative in specific settings.

In this talk, I introduce a supervised learning approach that keeps the classical workflow of error estimation, step acceptance, and step size update, but replaces the hand-crafted update formula by a learned rule. The model is trained on examples of near-optimal step size choices derived from embedded Runge–Kutta error estimates and past solver information. After a brief review of the principles behind step size selection, I describe the learning formulation and the data generation pipeline. The talk concludes with numerical experiments comparing the learned controller with standard adaptive strategies.
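For context, the hand-crafted update rule that the learned controller replaces is typically a variant of the classical elementary step size controller; a minimal sketch follows (function name, safety factor, and clipping bounds are illustrative defaults, not the talk's implementation):

```python
def propose_step(h, err, tol, order, safety=0.9, fac_min=0.2, fac_max=5.0):
    """Classical elementary (I-)controller: rescale the step so the
    predicted local error matches the tolerance. `order` is the order
    of the embedded error estimate."""
    factor = safety * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    # Clip the change factor to avoid erratic step size jumps.
    return h * min(fac_max, max(fac_min, factor))

# When the estimated error equals the tolerance, the step is kept
# almost unchanged (reduced only by the safety factor):
assert abs(propose_step(0.1, 1e-6, 1e-6, order=4) - 0.09) < 1e-12
```

A learned controller in the spirit of the talk would replace this closed-form `factor` with the output of a trained model, while keeping error estimation and step acceptance unchanged.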

Title: Multi-Objective Algorithm Configuration with Applications to Sparse Neural Networks

Abstract: Modern AI systems involve many interacting design choices that together define large, heterogeneous configuration spaces. Furthermore, they must balance conflicting objectives such as performance and resource usage. Yet, most automated algorithm configuration methods still focus on optimising for a single objective. In this talk, I present a multi-objective extension of the SMAC framework that directly targets Pareto-optimal configurations using predicted hypervolume improvement and a novel intensification strategy. I demonstrate the effectiveness of our approach across multiple AI domains and illustrate its practical value through a case study on sparse neural network training. The results show that significant efficiency gains are possible with only minor accuracy trade-offs, while also uncovering complex interactions between sparsity and other hyperparameters. This work highlights multi-objective configuration as an important tool for building and understanding efficient and trustworthy AI systems.

 

Boris Wembe, Paderborn University, Thursday 08.01.2026 at 2:00 pm in room TP.21.1.20

Title: Cayley-Based Methods for Quantum Optimal Control Systems

Abstract: This talk presents a new family of structure-preserving integrators based on the Cayley commutator-free (CF–Cayley) formulation. The central idea is to replace expensive matrix exponentials by Cayley transforms, simple linear solves that automatically evolve on the unitary manifold, while avoiding nested commutators. This yields algorithms that are fast, symmetric, and exactly norm-preserving, making them ideally suited for repeated forward and backward propagations in optimal control algorithms such as Krotov’s method.
Beyond quantum control, this approach provides a general framework for the geometric integration of differential equations on Lie groups, bridging ideas from applied mathematics, physics, and
numerical analysis. It demonstrates how respecting the underlying geometry of a system, rather than merely discretizing its equations, can lead to both deeper insight and practical computational
gains.
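As a minimal illustration of the key property (a sketch, not the talk's algorithm): for a skew-Hermitian generator, the Cayley transform is computed by a single linear solve and is exactly unitary in exact arithmetic:

```python
import numpy as np

def cayley(A):
    """Cayley transform cay(A) = (I - A/2)^{-1} (I + A/2),
    computed via one linear solve instead of a matrix exponential."""
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - A / 2, I + A / 2)

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = M - M.conj().T                  # skew-Hermitian generator: A* = -A
U = cayley(A)

# U is unitary up to round-off, so norms are preserved under propagation.
unitarity_defect = np.linalg.norm(U.conj().T @ U - np.eye(4))
assert unitarity_defect < 1e-12
```

The same mechanism underlies the norm preservation of the CF–Cayley schemes: every sub-step is a Cayley transform of a skew-Hermitian matrix, so the composition stays on the unitary manifold.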

Felix Geisler, Paderborn University, Thursday 11.12.2025 at 2:00 pm in room TP.21.1.20

Title: High-order Cayley based methods for quantum optimal control problems

Abstract: Differential equations posed on quadratic matrix Lie groups arise in the context of classical mechanics and quantum dynamical systems. Lie group numerical integrators preserve the constants of motion defining the Lie group. Thus, they respect important physical laws of the dynamical system, such as unitarity and energy conservation in the context of quantum dynamical systems, for instance.

This thesis concerns the construction, implementation, and numerical analysis of a novel Cayley-based, commutator-free multistep scheme for general differential equations evolving on quadratic Lie groups, with an emphasis on applications in quantum optimal control. The method builds on Cayley commutator-free schemes recently introduced, but, in contrast, is not restricted to linear equations; it can, for example, handle dynamics governed by nonlinear Schrödinger equations.

We conduct numerical experiments to compare accuracy and computational performance with established Lie group integrators. We also discuss applications to quantum optimal control problems whose dynamics are governed by the nonlinear Schrödinger (Gross–Pitaevskii) equation, highlighting a design advantage afforded by the new method in this setting.

Title: Variational integrators for stochastic Hamiltonian systems

Abstract: We develop variational integrators for stochastic Hamiltonian systems. Stochastic Hamilton’s equations are the stochastic extension of Hamilton’s equations, which can be derived from a stochastic version of the phase space principle with an additional stochastic term expressed as a Stratonovich integral of stochastic Hamiltonians. Variational integrators can be constructed from the discretization of the stochastic variational principle. We construct such integrators for systems whose configuration space is a Lie group, and also for systems with advected parameters. We focus on the midpoint method, which provides the most basic strongly convergent integrator, and explore the geometry-preserving properties of the resulting integrator. Strong convergence is proven for the special case of stochastic rigid body equations. Various applications to both finite-dimensional and infinite-dimensional systems are also presented.
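As a toy illustration of the midpoint method's structure preservation (the example is a standard test problem, not taken from the talk): for the Kubo oscillator, a linear stochastic Hamiltonian system in Stratonovich form, the implicit midpoint rule conserves the quadratic Hamiltonian exactly, up to the fixed-point solver tolerance:

```python
import numpy as np

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])          # canonical symplectic matrix

def midpoint_step(z, dt, dW, iters=50):
    """One implicit midpoint step for the Kubo oscillator (Stratonovich):
    dq = p (dt + dW), dp = -q (dt + dW), i.e. dz = J z (dt + dW).
    Solves z1 = z0 + (dt + dW) * J (z0 + z1)/2 by fixed-point iteration."""
    a = dt + dW
    z1 = z.copy()
    for _ in range(iters):
        z1 = z + a * (J @ ((z + z1) / 2))
    return z1

rng = np.random.default_rng(0)
z = np.array([1.0, 0.0])
H0 = 0.5 * (z @ z)                   # Hamiltonian H = (q^2 + p^2)/2
dt = 0.01
for _ in range(1000):
    z = midpoint_step(z, dt, rng.normal(0.0, np.sqrt(dt)))

energy_drift = abs(0.5 * (z @ z) - H0)
assert energy_drift < 1e-10          # quadratic invariant preserved
```

The conservation is structural, not accidental: the midpoint increment is orthogonal to the midpoint state because J is skew-symmetric, so the quadratic invariant cannot drift regardless of the noise realization.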

Title: Butcher Series for the Convergence Analysis of Iterative Numerical Integrators: SDC and Parareal 

Abstract: We present a Butcher series framework for analyzing the convergence of the Spectral Deferred Correction (SDC) and Parareal methods. SDC iteratively improves a low order approximation to achieve high order accuracy, while Parareal introduces parallelism in time by combining coarse and fine propagators. By expressing both algorithms as Runge-Kutta methods whose Butcher tableaus grow with each iteration, we apply classical Butcher series techniques to study their convergence. For SDC, we obtain convergence results under weaker assumptions, explain and exploit the order jump phenomenon, where the observed order of accuracy exceeds theoretical predictions, and design correctors that consistently produce it. For Parareal, we recover and refine known convergence results, offering problem agnostic insights into the structure of iterative time integration algorithms. This talk is based on joint work with Joscha Fregin (TUHH), Ausra Pogozelskyte, Daniel Ruprecht (TUHH), Gilles Vilmart (Univ. of Geneva).
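To make the iterative structure concrete, here is a minimal Parareal sketch for a scalar test equation (the propagator choices and parameters are illustrative assumptions, not those of the talk):

```python
import numpy as np

# Parareal for y' = lam * y on [0, T], split into N coarse intervals.
# Coarse propagator: one explicit Euler step; fine propagator: many
# Euler substeps (a stand-in for an accurate serial solver).
lam, T, N = -1.0, 1.0, 10
dt = T / N

def coarse(y, dt):
    return y + dt * lam * y

def fine(y, dt, m=100):
    for _ in range(m):
        y = y + (dt / m) * lam * y
    return y

# Initial guess: a single serial coarse sweep.
U = np.zeros(N + 1)
U[0] = 1.0
for n in range(N):
    U[n + 1] = coarse(U[n], dt)

# Parareal iterations: fine solves are independent per interval
# (parallelizable), followed by a serial coarse correction sweep.
for k in range(5):
    F = np.array([fine(U[n], dt) for n in range(N)])
    Unew = np.zeros_like(U)
    Unew[0] = U[0]
    for n in range(N):
        Unew[n + 1] = coarse(Unew[n], dt) + F[n] - coarse(U[n], dt)
    U = Unew

# After a few iterations the Parareal solution matches the serial fine run.
ref = 1.0
for n in range(N):
    ref = fine(ref, dt)
assert abs(U[-1] - ref) < 1e-8
```

Unrolling these iterations into a single one-step map is exactly what produces the growing Butcher tableaus analysed in the talk.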
 

Title: Structure-preserving Machine Learning Paradigms for (Port)-Hamiltonian Systems

Abstract: This talk presents a series of frameworks for structure-preserving learning of Hamiltonian systems. The central theme is how to design learning algorithms that respect the intrinsic geometric and physical structures of dynamical systems, rather than treating them as black boxes. We begin with the linear setting, introducing a complete learning scheme for single-input/single-output linear port-Hamiltonian systems. This framework reveals deep connections between classical control-theoretic notions – such as controllability and observability – and machine learning concepts like structure preservation and expressivity. Moreover, we characterize the set of uniquely identifiable input–output systems as a smooth manifold with global coordinates, enabling practical parameterizations and efficient learning.
We then turn to nonlinear Hamiltonian systems on Euclidean spaces, where we develop a structure-preserving kernel ridge regression framework. This method leverages a novel differential reproducing property and Representer Theorem to recover Hamiltonian functions from noisy vector field data, providing strong theoretical guarantees alongside superior empirical performance.
Finally, we generalize the approach to Hamiltonian systems on Poisson manifolds, encompassing degenerate dynamics with intrinsic invariants (Casimir functions). The resulting estimator introduces geometric regularization to resolve the non-identifiability arising from the degeneracy of the Poisson structure and achieves provable convergence in this most general setting.
Taken together, these contributions establish a comprehensive theory and methodology for learning Hamiltonian dynamics in a structure-preserving manner, bridging ideas from control theory, kernel methods, and differential geometry.