Oberseminar Angewandte Mathematik

17.07.2024, 11:00 ct, A 3 301: Dr. Raphael Gerlach

In this talk, a framework for the global numerical analysis of infinite-dimensional dynamical systems is presented. By utilizing embedding techniques, a dynamically equivalent finite-dimensional system, the so-called core dynamical system (CDS), is constructed. This system is then used for the numerical approximation of corresponding embedded invariant sets such as the embedded attractor or embedded unstable manifolds. Here, the focus lies on adapting set-oriented numerical tools that generate coverings of the set of interest to the CDS. As part of this, the subdivision scheme is also extended to parameter-dependent systems, which allows one to efficiently track a parameter-dependent attractor.

As the CDS heavily relies on the choice of an appropriate observation map, in the second part of this talk suitable numerical realizations of the CDS for delay differential equations and partial differential equations will be presented. Moreover, for a subsequent geometric analysis, a manifold learning technique called diffusion maps is considered, which reveals the intrinsic geometry of the embedded invariant set. In this context, a set-oriented landmark selection scheme and an intrinsic dimension estimator are developed.
Finally, the numerical methods are applied to some well-known (infinite-dimensional) dynamical systems, such as the Mackey-Glass delay differential equation, the Kuramoto-Sivashinsky equation and plane Poiseuille flow.
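
The CDS construction itself is problem specific, but the set-oriented subdivision scheme referred to above can be sketched for any finite-dimensional map. The following minimal Python sketch runs a subdivision/selection loop for the Hénon map, which merely stands in for a CDS here; the domain, the number of test points and the number of subdivision steps are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of a set-oriented subdivision scheme for covering an attractor:
# each step halves every box along one coordinate (subdivision) and then keeps only
# boxes hit by the image of random test points of the current collection (selection).
# The Henon map stands in for the core dynamical system (CDS).
import numpy as np

def henon(x, a=1.4, b=0.3):
    return np.array([1.0 - a * x[0]**2 + b * x[1], x[0]])

def subdivision(f, lo, hi, steps=10, samples=100, rng=np.random.default_rng(0)):
    boxes = [(np.asarray(lo, float), np.asarray(hi, float))]   # axis-aligned boxes (lower, upper)
    for k in range(steps):
        dim = k % 2                                            # alternate the halved coordinate
        refined = []
        for (l, h) in boxes:
            mid = 0.5 * (l[dim] + h[dim])
            h1, l2 = h.copy(), l.copy()
            h1[dim], l2[dim] = mid, mid
            refined += [(l, h1), (l2, h)]
        # selection: map random test points of all boxes forward and keep boxes that are hit
        pts = np.vstack([l + rng.random((samples, l.size)) * (h - l) for (l, h) in refined])
        images = np.array([f(p) for p in pts])
        boxes = [(l, h) for (l, h) in refined
                 if np.any(np.all((images >= l) & (images <= h), axis=1))]
    return boxes

covering = subdivision(henon, lo=[-2.0, -2.0], hi=[2.0, 2.0])
print(len(covering), "boxes in the covering of the attractor")
```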

11.07.2024, 14:00 ct, TP21.1.26: Dr. Bennet Gebken

In smooth optimization, the speed of convergence of a solution method is typically derived using Taylor expansions of the objective function or its gradient. In nonsmooth optimization, this strategy cannot be applied anymore, as there is no suitable generalization of a Taylor expansion for a nonsmooth function. As a result, although many different solution methods for nonsmooth optimization have been proposed, the speeds of convergence of these methods are rarely discussed.
In this talk, I will present a novel approach for analyzing the convergence behavior in nonsmooth optimization based on the epsilon-subdifferential. More precisely, given a converging sequence and upper bounds for the distance of the epsilon-subdifferential to zero for vanishing epsilon, we can derive a sharp estimate for the distance of said sequence to the minimizer. The assumptions we require for this are polynomial growth around the minimizer and, depending on the order of growth, a higher-order generalization of semismoothness. After giving an overview of the assumptions and the theoretical results, I will show how these results lead to a better understanding of the behavior of gradient sampling methods.
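
To illustrate where the epsilon-subdifferential enters in gradient sampling, the following minimal sketch performs one gradient sampling step: gradients are sampled in an epsilon-ball around the iterate, the minimum-norm element of their convex hull (an approximation of the shortest element of the epsilon-subdifferential) is computed with a simple Frank-Wolfe loop, and its negative is used with a backtracking line search. The objective, the solver and all parameters are illustrative assumptions, not material from the talk.

```python
# One step of a basic gradient sampling scheme on f(x) = |x1| + x2^2 (illustrative).
import numpy as np

def f(x):
    return abs(x[0]) + x[1] ** 2

def grad_f(x):                                   # gradient wherever f is differentiable
    return np.array([np.sign(x[0]), 2.0 * x[1]])

def min_norm_element(G, iters=500):
    """Frank-Wolfe for min_{lambda in simplex} ||G.T @ lambda||^2."""
    m = G.shape[0]
    lam = np.full(m, 1.0 / m)
    for k in range(iters):
        g = 2.0 * G @ (G.T @ lam)                # gradient of the quadratic w.r.t. lambda
        i = int(np.argmin(g))                    # best simplex vertex
        step = 2.0 / (k + 2.0)
        lam = (1.0 - step) * lam + step * np.eye(m)[i]
    return G.T @ lam

def gradient_sampling_step(x, eps=0.1, m=20, rng=np.random.default_rng(1)):
    samples = x + eps * (2.0 * rng.random((m, x.size)) - 1.0)
    G = np.vstack([grad_f(x)] + [grad_f(s) for s in samples])   # sampled gradients
    v = min_norm_element(G)                      # approx. shortest eps-subgradient
    d = -v                                       # descent direction
    t = 1.0
    while f(x + t * d) > f(x) - 0.5 * t * np.dot(v, v) and t > 1e-10:
        t *= 0.5                                 # Armijo-type backtracking
    return x + t * d

x = np.array([1.0, 1.0])
for _ in range(30):
    x = gradient_sampling_step(x)
print("approximate minimizer:", x)
```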

13.06.2024, 14:00 st, TP21.1.26: Dr. Christian Offen

I will show how to use Gaussian process regression to learn variational dynamical systems from data. From the statistical framework, uncertainty quantification for observables such as the Euler-Lagrange operator and Hamiltonians can be derived. The regression method can be shown to converge, overcoming the technical difficulty that variational descriptions are highly non-unique.
Numerical examples include variational ODEs and PDEs.
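
As a small illustration of the statistical side, the sketch below performs plain Gaussian process regression with an RBF kernel and evaluates the posterior mean and pointwise variance of a scalar observable. The variational structure (Lagrangians, Euler-Lagrange operators) from the talk is not modelled here, and data, kernel and hyperparameters are made-up assumptions.

```python
# Standard GP regression: posterior mean and variance under an RBF kernel.
import numpy as np

def rbf(X, Y, ell=0.5, sigma=1.0):
    d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return sigma**2 * np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (30, 1))                          # training inputs
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(30)     # noisy observations of an observable

K = rbf(X, X) + 1e-2 * np.eye(len(X))                    # kernel matrix + noise
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.linspace(-3, 3, 200)[:, None]                    # test inputs
Ks = rbf(X, Xs)
mean = Ks.T @ alpha                                      # posterior mean
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(Xs, Xs)) - np.sum(v**2, axis=0)        # posterior variance (uncertainty)
print("max posterior std:", float(np.sqrt(var.max())))
```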

06.06.2024, 15:00 ct, TP21.1.26: Benedikt Brück (Part 2)

In this talk, periodic solutions of a Brusselator PDE are investigated. A comparison between different spatial resolutions is carried out in order to assess whether a higher resolution improves the accuracy and thus allows the path continuation to proceed further. In addition, I will discuss adaptations of the algorithm that improved its efficiency. The results are supported by illustrative animations and figures.
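
For orientation, the sketch below integrates a standard one-dimensional Brusselator reaction-diffusion system, u_t = Du*u_xx + A - (B+1)*u + u^2*v and v_t = Dv*v_xx + B*u - u^2*v, with a method-of-lines discretization at two spatial resolutions and compares the results. Parameters, periodic boundary conditions and the comparison criterion are illustrative assumptions and not necessarily those used in the talk; SciPy is assumed to be available.

```python
# Method-of-lines integration of a 1D Brusselator system at two resolutions (illustrative).
import numpy as np
from scipy.integrate import solve_ivp

A, B, Du, Dv, Lx = 1.0, 3.0, 1.0, 0.1, 20.0

def rhs(t, y, n):
    u, v = y[:n], y[n:]
    dx = Lx / n
    lap = lambda w: (np.roll(w, 1) - 2.0 * w + np.roll(w, -1)) / dx**2   # periodic BCs
    du = Du * lap(u) + A - (B + 1.0) * u + u**2 * v
    dv = Dv * lap(v) + B * u - u**2 * v
    return np.concatenate([du, dv])

def simulate(n, tmax=20.0):
    x = np.linspace(0.0, Lx, n, endpoint=False)
    u0 = A + 0.1 * np.cos(2.0 * np.pi * x / Lx)           # perturbed homogeneous state
    v0 = B / A + 0.1 * np.sin(2.0 * np.pi * x / Lx)
    sol = solve_ivp(rhs, (0.0, tmax), np.concatenate([u0, v0]), args=(n,),
                    method="BDF", rtol=1e-6, atol=1e-8)
    return sol.y[:n, -1]                                   # final u-profile

coarse, fine = simulate(32), simulate(128)
print("difference of spatial means (coarse vs. fine):", abs(coarse.mean() - fine.mean()))
```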

16.05.2024, 16:00 ct, TP21.1.26: Benedikt Brück (Part 1)

In this talk, periodic solutions of a Brusselator PDE are investigated. A comparison between different spatial resolutions is carried out in order to assess whether a higher resolution improves the accuracy and thus allows the path continuation to proceed further. In addition, I will discuss adaptations of the algorithm that improved its efficiency. The results are supported by illustrative animations and figures.

Past Talks

09.11.2023, 16:00 ct, TP21.1.26: Dr. Sören von der Gracht

Many real-world interconnected systems are governed by non-pairwise interactions between agents, frequently referred to as higher-order interactions. The resulting higher-order interaction structure can be encoded by means of a hypergraph or hypernetwork. This talk will focus on the dynamics of such hypernetworks. We define a class of maps that respects the higher-order interaction structure, so-called admissible maps, and investigate how robust patterns of synchrony can be classified. Interestingly, these are only defined by higher-degree polynomial admissible maps. This means that, unlike in classical networks, cluster synchronization on hypernetworks is a higher-order, i.e., nonlinear effect. This feature has further implications for the dynamics. In particular, it causes the phenomenon of "reluctant synchrony breaking" on hypernetworks, which occurs when bifurcating solutions lie close to a non-robust synchrony space.
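
To give a concrete feel for dynamics with non-pairwise interactions, the following toy simulation lets every hyperedge contribute a coupling term that depends jointly on all of its nodes. The vertex dynamics, the coupling terms and the hyperedges are made-up examples; they do not reproduce the formal class of admissible maps classified in the talk.

```python
# Toy ODE on a small hypernetwork: each hyperedge couples all of its nodes jointly.
import numpy as np
from scipy.integrate import solve_ivp

hyperedges = [(0, 1, 2), (2, 3, 4), (0, 3)]          # mixed 3-node and 2-node interactions

def rhs(t, x):
    dx = -x + np.tanh(x)                             # local node dynamics
    for e in hyperedges:
        for i in e:
            others = [j for j in e if j != i]
            # product-type higher-order coupling; vanishes on full synchrony
            dx[i] += 0.5 * (np.prod(x[others]) - x[i] ** (len(e) - 1))
    return dx

x0 = np.array([0.9, 0.9, 0.5, -0.3, -0.3])           # arbitrary initial state
sol = solve_ivp(rhs, (0.0, 50.0), x0, rtol=1e-8, atol=1e-10)
print("final state:", np.round(sol.y[:, -1], 4))
```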

23.11.2023, 16:00 ct, TP21.1.26: Konstantin Sonntag

This talk is dedicated to a common descent method designed for nonsmooth multiobjective optimization problems (MOPs) with objective functions defined on a general Hilbert space that are only locally Lipschitz continuous. The only strategy for handling nonsmooth MOPs in infinite dimensions besides the presented method relies on scalarization techniques, which are not suitable for MOPs with nonconvex objective functions or for MOPs with more than two objective functions. The class of nonsmooth MOPs on infinite-dimensional Hilbert spaces is particularly important since it allows the formulation of PDE-constrained MOPs.

For the analysis of the presented method, we first introduce optimality conditions suitable for nonsmooth MOPs. We generalize the so-called Goldstein epsilon-subdifferential to the multiobjective setting in Hilbert spaces and describe its main properties.

Then, we introduce the mentioned descent method. The method uses an approximation of the Goldstein epsilon-subdifferential to compute a common descent direction that provides sufficient descent for all objective functions. In the main result, we show that, under reasonable assumptions, the method produces sequences that possess Pareto critical accumulation points.

Finally, we present the behaviour of the common descent method for a (PDE-constrained) multiobjective obstacle problem in one and two spatial dimensions. We show that the method is capable of producing several different optimal solutions and discuss the behaviour of the approximated subdifferential.
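
A finite-dimensional toy version of the common descent step can be sketched as follows: (sampled) gradients of both objectives near the current point are collected, the minimum-norm element of their convex hull is computed with a generic QP solver, and its negative serves as a descent direction for both objectives. The objectives, sampling radius and solver are illustrative assumptions; the talk's Hilbert-space setting and the precise Goldstein epsilon-subdifferential machinery are not reproduced here.

```python
# Common descent direction for two nonsmooth objectives via a minimum-norm problem (illustrative).
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: np.abs(x[0] - 1.0) + x[1] ** 2
f2 = lambda x: np.abs(x[0] + 1.0) + (x[1] - 1.0) ** 2
g1 = lambda x: np.array([np.sign(x[0] - 1.0), 2.0 * x[1]])           # gradients where defined
g2 = lambda x: np.array([np.sign(x[0] + 1.0), 2.0 * (x[1] - 1.0)])

def common_descent_direction(x, eps=0.1, m=10, rng=np.random.default_rng(2)):
    pts = x + eps * (2.0 * rng.random((m, 2)) - 1.0)                 # points in an eps-ball
    G = np.vstack([g(p) for p in pts for g in (g1, g2)])             # sampled (sub)gradients
    k = G.shape[0]
    # minimum-norm element of the convex hull: min ||G.T @ lam||^2 over the simplex
    res = minimize(lambda lam: np.dot(G.T @ lam, G.T @ lam), np.full(k, 1.0 / k),
                   method="SLSQP", bounds=[(0.0, 1.0)] * k,
                   constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}])
    return -(G.T @ res.x)                                            # common descent direction

x = np.array([0.5, 0.2])
d = common_descent_direction(x)
t = 1e-2
print("f1 decrease:", f1(x) - f1(x + t * d), " f2 decrease:", f2(x) - f2(x + t * d))
```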

30.11.2023, 16:00 ct, O 1.258: Dr. Natasa Conrad

Online social media are nowadays an integral part of people's everyday life and can influence our behaviour and opinions. Despite recent advances, the changing role of traditional media and the emerging role of "influencers" are still not well understood, and the implications of their strategies in the attention economy even less so. In this talk, we will propose a novel agent-based model (ABM) that aims to describe how individuals (agents) change their opinions (states) under the impact of media and influencers. We will show the rich behavior of this ABM in different regimes and how different opinion formations can emerge, e.g., fragmentation. In the limit of an infinite number of agents, we will derive a corresponding mean-field model given by a PDE. Based on the mean-field model, we will show how strategies of influencers can impact the overall opinion distribution and that optimal control strategies allow other influencers (or media) to counteract such attempts and prevent further fragmentation of the opinion landscape.
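
As a toy illustration in the spirit of such models (not the model presented in the talk), the following sketch evolves scalar opinions that are attracted to nearby agents (bounded confidence) and to the closest of two "influencers", whose positions in turn drift toward the mean opinion of their followers. All rules and parameters are made-up assumptions.

```python
# Toy opinion ABM with bounded-confidence interactions and two adaptive influencers.
import numpy as np

rng = np.random.default_rng(0)
n, n_inf, steps, dt = 200, 2, 400, 0.05
x = rng.uniform(-1.0, 1.0, n)                          # agent opinions
z = np.array([-0.5, 0.5])                              # influencer positions

for _ in range(steps):
    diff = x[None, :] - x[:, None]                     # pairwise opinion differences
    w = (np.abs(diff) < 0.3).astype(float)             # bounded-confidence weights
    agent_force = (w * diff).sum(axis=1) / np.maximum(w.sum(axis=1), 1.0)
    nearest = np.argmin(np.abs(x[:, None] - z[None, :]), axis=1)
    inf_force = z[nearest] - x                         # attraction to the closest influencer
    x += dt * (agent_force + 2.0 * inf_force) + np.sqrt(dt) * 0.05 * rng.standard_normal(n)
    for k in range(n_inf):                             # influencers follow their followers
        if np.any(nearest == k):
            z[k] += dt * (x[nearest == k].mean() - z[k])

print("final influencer positions (cluster centres):", np.round(np.sort(z), 3))
```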


This talk is also announced in the Kolloquium Angewandte Mathematik.

14.12.2023, 16:00 ct, TP21.1.26: Dr. Bennet Gebken

At first glance, smooth multiobjective optimization (MOO) and nonsmooth single-objective optimization (NSO) are two distinct subclasses of general optimization. But upon closer analysis, it turns out that there are several parallels. When considering first-order information, in both areas there is not just one gradient but multiple gradients that have to be considered simultaneously: in MOO, these are the gradients of the different objective functions, and in NSO, these are all the subgradients from the Clarke subdifferential. As such, there are strong structural similarities when considering results like optimality conditions and descent directions.
In addition to theoretical results, there are also applications where MOO and NSO naturally meet: In many practical problems, a weighted, nonsmooth regularization term is added to a smooth objective function to enforce additional properties of the solution. For example, a sparse minimizer of a function can be found by adding a weighted L1-norm to the function. By varying the weight (also known as the regularization parameter), minimizers with varying degrees of regularity can be computed. Traditionally, these problems are treated via NSO. But since a regularized objective function can be interpreted as a simple weighted sum scalarization, regularization problems may also be treated via MOO.
In this talk, I will present several of these similarities and show how they can be used to obtain new insights and results.
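
The connection can be made concrete with a small sketch: minimizers of 0.5*||Ax - b||^2 + lambda*||x||_1 are computed for several weights lambda via ISTA (proximal gradient), tracing out solutions of the underlying bi-objective problem of data fidelity versus sparsity as the scalarization weight varies. Problem data, step size and weights are illustrative assumptions.

```python
# Regularization path of an L1-regularized least-squares problem via ISTA (illustrative).
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.5, -2.0, 1.0]                     # sparse ground truth
b = A @ x_true + 0.01 * rng.standard_normal(50)

def ista(lam, iters=2000):
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2                # 1/L with L = ||A||_2^2
    for _ in range(iters):
        x = x - step * (A.T @ (A @ x - b))                # gradient step on the smooth part
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft thresholding (prox of lam*||.||_1)
    return x

for lam in [0.01, 0.1, 1.0, 10.0]:                        # varying the scalarization weight
    x = ista(lam)
    print(f"lam={lam:5}: nonzeros={np.count_nonzero(np.round(x, 3)):2d}, "
          f"fidelity={0.5 * np.linalg.norm(A @ x - b) ** 2:.3f}")
```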

11.01.2024, 16:00 ct, O2.267: Jun.-Prof. Dr. Peter Kling

Population protocols and related models allow one to study the dynamics of distributed systems consisting of a vast number of simple and identical agents. The standard model assumes a complete network of $n$ agents modeled as simple finite state machines. Pairwise interactions between agents happen either adversarially or in a randomized way and cause the agents to update their respective state, depending on their own state and that of their interaction partner.

Despite their simplicity, population protocols can solve fundamental distributed problems like leader election, majority, or consensus problems. My talk will start with a general introduction to the topic and then dive into some of our recent and ongoing work on how to efficiently solve consensus-related problems in population protocols (and variants).
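
As a concrete example, the sketch below simulates the classical three-state "approximate majority" protocol (states A, B and undecided U) under a uniformly random pairwise scheduler until consensus is reached. This is a standard textbook protocol chosen for illustration, not necessarily one of the variants discussed in the talk.

```python
# Simulation of the 3-state approximate majority population protocol (illustrative).
import numpy as np

rng = np.random.default_rng(0)
n = 1000
A, B, U = 0, 1, 2
state = np.array([A] * 520 + [B] * 480)          # slight initial majority for A
rng.shuffle(state)

interactions = 0
while len(set(state)) > 1:                        # run until all agents agree
    i, j = rng.choice(n, size=2, replace=False)   # random initiator i and responder j
    si, sj = state[i], state[j]
    if {si, sj} == {A, B}:
        state[j] = U                              # conflicting opinions -> responder becomes undecided
    elif sj == U and si != U:
        state[j] = si                             # undecided responder copies the initiator
    interactions += 1

print("consensus on", "A" if state[0] == A else "B",
      "after", interactions, "interactions (about", round(interactions / n, 1), "per agent)")
```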

This talk is also announced in the Kolloquium Angewandte Mathematik.
