FB 6 Mathematik/Informatik

Institut für Mathematik


Winter semester 2018/19

09.01.2019 at 17:15 in room 69/125:

Dr. Felix Voigtländer (Katholische Universität Eichstätt-Ingolstadt)

Approximation Theoretic Properties of Deep ReLU Neural Networks

Studying the approximation-theoretic properties of neural networks with smooth activation functions is a classical topic. The networks used in practice, however, most often employ the non-smooth ReLU activation function. Despite the impressive recent performance of such networks on many classification tasks, a solid theoretical explanation of this success story is still missing.

In this talk, we will present recent results concerning the approximation-theoretic properties of deep ReLU neural networks which help to explain some of the characteristics of such networks; in particular, we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks, which is not the case for most smooth activation functions.
We emphasize, though, that these approximation-theoretic properties do not explain why simple algorithms like stochastic gradient descent work so well in practice, or why deep neural networks tend to generalize so well; we focus purely on the expressive power of such networks.
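To make the depth-separation phenomenon concrete, here is a small, purely illustrative Python sketch (not a construction from the talk): composing the piecewise linear "hat" function with itself k times yields the classical sawtooth with 2^k linear pieces, which a depth-k ReLU network realizes with only two ReLU units per layer, whereas a ReLU network with a single hidden layer of m units can produce at most m + 1 linear pieces and would therefore need at least 2^k - 1 units. The helper names hat and deep_sawtooth are made up for this illustration.

```python
import numpy as np

def hat(x):
    """Hat function: 2x on [0, 1/2], 2(1 - x) on [1/2, 1], written with two ReLUs."""
    relu = lambda t: np.maximum(t, 0.0)
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5)

def deep_sawtooth(x, k):
    """k-fold composition of the hat function: a depth-k ReLU network with only
    two ReLU units per layer, whose graph has 2**k linear pieces on [0, 1]."""
    y = x
    for _ in range(k):
        y = hat(y)
    return y

x = np.linspace(0.0, 1.0, 1001)
for k in (1, 3, 5):
    print(f"depth {k}: {2 ** k} linear pieces, max value {deep_sawtooth(x, k).max():.3f}")
```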

As a model class for classifier functions we consider the class of (possibly discontinuous) piecewise smooth functions for which the different "smooth regions" are separated by smooth hypersurfaces.
Given such a function and a desired approximation accuracy, we construct a neural network which achieves that accuracy, where the error is measured in the L2 norm. We give precise bounds on the required size (in terms of the number of weights) and depth of the network, depending on the approximation accuracy, on the smoothness parameters of the given function, and on the dimension of its domain of definition. Finally, we show that this network size is optimal, and that networks of smaller depth would need significantly more weights than the deep networks we construct in order to achieve the desired approximation accuracy.
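As a toy version of the approximation problem described above (an illustrative sketch, not the network construction from the talk), the following snippet approximates the simplest discontinuous classifier, the step function 1_{x ≥ 0}, by a two-ReLU ramp of width eps and estimates the L2 error on [-1, 1]; the error behaves like sqrt(eps/3), so in this one-dimensional case a fixed number of weights already reaches any prescribed L2 accuracy, at the price of weights of size 1/eps. The helper ramp_indicator is invented for the example.

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def ramp_indicator(x, eps):
    """Two-ReLU approximation of the step function 1_{x >= 0}:
    0 for x <= 0, a linear ramp on [0, eps], and 1 for x >= eps."""
    return relu(x / eps) - relu(x / eps - 1.0)

# The mismatch is supported on the ramp interval [0, eps], so the squared
# L2 error is eps / 3 and the error decays like sqrt(eps), while the number
# of weights stays fixed (only their magnitude grows like 1 / eps).
x = np.linspace(-1.0, 1.0, 200001)
target = (x >= 0.0).astype(float)
for eps in (0.1, 0.01, 0.001):
    approx = ramp_indicator(x, eps)
    l2_err = np.sqrt(np.mean((approx - target) ** 2) * 2.0)  # interval length 2
    print(f"eps = {eps:<6}  L2 error = {l2_err:.4f}  sqrt(eps/3) = {np.sqrt(eps / 3):.4f}")
```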

23.01.2019 at 17:15 in room 69/125:

Prof. Dr. Ulrich Bauer (Technische Universität München)

The Morse Theory of Čech and Delaunay Complexes

Given a finite set of points in ℝⁿ and a radius parameter, we consider the Čech, Delaunay–Čech, Delaunay (alpha shape), and wrap complexes in the light of generalized discrete Morse theory. We prove that the four complexes are simple-homotopy equivalent by a sequence of simplicial collapses, and the same is true for their weighted versions. Our results have applications in topological data analysis and in the reconstruction of shapes from sampled data.
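As a small, purely illustrative complement to the abstract (a brute-force computation, not the discrete Morse theory argument of the talk): a simplex on a subset of the points belongs to the Čech complex at radius r exactly when the balls of radius r around those points have a common point, i.e. when their minimum enclosing ball has radius at most r. For a handful of planar points this can be checked directly; the helper names min_enclosing_radius and cech_complex are made up for this sketch.

```python
import itertools
import numpy as np

def min_enclosing_radius(pts):
    """Radius of the smallest disc containing one, two, or three planar points."""
    pts = [np.asarray(p, dtype=float) for p in pts]
    if len(pts) == 1:
        return 0.0
    if len(pts) == 2:
        return np.linalg.norm(pts[0] - pts[1]) / 2.0
    a, b, c = pts
    # Obtuse (or right) triangle: the smallest disc is spanned by the longest edge.
    for p, q, r in ((a, b, c), (b, c, a), (c, a, b)):
        if np.dot(p - r, q - r) <= 0.0:
            return np.linalg.norm(p - q) / 2.0
    # Acute triangle: the smallest disc is the circumscribed circle, R = abc / (4 * area).
    twice_area = abs((b - a)[0] * (c - a)[1] - (b - a)[1] * (c - a)[0])
    return (np.linalg.norm(a - b) * np.linalg.norm(b - c) * np.linalg.norm(c - a)) / (2.0 * twice_area)

def cech_complex(points, radius, max_dim=2):
    """All simplices (index tuples) whose vertices fit in a ball of the given radius,
    i.e. whose radius-`radius` balls around the vertices have a common point."""
    simplices = []
    for k in range(1, max_dim + 2):
        for idx in itertools.combinations(range(len(points)), k):
            if min_enclosing_radius([points[i] for i in idx]) <= radius:
                simplices.append(idx)
    return simplices

points = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.8), (2.0, 0.1)]
for r in (0.4, 0.6, 0.8):
    print(f"r = {r}: {cech_complex(points, r)}")
```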

30.01.2019 at 17:15 in room 69/125:

Dr. Nick Vannieuwenhoven (Katholieke Universiteit Leuven)

Tensor Decompositions and their Sensitivity

The tensor rank decomposition or CPD expresses a tensor as a minimum-length linear combination of elementary rank-1 tensors. It has found application in fields as diverse as psychometrics, chemometrics, signal processing and machine learning, mainly for data analysis purposes. In these applications, the theoretical model is oftentimes a low-rank CPD and the elementary rank-1 tensors are usually the quantity of interest. However, in practice, this mathematical model is always corrupted by measurement errors. In this talk, we will investigate the numerical sensitivity of the CPD using techniques from algebraic and differential geometry.
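As a hands-on illustration of the sensitivity question (a minimal numpy sketch, not the algebraic-geometric analysis of the talk), the following code writes a third-order tensor as T[i,j,k] = Σ_r A[i,r] B[j,r] C[k,r], fits a rank-2 CPD to a slightly perturbed copy by plain alternating least squares, and reports how far the recovered rank-1 terms end up from the true ones relative to the size of the perturbation. The helpers cpd_als and rank1_terms are invented for this example, and ALS convergence to the true decomposition is not guaranteed in general.

```python
import numpy as np

def cpd_als(T, rank, iters=500, seed=0):
    """Fit a rank-`rank` CPD of the 3-way tensor T by plain alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    # Mode unfoldings, consistent with the Khatri-Rao product kr below.
    T1 = T.reshape(I, J * K)
    T2 = T.transpose(1, 0, 2).reshape(J, I * K)
    T3 = T.transpose(2, 0, 1).reshape(K, I * J)
    kr = lambda X, Y: np.einsum('ir,jr->ijr', X, Y).reshape(-1, X.shape[1])
    for _ in range(iters):
        A = T1 @ np.linalg.pinv(kr(B, C).T)   # since T1 = A @ kr(B, C).T
        B = T2 @ np.linalg.pinv(kr(A, C).T)
        C = T3 @ np.linalg.pinv(kr(A, B).T)
    return A, B, C

def rank1_terms(A, B, C):
    """The individual rank-1 tensors a_r ⊗ b_r ⊗ c_r of a CPD (free of the
    scaling and permutation ambiguity of the factor matrices themselves)."""
    return [np.einsum('i,j,k->ijk', A[:, r], B[:, r], C[:, r]) for r in range(A.shape[1])]

# Ground-truth rank-2 tensor from known factors, plus a tiny perturbation.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (6, 5, 4))
T = sum(rank1_terms(A0, B0, C0))
E = rng.standard_normal(T.shape)
E *= 1e-4 / np.linalg.norm(E)

A, B, C = cpd_als(T + E, rank=2)
true_terms = rank1_terms(A0, B0, C0)
for t, term in enumerate(rank1_terms(A, B, C)):
    change = min(np.linalg.norm(term - s) for s in true_terms)
    print(f"rank-1 term {t}: moved by {change:.2e} for a perturbation of norm 1e-4")
```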

06.02.2019 at 16:15 in room 69/125:

Prof. Dr. Hedwig Gasteiger and Prof. Dr. Martina Juhnke-Kubitzke

Inaugural lectures