Multidimensional moment problem and Stieltjes transform
We discuss a multidimensional moment problem in terms of the Stieltjes transform. To describe the set of solutions of this problem, we apply the Schur step-by-step algorithm, which leads to an expansion of these solutions in continued fractions.
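For orientation, a sketch of the one-dimensional objects behind this abstract (the notation here is illustrative and not taken from the talk): for a positive measure $\mu$ on $\mathbb{R}$, the Stieltjes transform is

\[
s(z) = \int_{\mathbb{R}} \frac{d\mu(t)}{t - z}, \qquad z \in \mathbb{C} \setminus \mathbb{R},
\]

and step-by-step algorithms of Schur type typically expand $s$ into a continued fraction of Jacobi type,

\[
s(z) = \cfrac{-1}{\,z - a_0 - \cfrac{b_0^2}{\,z - a_1 - \cfrac{b_1^2}{\,z - a_2 - \cdots}}}\,,
\]

with real coefficients $a_k$ and $b_k > 0$ determined by the moments of $\mu$.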
04.07.2023 at 14:00 in 69/E23
Kumar Harsha (Universität Osnabrück)
Data Compression using Lattice Rules for Machine Learning
The mean squared error is one of the standard loss functions in supervised machine learning. However, calculating this loss for enormous data sets can be computationally demanding. Modifying an approach of J. Dick and M. Feischl [Journal of Complexity 67 (Dec 2021)], we present algorithms to reduce extensive data sets to a smaller size using rank-1 lattice rules. With this compression strategy in the pre-processing step, every lattice point gets a pair of weights depending on the original data and responses, representing its relative importance. As a result, the compressed data makes iterative loss calculations in optimization steps much faster. The proposed compression strategy is highly beneficial for regression problems without an analytical solution. Our derivation of the formulas for the weights assumes that the input functions have a convergent Fourier series and that the relevant Fourier coefficients are supported on a hyperbolic cross. Accordingly, we have analyzed our algorithms' error for functions whose Fourier coefficients decay sufficiently fast that the functions lie in Wiener algebras or Korobov spaces.
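To make the rank-1 lattice rules mentioned above concrete, here is a minimal sketch of how such node sets are generated; the size `n` and generating vector `z` are arbitrary illustrative choices, not parameters from the talk:

```python
import numpy as np

def rank1_lattice(n, z):
    """Return the n points x_i = frac(i * z / n) of a rank-1 lattice rule.

    n : number of lattice points
    z : generating vector (one integer per dimension)
    """
    i = np.arange(n)[:, None]                 # column of indices 0..n-1
    return (i * np.asarray(z)[None, :] / n) % 1.0  # fractional parts in [0, 1)

# Example: 8 points in 2 dimensions with generating vector (1, 3).
pts = rank1_lattice(8, [1, 3])
```

All coordinates land in the unit cube $[0,1)^d$; the quality of the rule for a given function class depends on the choice of the generating vector, which in practice is found by component-by-component search.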
15.08.2023 at 14:15 in 69/125
Jan Heiner Wilken (Universität Osnabrück)
"L_2-Approximation: Standard Information vs. Linear Information" (Master's colloquium)