
Lecture Notes

On this page, we collect lecture notes that we have created in the context of ML2R and other teaching activities.

ML2R Coding Nuggets

  1. Solving Linear Programming Problems by Pascal Welke and Christian Bauckhage.

    This note discusses how to solve linear programming problems with SciPy. As a practical use case, we consider the task of computing the Chebyshev center of a bounded convex polytope.
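
    A minimal sketch of the idea, assuming SciPy's linprog and a made-up polytope (the unit square, not the note's example): the Chebyshev center of {x : Ax <= b} is found by maximizing the radius r subject to a_i^T x + r ||a_i|| <= b_i.

      import numpy as np
      from scipy.optimize import linprog

      A = np.array([[ 1.,  0.],    # x1 <= 1
                    [-1.,  0.],    # -x1 <= 0
                    [ 0.,  1.],    # x2 <= 1
                    [ 0., -1.]])   # -x2 <= 0
      b = np.array([1., 0., 1., 0.])

      # variables z = (x1, x2, r); maximizing r means minimizing -r
      norms = np.linalg.norm(A, axis=1, keepdims=True)
      res = linprog(c=[0., 0., -1.],
                    A_ub=np.hstack([A, norms]), b_ub=b,
                    bounds=[(None, None), (None, None), (0., None)])
      center, radius = res.x[:2], res.x[2]
      print(center, radius)        # [0.5 0.5] 0.5 for the unit square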

  2. Linear Programming for Robust Regression by Pascal Welke and Christian Bauckhage.

Having previously discussed how SciPy allows us to solve linear programs, we can study further applications of linear programming. Here, we consider least absolute deviation regression and solve a simple parameter estimation problem deliberately chosen to expose potential pitfalls in using SciPy's optimization functions.
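
    A hedged sketch with synthetic data (not the note's example): introducing auxiliary variables t_i that bound the absolute residuals turns the least absolute deviation problem min_w sum_i |y_i - x_i^T w| into a linear program.

      import numpy as np
      from scipy.optimize import linprog

      x = np.array([0., 1., 2., 3., 4.])
      y = np.array([0.1, 1.2, 1.9, 3.2, 8.0])   # last point is an outlier
      X = np.column_stack([np.ones_like(x), x])
      n, d = X.shape

      # variables z = (w, t); minimize sum(t) subject to -t <= X w - y <= t
      c = np.concatenate([np.zeros(d), np.ones(n)])
      A_ub = np.block([[ X, -np.eye(n)],
                       [-X, -np.eye(n)]])
      b_ub = np.concatenate([y, -y])
      bounds = [(None, None)] * d + [(0., None)] * n
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      print(res.x[:d])             # (intercept, slope), robust to the outlier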

  3. Sorting as Linear Programming by Christian Bauckhage and Pascal Welke.

    Linear programming is a surprisingly versatile tool. That is, many problems we would not usually think of in terms of a linear programming problem can actually be expressed as such. In this note, we show that sorting is such a problem and discuss how to solve linear programs for sorting using SciPy.
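
    A rough sketch of one such formulation (the details below are our illustration, not necessarily the note's): sorting n numbers can be written as a linear program over doubly stochastic matrices, whose optimum is attained at a permutation matrix by Birkhoff's theorem.

      import numpy as np
      from scipy.optimize import linprog

      x = np.array([3., 1., 4., 1.5, 9.])
      n = len(x)
      ranks = np.arange(n)                  # target positions 0, ..., n-1

      # maximize sum_ij P_ij * ranks_i * x_j, i.e. minimize its negative
      c = -np.outer(ranks, x).ravel()

      # doubly stochastic constraints: every row and column of P sums to 1
      A_eq = np.zeros((2 * n, n * n))
      for i in range(n):
          A_eq[i, i * n:(i + 1) * n] = 1.   # row sums
          A_eq[n + i, i::n] = 1.            # column sums
      b_eq = np.ones(2 * n)

      res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0., 1.)] * (n * n))
      P = res.x.reshape(n, n).round()       # a permutation matrix
      print(P @ x)                          # x sorted in ascending order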

  4. Sorting as Quadratic Unconstrained Binary Optimization Problem by Christian Bauckhage and Pascal Welke.

Having previously considered sorting as a linear programming problem, we now cast it as a quadratic unconstrained binary optimization (QUBO) problem. Deriving this formulation is a bit cumbersome, but it allows for implementing neural networks or even quantum computing algorithms that sort. Here, however, we consider a simple greedy QUBO solver and implement it using NumPy.
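
    A minimal sketch of such a greedy solver (generic, independent of the sorting-specific QUBO matrix derived in the note): starting from a random binary vector, repeatedly flip the single bit that lowers the energy z^T Q z the most.

      import numpy as np

      def greedy_qubo(Q, n_restarts=10, seed=0):
          """Greedily minimize E(z) = z^T Q z over z in {0,1}^n (Q symmetric)."""
          rng = np.random.default_rng(seed)
          n = Q.shape[0]
          best_z, best_e = None, np.inf
          for _ in range(n_restarts):
              z = rng.integers(0, 2, n).astype(float)
              while True:
                  # energy change caused by flipping each bit
                  delta = (1. - 2. * z) * (np.diag(Q) + 2. * (Q @ z - np.diag(Q) * z))
                  i = np.argmin(delta)
                  if delta[i] >= 0.:        # no flip improves: local minimum
                      break
                  z[i] = 1. - z[i]
              e = z @ Q @ z
              if e < best_e:
                  best_z, best_e = z, e
          return best_z, best_e

      Q = np.array([[-1.,  2.],
                    [ 2., -1.]])
      print(greedy_qubo(Q))                 # e.g. (array([1., 0.]), -1.0)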

  5. Hopfield Nets for Sorting by Christian Bauckhage and Nico Piatkowski.

We show how to use Hopfield networks for sorting. We first derive a corresponding energy function, then present an efficient algorithm for its minimization, and finally implement our ideas in NumPy.
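
    A generic sketch of the minimization dynamics (the sorting-specific weights W and thresholds theta are derived in the note and left as inputs here): asynchronous updates never increase the energy E(s) = -1/2 s^T W s + theta^T s.

      import numpy as np

      def hopfield_minimize(W, theta, s, max_sweeps=100, seed=0):
          """Minimize E(s) = -0.5 s^T W s + theta^T s over s in {-1, +1}^n."""
          rng = np.random.default_rng(seed)
          for _ in range(max_sweeps):
              changed = False
              for i in rng.permutation(len(s)):    # asynchronous updates
                  s_new = 1. if W[i] @ s - theta[i] >= 0. else -1.
                  if s_new != s[i]:
                      s[i], changed = s_new, True
              if not changed:                      # fixed point reached
                  return s
          return s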

  6. Hopfield Nets for Bipartition Clustering by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

    We show that Hopfield networks can cluster numerical data into two salient clusters. Our derivation of a corresponding energy function is based on properties of the specific problem of 2-means clustering. Our corresponding NumPy code is short and simple.

  7. Hopfield Nets, Bipartition Clustering, and the Kernel Trick by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We revisit Hopfield nets for bipartition clustering and show how to invoke the kernel trick to increase robustness and versatility. Our corresponding NumPy code is short and simple.

  8. Unambiguous Bipartition Clustering with Hopfield Nets by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

    We revisit Hopfield nets for bipartition clustering and tweak the underlying energy function such that it has a unique global minimum. In other words, we show how to remove ambiguity from the bipartition clustering problem. Our corresponding NumPy code is short and simple.

  9. Hopfield Nets for Hard Vector Quantization by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

    We demonstrate that Hopfield networks can be used for hard vector quantization. To this end, we first formulate vector quantization as the problem of minimizing the mean discrepancy between kernel density estimates of two data distributions and then express it as a quadratic unconstrained binary optimization problem that can be solved by a Hopfield net. Our corresponding NumPy code is simple and consistently produces good results.

  10. Hopfield Nets for Max Sum Diversification by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We demonstrate that Hopfield networks can tackle the max-sum diversification problem. To this end, we express max-sum diversification as a quadratic unconstrained binary optimization problem which can be cast as a Hopfield energy minimization problem. Since max-sum diversification is an NP-hard subset selection problem, we cannot guarantee that Hopfield nets will discover an optimal solution. Nevertheless, our simple NumPy implementation consistently produces good results.

  11. Hopfield Nets for Sudoku by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

    This note demonstrates that Hopfield nets can solve Sudoku puzzles. We discuss how to represent Sudokus in terms of binary vectors and how to express their rules and hints in terms of matrix-vector equations. This allows us to set up energy functions whose global minima encode the solution to a given puzzle. However, as these energy functions typically have numerous local minima, Hopfield nets with random selection or steepest descent updates rarely find the correct solution. We therefore consider stochastic Hopfield nets or Boltzmann machines whose neurons update according to a stochastic process called simulated annealing. Our corresponding NumPy code is comparatively simple and efficient and consistently yields good results.
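
    A hedged sketch of such stochastic updates (generic; the Sudoku-specific energy function is constructed in the note): neurons fire according to a temperature-dependent logistic rule, and the temperature is gradually lowered.

      import numpy as np

      def anneal(W, theta, s, T0=10., cooling=0.99, sweeps=500, seed=0):
          """Stochastically minimize E(s) = -0.5 s^T W s + theta^T s, s in {-1, +1}^n."""
          rng = np.random.default_rng(seed)
          T = T0
          for _ in range(sweeps):
              for i in rng.permutation(len(s)):
                  h = W[i] @ s - theta[i]                 # local field
                  p = 1. / (1. + np.exp(-2. * h / T))    # P(s_i = +1)
                  s[i] = 1. if rng.random() < p else -1.
              T *= cooling                               # cool down
          return s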

  12. Numerically Solving the Schrödinger Equation (Part 1) by Christian Bauckhage.

Most quantum mechanical systems cannot be solved analytically and therefore require numerical solution strategies. In this note, we consider a simple such strategy and discretize the Schrödinger equation that governs the behavior of a one-dimensional quantum harmonic oscillator. This leads to an eigenvalue/eigenvector problem over finite matrices and vectors, which we then implement and solve using standard NumPy functions.
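
    A compact sketch of this scheme (grid size and domain are our arbitrary choices): discretize H = -1/2 d^2/dx^2 + x^2/2 with a three-point stencil and diagonalize the resulting matrix.

      import numpy as np

      n, L = 1000, 10.
      x, dx = np.linspace(-L, L, n, retstep=True)

      # three-point stencil for the second derivative
      D2 = (np.diag(np.ones(n - 1), -1)
            - 2. * np.eye(n)
            + np.diag(np.ones(n - 1), 1)) / dx**2

      H = -0.5 * D2 + np.diag(0.5 * x**2)
      E, psi = np.linalg.eigh(H)            # eigenvalues and eigenfunctions
      print(E[:4])                          # approx. 0.5, 1.5, 2.5, 3.5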

  13. Numerically Solving the Schrödinger Equation (Part 2) by Christian Bauckhage.

We revisit the problem of numerically solving the Schrödinger equation for a one-dimensional quantum harmonic oscillator. We reconsider our previous finite difference scheme and discuss how higher-order finite differences can lead to more accurate solutions. In particular, we consider a five-point stencil to approximate second-order derivatives and implement the approach using SciPy functions for sparse matrices.
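
    A sketch of this variant (again with arbitrary grid choices): the fourth-order accurate stencil f'' ~ (-f[i-2] + 16 f[i-1] - 30 f[i] + 16 f[i+1] - f[i+2]) / (12 dx^2) yields a sparse pentadiagonal Hamiltonian.

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import eigsh

      n, L = 1000, 10.
      x, dx = np.linspace(-L, L, n, retstep=True)

      D2 = diags([-1., 16., -30., 16., -1.],
                 offsets=[-2, -1, 0, 1, 2], shape=(n, n)) / (12. * dx**2)
      H = -0.5 * D2 + diags(0.5 * x**2)

      E, psi = eigsh(H, k=4, sigma=0.)      # four eigenvalues nearest zero
      print(np.sort(E))                     # approx. 0.5, 1.5, 2.5, 3.5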

14. Solving the Single Unit Oja Flow by Christian Bauckhage, Sebastian Müller, and Fabrice Beaumont.

Oja's rule for neural principal component learning has a continuous analog called the Oja flow. This is a gradient flow on the unit sphere whose equilibrium points indicate the principal eigenspace of the training data. We briefly discuss characteristics of this flow and show how to solve its differential equation using SciPy.
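
    A sketch of the numerical solution (the data and covariance matrix below are made up): the flow dw/dt = Cw - (w^T C w) w is integrated with scipy.integrate.solve_ivp and approaches the leading eigenvector of C.

      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(0)
      X = rng.standard_normal((500, 3)) @ np.diag([3., 1., 0.3])
      C = X.T @ X / len(X)                  # sample covariance (zero-mean data)

      def oja(t, w):
          return C @ w - (w @ C @ w) * w    # gradient flow on the unit sphere

      w0 = rng.standard_normal(3)
      w0 /= np.linalg.norm(w0)              # start on the unit sphere
      sol = solve_ivp(oja, (0., 50.), w0, rtol=1e-8)

      print(sol.y[:, -1])                   # approx. +/- leading eigenvector of C
      print(np.linalg.eigh(C)[1][:, -1])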

  15. Solving Least Squares Gradient Flows by Christian Bauckhage and Pascal Welke.

    We approach least squares optimization from the point of view of gradient flows. As a practical example, we consider a simple linear regression problem, set up the corresponding differential equation, and show how to solve it using SciPy.
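
    A brief sketch with synthetic data: for the loss 0.5 ||Xw - y||^2, the gradient flow is dw/dt = -X^T (Xw - y), and its equilibrium is the least squares solution.

      import numpy as np
      from scipy.integrate import solve_ivp

      rng = np.random.default_rng(1)
      x = np.linspace(0., 1., 50)
      y = 2. * x + 0.5 + 0.1 * rng.standard_normal(50)
      X = np.column_stack([np.ones_like(x), x])

      def flow(t, w):
          return -X.T @ (X @ w - y)         # negative gradient of the loss

      sol = solve_ivp(flow, (0., 100.), np.zeros(2), rtol=1e-8)
      print(sol.y[:, -1])                   # approx. the least squares solution
      print(np.linalg.lstsq(X, y, rcond=None)[0])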

  16. Reproducible Machine Learning Experiments by Lukas Pfahler, Alina Timmermann, and Katharina Morik.

The scientific areas of artificial intelligence and machine learning are rapidly evolving, and their discoveries drive scientific progress in areas ranging from physics and chemistry to the life sciences and humanities. But machine learning is facing a reproducibility crisis that clashes with the core principles of the scientific method: with the growing complexity of methods, it becomes increasingly difficult to independently reproduce and verify published results and to compare methods fairly. One possible remedy is maximal transparency with regard to the design and execution of experiments. For this purpose, this Coding Nugget summarizes best practices for handling machine learning experiments. In addition, the final hands-on section introduces meticulous-ml [17], a convenient and simple library for tracking experimental results.
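
    One such practice, sketched generically (this illustrates seeding and configuration logging; it is not the meticulous-ml API): fix every random seed in use and persist the exact configuration alongside the results.

      import json
      import random
      import numpy as np

      config = {"seed": 42, "lr": 0.01, "epochs": 10}   # hypothetical experiment

      random.seed(config["seed"])           # seed every source of randomness
      np.random.seed(config["seed"])

      # ... run the experiment ...

      with open("experiment_config.json", "w") as f:    # record what was run
          json.dump(config, f, indent=2)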

ML2R Theory Nuggets

  1. Centering Data- and Kernel Matrices by Christian Bauckhage and Pascal Welke.

    We discuss the notion of centered data matrices and show how to compute them using centering matrices. As centering matrices have many applications in data science and machine learning, we have a look at one such application and discuss how they allow for centering kernel matrices.
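
    A small sketch of both operations: the centering matrix C = I - (1/n) 11^T centers a data matrix X (rows are data points) as CX and a kernel matrix K as CKC.

      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.standard_normal((5, 3))             # n = 5 points in 3 dimensions
      n = len(X)
      C = np.eye(n) - np.ones((n, n)) / n         # centering matrix

      Xc = C @ X                                  # centered data matrix
      print(np.allclose(Xc.mean(axis=0), 0.))     # True: feature means are zero

      K = X @ X.T                                 # linear kernel matrix
      Kc = C @ K @ C                              # centered kernel matrix
      print(np.allclose(Kc, Xc @ Xc.T))           # True: consistent with Xc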

  2. Computational Complexity of Max-Sum Diversification by Pascal Welke, Till Hendrik Schulz, and Christian Bauckhage.

We show how max-sum diversification can be used to solve the k-clique problem, a well-known NP-complete problem. This reduction proves that max-sum diversification is NP-hard and provides a simple and practical method to find cliques of a given size using Hopfield networks.

  3. The Dual Problem of L2 Support Vector Machine Training by Christian Bauckhage and Rafet Sifa.

    We derive the dual problem of L2 support vector machine training. This involves setting up the Lagrangian of the primal problem and working with the Karush-Kuhn-Tucker conditions. As a payoff, we find that the dual poses a rather simple optimization problem that can be solved by the Frank-Wolfe algorithm.
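
    A generic Frank-Wolfe sketch for a quadratic over the probability simplex (the particular matrix M arising from the L2 SVM dual is derived in the note; here it is a placeholder input):

      import numpy as np

      def frank_wolfe_simplex(M, steps=1000):
          """Minimize f(mu) = mu^T M mu over {mu : mu >= 0, sum(mu) = 1}."""
          n = M.shape[0]
          mu = np.ones(n) / n                     # start at the simplex center
          for t in range(steps):
              grad = 2. * M @ mu                  # gradient of mu^T M mu
              s = np.zeros(n)
              s[np.argmin(grad)] = 1.             # linear minimizer: a simplex vertex
              mu += (2. / (t + 2.)) * (s - mu)    # standard step size 2/(t+2)
          return mu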
