
# Lecture Notes

On this page, we collect lecture notes that we have created in the context of ML2R and other teaching activities.

## ML2R Coding Nuggets

1. Solving Linear Programming Problems by Pascal Welke and Christian Bauckhage.

This note discusses how to solve linear programming problems with SciPy. As a practical use case, we consider the task of computing the Chebyshev center of a bounded convex polytope.
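The idea can be sketched in a few lines of SciPy: maximize the radius $r$ of a ball that satisfies $a_i^\top x + r\,\|a_i\| \le b_i$ for every halfspace of the polytope. The toy polytope below (the unit square) is assumed for illustration and is not taken from the note.

```python
import numpy as np
from scipy.optimize import linprog

# Chebyshev center of the polytope {x : A x <= b}: maximize the radius r of a
# ball centered at x that fits inside every halfspace, i.e.
#   a_i^T x + r * ||a_i|| <= b_i  for all i,  r >= 0.
# Example polytope (assumed for illustration): the unit square [0, 1]^2.
A = np.array([[ 1.0,  0.0],
              [-1.0,  0.0],
              [ 0.0,  1.0],
              [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])

norms = np.linalg.norm(A, axis=1)
# variables z = (x_1, x_2, r); linprog minimizes, so the objective is -r
c = np.array([0.0, 0.0, -1.0])
A_ub = np.hstack([A, norms[:, None]])
res = linprog(c, A_ub=A_ub, b_ub=b,
              bounds=[(None, None), (None, None), (0, None)])
center, radius = res.x[:2], res.x[2]
# for the unit square: center (0.5, 0.5), radius 0.5
```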

2. Linear Programming for Robust Regression by Pascal Welke and Christian Bauckhage.

Having previously discussed how SciPy allows us to solve linear programs, we can study further applications of linear programming. Here, we consider least absolute deviation regression and solve a simple parameter estimation problem deliberately chosen to expose potential pitfalls in using SciPy's optimization functions.
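Least absolute deviation regression becomes a linear program once each absolute residual is bounded by an auxiliary variable $t_i$. A minimal sketch on assumed noise-free toy data (not the note's deliberately tricky example):

```python
import numpy as np
from scipy.optimize import linprog

# Least absolute deviation regression: min_w sum_i |y_i - x_i^T w|,
# rewritten as an LP with auxiliary bounds t_i >= |y_i - x_i^T w|:
#   minimize sum_i t_i  s.t.  Xw - t <= y  and  -Xw - t <= -y
X = np.column_stack([np.ones(20), np.linspace(0.0, 1.0, 20)])  # intercept + slope
y = 2.0 + 3.0 * X[:, 1]                                        # noise-free toy data
n, d = X.shape

c = np.concatenate([np.zeros(d), np.ones(n)])   # objective: sum of the t_i
A_ub = np.block([[ X, -np.eye(n)],
                 [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * d + [(0, None)] * n
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
w = res.x[:d]   # recovers (2, 3) on this noise-free example
```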

3. Sorting as Linear Programming by Christian Bauckhage and Pascal Welke.

Linear programming is a surprisingly versatile tool. That is, many problems we would not usually think of in terms of a linear programming problem can actually be expressed as such. In this note, we show that sorting is such a problem and discuss how to solve linear programs for sorting using SciPy.
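One way to see this (a sketch under assumed notation, not the note's own derivation): maximize $\sum_{ij} i\,x_j P_{ij}$ over doubly stochastic matrices $P$. By Birkhoff's theorem the optimum is a permutation matrix, and by the rearrangement inequality that permutation sorts $x$ in ascending order.

```python
import numpy as np
from scipy.optimize import linprog

# Sorting as an LP: maximize sum_ij (i * x_j) * P_ij over doubly stochastic P.
# The optimal vertex is the permutation matrix that sorts x ascendingly.
x = np.array([3.0, 1.0, 2.0])
n = len(x)
C = np.outer(np.arange(n), x)            # C[i, j] = i * x_j

A_eq = np.zeros((2 * n, n * n))          # P flattened row-major
for k in range(n):
    A_eq[k, k * n:(k + 1) * n] = 1.0     # row k of P sums to 1
    A_eq[n + k, k::n] = 1.0              # column k of P sums to 1

res = linprog(-C.ravel(), A_eq=A_eq, b_eq=np.ones(2 * n), bounds=(0, 1))
P = res.x.reshape(n, n).round()
sorted_x = P @ x                          # x in ascending order
```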

4. Sorting as Quadratic Unconstrained Binary Optimization Problem by Christian Bauckhage and Pascal Welke.

Having previously considered sorting as a linear programming problem, we now cast it as a quadratic unconstrained binary optimization problem (QUBO). Deriving this formulation is a bit cumbersome but it allows for implementing neural networks or even quantum computing algorithms that sort. Here, however, we consider a simple greedy QUBO solver and implement it using NumPy.
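A greedy QUBO solver of the kind mentioned here can be sketched generically (this is a minimal illustration, not the note's sorting-specific code): repeatedly flip the single bit that most decreases the energy $E(z) = z^\top Q z$.

```python
import numpy as np

# Greedy QUBO solver: minimize E(z) = z^T Q z over z in {0, 1}^n by flipping,
# in each step, the bit whose flip lowers the energy the most.
def greedy_qubo(Q, z=None, max_iter=1000):
    n = Q.shape[0]
    z = np.zeros(n) if z is None else z.astype(float)
    for _ in range(max_iter):
        # flipping bit i changes z_i by d = 1 - 2 z_i and the energy by
        #   dE_i = d * ((Q + Q^T) z)_i + Q_ii
        s = (Q + Q.T) @ z
        dE = (1.0 - 2.0 * z) * s + np.diag(Q)
        i = np.argmin(dE)
        if dE[i] >= 0.0:          # no flip improves the energy: local minimum
            break
        z[i] = 1.0 - z[i]
    return z
```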

5. Numerically Solving the Schrödinger Equation (Part 1) by Christian Bauckhage.

Most quantum mechanical systems cannot be solved analytically and therefore require numerical solution strategies. In this note, we consider a simple such strategy and discretize the Schrödinger equation that governs the behavior of a one-dimensional quantum harmonic oscillator. This leads to an eigenvalue / eigenvector problem over finite matrices and vectors which we then implement and solve using standard NumPy functions.
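The discretization step can be sketched as follows (natural units and grid parameters are assumptions for illustration): the three-point finite difference turns the Hamiltonian into a tridiagonal matrix whose eigenvalues approximate the exact energies $k + \tfrac{1}{2}$.

```python
import numpy as np

# Discretized 1D quantum harmonic oscillator (hbar = m = omega = 1):
# H = -1/2 d^2/dx^2 + 1/2 x^2, with the second derivative replaced by the
# standard three-point finite difference on a uniform grid.
n, L = 500, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# tridiagonal finite-difference approximation of d^2/dx^2
D2 = (np.diag(np.full(n - 1, 1.0), -1)
      - 2.0 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), +1)) / h**2
H = -0.5 * D2 + np.diag(0.5 * x**2)

E, psi = np.linalg.eigh(H)
# the lowest eigenvalues approximate the exact energies 0.5, 1.5, 2.5, ...
```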

6. Numerically Solving the Schrödinger Equation (Part 2) by Christian Bauckhage.

We revisit the problem of numerically solving the Schrödinger equation for a one-dimensional quantum harmonic oscillator. We reconsider our previous finite difference scheme and discuss how higher order finite differences can lead to more accurate solutions. In particular, we will consider a five point stencil to approximate second order derivatives and implement the approach using SciPy functions for sparse matrices.
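The higher-order scheme can be sketched with SciPy's sparse matrices (grid parameters again assumed for illustration): the five-point stencil yields a banded matrix and noticeably more accurate eigenvalues than the three-point version.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

# Same oscillator, but with the fourth-order five-point stencil
#   f'' ~ (-f_{i-2} + 16 f_{i-1} - 30 f_i + 16 f_{i+1} - f_{i+2}) / (12 h^2),
# assembled as a sparse banded matrix.
n, L = 500, 10.0
x = np.linspace(-L, L, n)
h = x[1] - x[0]

D2 = diags([-1.0, 16.0, -30.0, 16.0, -1.0], [-2, -1, 0, 1, 2],
           shape=(n, n)) / (12.0 * h**2)
H = (-0.5 * D2 + diags(0.5 * x**2)).tocsc()

E, psi = eigsh(H, k=4, sigma=0)   # shift-invert: eigenvalues closest to 0
E = np.sort(E)
```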

7. Solving the Single Unit Oja Flow by Christian Bauckhage, Sebastian Müller and Fabrice Beaumont.

Oja's rule for neural principal component learning has a continuous analog called the Oja flow. This is a gradient flow on the unit sphere whose equilibrium points indicate the principal eigenspace of the training data. We briefly discuss characteristics of this flow and show how to solve its differential equation using SciPy.
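Solving the flow numerically is straightforward with `scipy.integrate.solve_ivp` (the covariance matrix below is an assumed toy example):

```python
import numpy as np
from scipy.integrate import solve_ivp

# The single unit Oja flow dw/dt = C w - (w^T C w) w evolves on the unit
# sphere; its stable equilibria are principal eigenvectors of C.
C = np.array([[3.0, 1.0],
              [1.0, 2.0]])        # toy covariance matrix

def oja(t, w):
    Cw = C @ w
    return Cw - (w @ Cw) * w

w0 = np.array([1.0, 0.0])         # must not be orthogonal to the target
sol = solve_ivp(oja, (0.0, 50.0), w0, rtol=1e-9, atol=1e-9)
w_inf = sol.y[:, -1]              # approximates the leading unit eigenvector
```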

8. Solving Least Squares Gradient Flows by Christian Bauckhage and Pascal Welke.

We approach least squares optimization from the point of view of gradient flows. As a practical example, we consider a simple linear regression problem, set up the corresponding differential equation, and show how to solve it using SciPy.
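A minimal sketch of this approach on assumed synthetic data: the gradient of $E(w) = \tfrac{1}{2}\|Xw - y\|^2$ defines the ODE, and integrating it long enough reproduces the ordinary least squares solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Gradient flow for least squares: E(w) = 1/2 ||X w - y||^2 gives the ODE
#   dw/dt = -grad E(w) = -X^T (X w - y),
# whose equilibrium is the ordinary least squares solution.
rng = np.random.default_rng(1)
X = np.column_stack([np.ones(30), rng.uniform(0.0, 1.0, 30)])
y = 1.0 + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(30)

def flow(t, w):
    return -X.T @ (X @ w - y)

sol = solve_ivp(flow, (0.0, 20.0), np.zeros(2), rtol=1e-10, atol=1e-10)
w_flow = sol.y[:, -1]
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)  # agrees with the flow's limit
```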

9. Reproducible Machine Learning Experiments by Lukas Pfahler, Alina Timmermann, and Katharina Morik.

The scientific areas of artificial intelligence and machine learning are rapidly evolving and their scientific discoveries are drivers of scientific progress in areas ranging from physics or chemistry to life sciences and humanities. But machine learning is facing a reproducibility crisis that is clashing with the core principles of the scientific method: With the growing complexity of methods, it is becoming increasingly difficult to independently reproduce and verify published results and fairly compare methods. One possible remedy is maximal transparency with regard to the design and execution of experiments. For this purpose, best practices for handling machine learning experiments are summarized in this Coding Nugget. In addition, a convenient and simple library for tracking of experimental results, meticulous-ml, is being introduced in the final hands-on section.

10. AdaBoost with Pre-Trained Hypotheses by Christian Bauckhage.

In preparation for things to come, we discuss the general ideas behind AdaBoost (for binary classifier training) and present efficient NumPy code for boosting pre-trained weak hypotheses.
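A compact NumPy sketch of this setting (an illustration under assumed notation, not the note's code): the pre-trained hypotheses are stored column-wise in a prediction matrix, and boosting reduces to reweighting samples and accumulating hypothesis weights.

```python
import numpy as np

# AdaBoost over a fixed pool of pre-trained weak hypotheses.
# P[i, j] is the {-1, +1} prediction of hypothesis j on sample i,
# y holds the {-1, +1} labels.
def adaboost(P, y, T=10):
    n, m = P.shape
    w = np.full(n, 1.0 / n)               # sample weights
    alphas = np.zeros(m)                  # accumulated hypothesis weights
    for _ in range(T):
        errs = w @ (P != y[:, None])      # weighted error of every hypothesis
        j = np.argmin(errs)
        eps = np.clip(errs[j], 1e-10, None)
        if eps >= 0.5:                    # nothing better than chance remains
            break
        a = 0.5 * np.log((1.0 - eps) / eps)
        alphas[j] += a
        w *= np.exp(-a * y * P[:, j])     # up-weight misclassified samples
        w /= w.sum()
    return alphas                         # ensemble prediction: sign(P @ alphas)
```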

11. Intersection String Kernels for Language Processing by Christian Bauckhage.

This is the first in a miniseries of notes on kernel methods for language processing. We discuss the idea of measuring n-gram similarities of words by computing intersection string kernels and demonstrate that the Python standard library allows for compact implementations of this idea.
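The core idea fits in a few lines of standard-library Python (a sketch, assuming character bigrams): count the n-grams of each word and sum the minimum counts of the shared ones, which is exactly what `Counter`'s `&` operator computes.

```python
from collections import Counter

# Intersection string kernel over character n-grams: k(u, v) sums, over all
# n-grams g, the minimum of g's counts in u and in v.
def ngrams(word, n=2):
    return Counter(word[i:i + n] for i in range(len(word) - n + 1))

def intersection_kernel(u, v, n=2):
    return sum((ngrams(u, n) & ngrams(v, n)).values())
```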

12. Kernel PCA for Word Embeddings by Christian Bauckhage.

We address the general problem of computing word embeddings and discuss a simple yet powerful solution involving intersection string kernels and kernel principal component analysis. We discuss the theory behind kernel PCA for word embeddings and present corresponding Python / NumPy code. Overall, we demonstrate that the whole framework is very easy to implement.
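The kernel PCA step can indeed be sketched in very few lines (a generic version; for word embeddings, K would be the intersection string kernel matrix over the vocabulary):

```python
import numpy as np

# Kernel PCA: double-center the kernel matrix, eigendecompose it, and scale
# the leading eigenvectors by the square roots of their eigenvalues to obtain
# embedding coordinates.
def kernel_pca(K, d=2):
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                              # centering in feature space
    lam, V = np.linalg.eigh(Kc)                 # eigenvalues in ascending order
    lam, V = lam[::-1][:d], V[:, ::-1][:, :d]   # keep the d largest
    return V * np.sqrt(np.maximum(lam, 0.0))    # n x d embedding
```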

13. SVM Training Using 16 Lines of Plain Vanilla NumPy Code by Christian Bauckhage.

We consider L2 support vector machines for binary classification. These are as robust as other kinds of SVMs but can be trained almost effortlessly. Indeed, having previously derived the corresponding dual training problem, we now show how to solve it using the Frank-Wolfe algorithm. In short, we show that it requires only a few lines of plain vanilla NumPy code to train an SVM.
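The Frank-Wolfe iteration itself is remarkably short. The sketch below minimizes a quadratic over the probability simplex, which is the form the dual takes in the note; the matrix Q (built there from the kernel matrix, the labels, and the regularization constant) is assumed given.

```python
import numpy as np

# Frank-Wolfe over the probability simplex: min_mu mu^T Q mu subject to
# mu >= 0 and sum(mu) = 1. Each step moves toward the simplex vertex with the
# smallest partial derivative, so no projection is ever needed.
def frank_wolfe_simplex(Q, steps=2000):
    n = Q.shape[0]
    mu = np.full(n, 1.0 / n)
    for t in range(steps):
        grad = 2.0 * Q @ mu
        i = np.argmin(grad)           # best vertex e_i of the simplex
        gamma = 2.0 / (t + 2.0)       # standard step-size schedule
        mu += gamma * (np.eye(n)[i] - mu)
    return mu
```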

14. Greedy Set Cover with Native Python Data Types by Christian Bauckhage.

In preparation for things to come, we discuss a plain vanilla Python implementation of “the” greedy approximation algorithm for the set cover problem.
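The greedy algorithm with native sets can be sketched as follows (a minimal version for illustration): in each round, pick the set that covers the most still-uncovered elements.

```python
# Greedy set cover with plain Python sets: repeatedly pick the candidate set
# covering the most still-uncovered elements of the universe.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    cover = []
    while uncovered:
        best = max(sets, key=lambda s: len(s & uncovered))
        if not best & uncovered:
            raise ValueError("universe cannot be covered")
        cover.append(best)
        uncovered -= best
    return cover
```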

15. Greedy Set Cover with Binary NumPy Arrays by Christian Bauckhage.

We revisit the minimum set cover problem and formulate it as an integer linear program over binary indicator vectors. Next, we simply adapt our earlier code for greedy set covering to indicator vector representations.
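The indicator-vector variant can be sketched like this (a minimal adaptation for illustration): column j of a binary matrix indicates which universe elements set j contains, and the greedy gain of each set is a column sum over the uncovered rows.

```python
import numpy as np

# Greedy set cover over binary indicator vectors: A[i, j] = 1 iff universe
# element i belongs to candidate set j.
def greedy_set_cover(A):
    m, n = A.shape
    uncovered = np.ones(m, dtype=bool)
    chosen = np.zeros(n, dtype=bool)
    while uncovered.any():
        gains = A[uncovered].sum(axis=0)       # newly covered elements per set
        j = np.argmax(gains)
        if gains[j] == 0:
            raise ValueError("universe cannot be covered")
        chosen[j] = True
        uncovered &= ~A[:, j].astype(bool)
    return chosen
```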

### Hopfield Networks

1. Hopfield Nets for Sorting by Christian Bauckhage and Nico Piatkowski.

We show how to use Hopfield networks for sorting. We first derive a corresponding energy function, then present an efficient algorithm for its minimization, and finally implement our ideas in NumPy.
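The notes in this series all build on the same asynchronous Hopfield dynamics; a generic sketch (the task-specific weight matrix W and bias b that each note derives from its energy function are assumed inputs here):

```python
import numpy as np

# Asynchronous Hopfield dynamics: decrease E(s) = -1/2 s^T W s - b^T s over
# s in {-1, +1}^n by repeatedly setting each neuron to the sign of its local
# field, in random order, until no neuron changes anymore.
def hopfield(W, b, s, sweeps=100, rng=None):
    rng = np.random.default_rng(rng)
    n = len(s)
    for _ in range(sweeps):
        changed = False
        for i in rng.permutation(n):
            s_new = 1.0 if W[i] @ s + b[i] >= 0.0 else -1.0
            if s_new != s[i]:
                s[i], changed = s_new, True
        if not changed:                # fixed point reached
            break
    return s
```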

2. Hopfield Nets for Bipartition Clustering by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We show that Hopfield networks can cluster numerical data into two salient clusters. Our derivation of a corresponding energy function is based on properties of the specific problem of 2-means clustering. Our corresponding NumPy code is short and simple.

3. Hopfield Nets, Bipartition Clustering, and the Kernel Trick by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We revisit Hopfield nets for bipartition clustering and show how to invoke the kernel trick to increase robustness and versatility. Our corresponding NumPy code is short and simple.

4. Unambiguous Bipartition Clustering with Hopfield Nets by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We revisit Hopfield nets for bipartition clustering and tweak the underlying energy function such that it has a unique global minimum. In other words, we show how to remove ambiguity from the bipartition clustering problem. Our corresponding NumPy code is short and simple.

5. Hopfield Nets for Hard Vector Quantization by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We demonstrate that Hopfield networks can be used for hard vector quantization. To this end, we first formulate vector quantization as the problem of minimizing the mean discrepancy between kernel density estimates of two data distributions and then express it as a quadratic unconstrained binary optimization problem that can be solved by a Hopfield net. Our corresponding NumPy code is simple and consistently produces good results.

6. Hopfield Nets for Max Sum Diversification by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

We demonstrate that Hopfield networks can tackle the max-sum diversification problem. To this end, we express max-sum diversification as a quadratic unconstrained binary optimization problem which can be cast as a Hopfield energy minimization problem. Since max-sum diversification is an NP-hard subset selection problem, we cannot guarantee that Hopfield nets will discover an optimal solution. Nevertheless, our simple NumPy implementation consistently produces good results.

7. Hopfield Nets for Sudoku by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

This note demonstrates that Hopfield nets can solve Sudoku puzzles. We discuss how to represent Sudokus in terms of binary vectors and how to express their rules and hints in terms of matrix-vector equations. This allows us to set up energy functions whose global minima encode the solution to a given puzzle. However, as these energy functions typically have numerous local minima, Hopfield nets with random selection or steepest descent updates rarely find the correct solution. We therefore consider stochastic Hopfield nets or Boltzmann machines whose neurons update according to a stochastic process called simulated annealing. Our corresponding NumPy code is comparatively simple and efficient and consistently yields good results.
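The stochastic update rule mentioned here can be sketched generically (this is a plain simulated-annealing loop with an assumed geometric temperature schedule, not the note's Sudoku-specific code):

```python
import numpy as np

# Stochastic Hopfield / Boltzmann update with simulated annealing: neuron i
# switches on with probability sigmoid(2 * field / T), and the temperature T
# is lowered slowly so the net can escape local minima of the energy
# E(s) = -1/2 s^T W s - b^T s before freezing into a good state.
def anneal(W, b, s, T0=5.0, cooling=0.99, sweeps=500, rng=None):
    rng = np.random.default_rng(rng)
    n, T = len(s), T0
    for _ in range(sweeps):
        for i in rng.permutation(n):
            field = W[i] @ s + b[i]
            p = 1.0 / (1.0 + np.exp(-2.0 * field / T))  # P(s_i = +1)
            s[i] = 1.0 if rng.random() < p else -1.0
        T *= cooling
    return s
```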

8. Hopfield Nets for Set Cover by Christian Bauckhage.

We once again revisit the minimum set cover problem and show that the underlying integer linear program can be rewritten as a quadratic unconstrained binary optimization problem. This can then be cast as an energy minimization problem which we solve by running Hopfield nets. Using multiple restarts, our simple NumPy implementation consistently produces good results.

9. Hopfield Nets for Subset Sum by Christian Bauckhage, Fabrice Beaumont, and Sebastian Müller.

This note discusses a QUBO formulation of the subset sum problem. Cast as a QUBO, this combinatorial problem can be tackled using Hopfield nets. However, as subset sum is NP-complete, we cannot guarantee our Hopfield nets to always discover an optimal solution. Nevertheless, using multiple restarts, our simple NumPy implementation quickly and consistently finds valid solutions.

## ML2R Theory Nuggets

1. Centering Data- and Kernel Matrices by Christian Bauckhage and Pascal Welke.

We discuss the notion of centered data matrices and show how to compute them using centering matrices. As centering matrices have many applications in data science and machine learning, we have a look at one such application and discuss how they allow for centering kernel matrices.
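Both operations fit in a couple of NumPy lines (a sketch with an assumed linear kernel for illustration): the centering matrix $H = I - \tfrac{1}{n}\mathbf{1}\mathbf{1}^\top$ subtracts column means when applied to a data matrix, and double multiplication $H K H$ centers a kernel matrix in feature space.

```python
import numpy as np

# Centering with the centering matrix H = I - (1/n) 1 1^T:
#   H X   subtracts the column means of the data matrix X,
#   H K H centers the kernel matrix K in feature space.
n = 5
H = np.eye(n) - np.ones((n, n)) / n

rng = np.random.default_rng(0)
X = rng.standard_normal((n, 3))
Xc = H @ X                        # same as X - X.mean(axis=0)

K = X @ X.T                       # linear kernel for illustration
Kc = H @ K @ H                    # equals Xc @ Xc.T
```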

2. Computational Complexity of Max-Sum Diversification by Pascal Welke, Till Hendrik Schulz, and Christian Bauckhage.

We show how max-sum diversification can be used to solve the $k$-clique problem, a well-known NP-complete problem. This reduction proves that max-sum diversification is NP-hard and provides a simple and practical method to find cliques of a given size using Hopfield networks.

3. The Dual Problem of L2 Support Vector Machine Training by Christian Bauckhage and Rafet Sifa.

We derive the dual problem of L2 support vector machine training. This involves setting up the Lagrangian of the primal problem and working with the Karush-Kuhn-Tucker conditions. As a payoff, we find that the dual poses a rather simple optimization problem that can be solved by the Frank-Wolfe algorithm.
