Summer Semester 2015
Algorithmic Learning Theory
Content
The main goal of algorithmic learning theory is to provide a framework for analyzing algorithms that, given a set of examples $(\mathbf{x}_i, y_i)_{i = 1}^{n}$, compute a function $f$ such that $f(\mathbf{x})$ is an effective estimate of the output $y$ when a new input $\mathbf{x}$ is given. The function $f$ is chosen from a space of functions, called the hypothesis space, encoding some a priori knowledge of the relationship between $\mathbf{x}$ and $y$.
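As a minimal illustration of this setup (a sketch only; the choice of hypothesis space, the data, and all names below are our own assumptions, not part of the seminar material), take the hypothesis space of linear functions $f(\mathbf{x}) = \mathbf{w}^{\top}\mathbf{x}$ and choose $\mathbf{w}$ by least squares on the examples:

```python
import numpy as np

# Hypothesis space: linear functions f(x) = w . x (all names here are
# illustrative). We pick the hypothesis that minimizes the squared error
# on the examples (x_i, y_i), then use it to estimate y for a new input.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # n = 100 examples, 3 features
w_true = np.array([1.0, -2.0, 0.5])            # unknown target relationship
y = X @ w_true + 0.1 * rng.normal(size=100)    # noisy observed outputs

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares choice of f
x_new = np.array([0.2, 0.1, -0.3])
print("estimated y for x_new:", x_new @ w_hat)
```

The topics below study this picture for much richer hypothesis spaces, such as formal languages, pattern languages, and neural networks.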
The goal of this seminar course is to understand some of the more important results from algorithmic learning theory. The objective is not to present formal mathematical proofs but to give a high-level description of some of the main ideas of this important area. Some of the topics that we plan to cover are:
- Language Identification in the Limit. This seminal work by E. Mark Gold initiated the field of algorithmic learning theory.
- Inductive Inference of Formal Languages from Positive Data. This is another early work (by D. Angluin) that deals with inferring formal languages from positive data.
- A Theory of the Learnable. Another seminal work, by L. Valiant, that introduces the Probably Approximately Correct (PAC) learning framework.
- Learning Pattern Languages: Average-Case Analysis. This set of results concerns the expected total learning time until convergence to a correct hypothesis.
- The Bayesian Learning Framework.
- Perceptron Learning (see the sketch after this list).
- Neural Networks and their Applications to Machine Learning.
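For the perceptron topic, here is a minimal sketch of the classical Rosenblatt update rule (a standard textbook formulation, not material from the talk; the data and names are illustrative):

```python
import numpy as np

def perceptron(X, y, epochs=100):
    """Classical perceptron for labels y in {-1, +1}: on each mistake,
    move the weight vector toward the misclassified example."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for x_i, y_i in zip(X, y):
            if y_i * (w @ x_i + b) <= 0:  # misclassified (or on the boundary)
                w += y_i * x_i            # Rosenblatt update
                b += y_i
                mistakes += 1
        if mistakes == 0:                 # all examples classified correctly
            break
    return w, b

# Tiny linearly separable example (AND-like data)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1, -1, -1, 1])
w, b = perceptron(X, y)
print(w, b, np.sign(X @ w + b))
```

On linearly separable data, the perceptron convergence theorem guarantees that this loop terminates after finitely many mistakes.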
Prerequisites
We expect participants to have some background in algorithms and discrete mathematics (including graph theory, linear algebra, and probability).
- Each participant will be assigned a topic, which they have to present during the seminar.
- Participants have to submit a report summarizing their study. Use this file.
Slides from the kick-off meeting
Schedule
Please note that, owing to the many holidays falling on Thursdays this semester, we will have two talks per day starting 18th June. Each talk should be around 35 minutes, with 10 minutes for questions.
- 30th April. Learning Languages in the Limit. Ahmet Altinbüken.
- 7th May. Finding Patterns Common to a Set of Strings. Michael Krause.
- 21st May. Learning One-Variable Pattern Languages Very Efficiently on Average, in Parallel, and by Asking Queries. Till Hofmann.
- 11th June. Bayesian Inference: An Introduction to the Principles and Practice in Machine Learning.
- 18th June.
- Talk 1: A Theory of the Learnable (PAC Learning).
- Talk 2: Learning DNF Expressions from Fourier Spectrum. Jan Dreier.
- 25th June.
- Neural Networks.
- 2nd July.
- Universal AI. Michael Vu.
- 9th July.
- Talk 1: Hopfield Networks.
- Talk 2: Learning Topic Models: Going Beyond SVD.