Parameterized Algorithms (WS 2011/12)
The lab course will be held
Friday 14:00-15:30, Room 6329 at the i9 institute (same building, top floor)
Many practical problems are NP-hard and are considered "intractable" by complexity theory, meaning that there is probably no polynomial time algorithm to solve the problem exactly on all instances.
Traditionally, one way to deal with this difficulty is to resort to approximate solutions while retaining a polynomial running time.
Sometimes, however, exact solutions are absolutely needed in practice, and tools like CPLEX as well as specialized optimization software show that hard problems can often be solved practically. One reason for this paradox might be that the hard instances that make the problem NP-hard are not among those that occur in practice, which is not very surprising if you have looked at a few NP-completeness proofs.
Classical complexity theory only takes notice of the length of an instance and measures the runtime or other resources as a function of this length. In parameterized complexity an additional parameter, which is a function of the instance, is used as well to measure how hard the instance at hand really is. The running time is then a function of the input length as well as of the parameter. A good parameterized algorithm has a running time that is polynomial in the input length and only moderately superpolynomial in the parameter. In that way the running time can be quite practical when restricted to inputs with a small parameter.
Often we can model problems using a parameter that is small for practical purposes and find an algorithm that is fast if the parameter is indeed small. Take for example the data cleaning problem: You have n data samples from some physical experiment. Due to measurement errors some data points make no sense. To be more precise: there can be a pair of data points that contradict each other. The goal is to delete as few data points as possible to get rid of all conflicts. As a parameterized problem we can state this as follows: The input consists of n data points and a number k. The parameter is k. The question is whether we can delete up to k data points to make the data consistent.
In practice n can be very big, but we expect k to be quite small. This problem is NP-complete, but we can easily find an algorithm that runs in time O(2^k*n), which is quite fast for practical values of k.
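The O(2^k · n) algorithm can be realized as a bounded search tree. Here is a minimal sketch (the function name and the representation of conflicts as a list of pairs are our own assumptions, not part of the lecture material): at least one point of every conflicting pair must be deleted, so we branch on the two choices, and the search tree has depth at most k.

```python
def can_clean(conflicts, k):
    """Return True if deleting at most k data points removes all conflicts.

    conflicts: list of pairs (u, v) of mutually contradicting data points.
    """
    if not conflicts:
        return True      # no conflicts left: the data is consistent
    if k == 0:
        return False     # conflicts remain, but no deletions are allowed
    u, v = conflicts[0]  # pick any unresolved conflict
    # branch 1: delete u and keep only the conflicts not involving u
    rest_u = [(a, b) for (a, b) in conflicts if u not in (a, b)]
    # branch 2: delete v
    rest_v = [(a, b) for (a, b) in conflicts if v not in (a, b)]
    return can_clean(rest_u, k - 1) or can_clean(rest_v, k - 1)
```

The tree has at most 2^k leaves and each node costs time linear in the number of conflicts, which gives the claimed running time. (This is exactly Vertex Cover on the graph whose edges are the conflicting pairs.)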
In this lecture you will learn the basic techniques for designing such parameterized algorithms and about the limits of the approach.
Prerequisites
Basic knowledge in the design of efficient algorithms and their analysis. A bachelor's degree in computer science or a related field.
Slides
Here are some of the slides we used in the lecture.
Bibliography
There are some good textbooks on parameterized algorithms that you can find in our library.
- R. G. Downey and M. R. Fellows. Parameterized Complexity. Springer-Verlag, 1999.
- J. Flum and M. Grohe. Parameterized Complexity Theory. Springer-Verlag, 2006.
- R. Niedermeier. Invitation to Fixed Parameter Algorithms. Oxford University Press, 2006.
Exercises and Solutions
- Exercise 01 Solution 01
- Exercise 02 Solution 02
- Exercise 03 Solution 03 Errata: in exercise H6, the set of forbidden subgraphs should contain three graphs. Besides the two provided ones, the cycle on five vertices also belongs to it.
- Exercise 04 Solution 04
- Exercise 05 Solution 05
- Exercise 06 Solution 06
- Exercise 07 Solution 07
- Exercise 08 Solution 08
- Exercise 09 Solution 09
- Exercise 10 Solution 10
- Exercise 11
- Exercise 12 Solution 12
The first exam will be held on February 08th, 9:00-11:30 in room 5056.
The repeat exam will be held on March 28th, 9:00-11:30 in the seminar room of i1 (4017)
For more information check out the CAMPUS entry. To see whether you are admitted to the exam, hash your matriculation number with MD5 and see whether the hash appears in the list below.
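The check can be done in one line, for example with Python's standard library (the matriculation number below is a made-up example, and the exact input format — plain digits, no leading zeros — is an assumption):

```python
import hashlib

# Hash a (hypothetical) matriculation number with MD5 and print the
# hex digest, which can then be looked up in the admission list.
matriculation = "123456"  # example only; use your own number
digest = hashlib.md5(matriculation.encode("ascii")).hexdigest()
print(digest)
```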