We discuss lower and upper bounds on exact algorithms for different graph problems.

### Monday, November 2nd, 2015

In this talk we discuss parameterized and promised streaming.

The first part of the talk focuses on algorithms for Max 2-CSP. A new lower bound for the algorithm by Scott and Sorkin (2007) shows that the existing running time analysis is tight. The lower bound seems to apply to a broad class of branching algorithms, but can be overcome by exploiting global properties of an instance. In particular, the running time upper bound has been improved (Gaspers, Sorkin, 2015) by using graph separators to guide the branching.

The second part of the talk discusses the time complexity of some enumeration problems. It surveys results for enumerating extremal vertex sets in graphs and presents new results for the number of minimal separators in graphs (Gaspers, Mackenzie, 2015) and the number of minimal transversals of rank-k hypergraphs (Cochefert, Couturier, Gaspers, Kratsch, 2015).

### Tuesday, November 3rd, 2015

This talk offers an introduction to the fine-grained complexity of counting problems. We review some known results from the polynomial time world, and we survey some known exact algorithms as well as hardness results under the counting version of the exponential time hypothesis (#ETH). Some problems we discuss are the permanent, computing the number of perfect matchings, counting satisfying assignments of k-CNF formulas, evaluating the chromatic and the Tutte polynomial, and counting solutions in constraint satisfaction problems (CSPs). On the hardness side, we delineate the block interpolation framework due to Curticapean (2015), which allows us to use polynomial interpolation to prove #ETH-hardness results.

The Subset Sum problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for Subset Sum by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-\delta)n})-time algorithm, with some constant \delta > 0. We report some partial progress on this question as outlined in http://arxiv.org/pdf/1508.
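The meet-in-the-middle idea behind the Horowitz–Sahni bound can be sketched as follows (an illustrative Python sketch, not the paper's implementation; splitting the input in half means each side contributes only 2^{n/2} subset sums, and the two sorted halves are matched with binary search):

```python
from bisect import bisect_left
from itertools import combinations

def subset_sum(nums, target):
    """Meet-in-the-middle: decide whether some subset of nums sums to target.

    Enumerates all subset sums of each half of the input (2^(n/2) each),
    sorts one side, and binary-searches it for complements of the other,
    giving O*(2^(n/2)) time overall.
    """
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(part):
        sums = set()
        for r in range(len(part) + 1):
            for combo in combinations(part, r):
                sums.add(sum(combo))
        return sums

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):
        rest = target - s
        i = bisect_left(right_sums, rest)
        if i < len(right_sums) and right_sums[i] == rest:
            return True
    return False
```

Note that the maximum bin size \beta studied below measures exactly how much collision there is among these 2^n subset sums.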

Specifically, we study Subset Sum parameterized by the maximum bin size \beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every \epsilon > 0 we give a truly faster algorithm for instances with \beta \leq 2^{(0.5-\epsilon)n}, as well as instances with \beta \geq 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/\log_2 t: if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance.

Joint work with Per Austrin, Petteri Kaski and Mikko Koivisto.

Given a collection of n subsets over a finite set of elements {1,2,…,m}, the goal of the Maximum k Subset Intersection problem is to select k subsets whose intersection has maximum cardinality. In this talk, we show that, under ETH, this problem has no f(k)·n^{o(\sqrt{k})}-time approximation algorithm with ratio n^{o(1/\sqrt{k})} for any computable function f.
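For contrast with the lower bound, the trivial n^{O(k)} baseline that the hardness result is measured against is easy to state (an illustrative Python sketch; names are my own):

```python
from itertools import combinations

def max_k_subset_intersection(sets, k):
    """Brute force for Maximum k Subset Intersection: try all C(n, k)
    choices of k of the input subsets and return the largest
    intersection cardinality, in n^{O(k)} time."""
    best = 0
    for combo in combinations(sets, k):
        inter = set.intersection(*map(set, combo))
        best = max(best, len(inter))
    return best
```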

We discuss several challenges concerning (missing) lower bounds for fixed-parameter tractability results for problems arising in voting and related areas.

An instance of the constraint satisfaction problem with $n$ variables over a domain of $d$ values can be solved by brute-force in $d^n$ steps (omitting a polynomial factor). In this talk, we investigate the existence of subexponential-time algorithms, that is, algorithms running in $d^{o(n)}$ steps, for various natural restrictions of the constraint satisfaction problem. We discuss both the constraint satisfaction problem in which all the constraints are given extensionally as tables, and that in which all the constraints are given intensionally in the form of global constraints. We provide tight characterizations of the subexponential-time complexity of the aforementioned problems with respect to several natural structural parameters, which allows us to draw a detailed landscape of the subexponential-time complexity of the constraint satisfaction problem. Our analysis provides results indicating whether and when one can significantly improve on the brute-force search approach for solving the constraint satisfaction problem.
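The $d^n$ brute-force baseline for extensionally represented constraints can be sketched as follows (an illustrative Python sketch; the encoding of a constraint as a scope/table pair is my own choice):

```python
from itertools import product

def csp_brute_force(n, domain, constraints):
    """Brute-force CSP solver: try all d^n assignments.

    `constraints` is a list of (scope, table) pairs, where `scope` is a
    tuple of variable indices and `table` is the set of allowed value
    tuples -- the extensional (table) representation discussed above.
    Returns a satisfying assignment, or None if there is none.
    """
    for assignment in product(domain, repeat=n):
        if all(tuple(assignment[v] for v in scope) in table
               for scope, table in constraints):
            return assignment
    return None
```

The subexponential-time question asks when the `product` loop above can be replaced by something running in $d^{o(n)}$ steps.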

### Wednesday, November 4th, 2015

In a vertex subset problem we are given as input a universe U of size n, and a family F of subsets of the universe defined implicitly from the input. The task is to find a subset S in F of smallest possible size. For example, the Vertex Cover problem is a vertex subset problem where the input is a graph G, the universe is the vertex set of G, and the family F is the family of all vertex covers of G. Here a vertex set S is a vertex cover of G if every edge of G has at least one endpoint in S. Many problems, such as Vertex Cover, Feedback Vertex Set, Hitting Set and Minimum Weight Satisfiability, can be formalized as vertex subset problems. The trivial algorithm for such problems runs in time 2^n. We show that (essentially) any vertex subset problem that admits an FPT algorithm with running time c^k n^{O(1)}, where c is a constant and k is the size of the optimal solution, also admits an algorithm with running time (2-1/c)^n. In one stroke this theorem improves the best known exact exponential-time algorithms for a number of problems, and gives tighter combinatorial bounds for several well-studied objects. The most natural variant of our algorithm is randomized; we also show how to obtain a deterministic algorithm with the same running time bounds, up to a sub-exponential factor in the running time. Our de-randomization relies on a new pseudo-random construction that may be of independent interest.

(Joint work with Serge Gaspers, Fedor Fomin and Saket Saurabh)
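The randomized scheme can be illustrated for Vertex Cover, whose classic branching algorithm runs in 2^k n^{O(1)} (so c = 2): guess a part X of the solution at random, then extend it with the FPT algorithm on what X leaves uncovered. The sketch below is illustrative only; the choice of t, k and the number of repetitions needed to reach the (2-1/c)^n bound comes from the analysis in the paper and is left here as free parameters.

```python
import random

def vc_branch(edges, k):
    """Classic 2^k branching for Vertex Cover: return a cover of size <= k
    of the given set of edges (as 2-tuples), or None if none exists."""
    if not edges:
        return set()
    if k == 0:
        return None
    u, v = next(iter(edges))
    for w in (u, v):  # one endpoint of any edge must be in the cover
        rest = {e for e in edges if w not in e}
        sub = vc_branch(rest, k - 1)
        if sub is not None:
            return sub | {w}
    return None

def monotone_local_search(vertices, edges, t, k, tries):
    """One round of the randomized scheme (illustrative sketch): sample a
    size-t candidate part X of the cover uniformly at random, then use the
    c^k branching algorithm to extend X on the edges it leaves uncovered."""
    for _ in range(tries):
        X = set(random.sample(sorted(vertices), t))
        uncovered = {e for e in edges if e[0] not in X and e[1] not in X}
        ext = vc_branch(uncovered, k)
        if ext is not None:
            return X | ext
    return None
```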

The study of counting problems has become a classical subfield of computational complexity theory since Valiant's seminal paper introduced the complexity class #P and proved that counting perfect matchings is complete for this class. Since then, counting problems have been refined along various dimensions, including approximate counting, counting on restricted graph classes, and counting modulo a fixed number. In this talk, we consider some of the most recent refinements, namely, the parameterized complexity and the exponential-time complexity of counting problems.

First, we will consider various parameterizations of the problem of counting perfect matchings, together with lower bounds under the counting exponential-time hypotheses #ETH and #SETH.

Then we turn our attention to counting general subgraphs H in a host graph G, parameterized by the size of H. This gives rise to the problems #Sub(C) for fixed graph classes C: For inputs H and G, where H is from C, we wish to count H-copies in G. We show that #Sub(C) is either polynomial-time solvable or #W[1]-complete.
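A brute-force counter for H-copies makes the gap between f(k)·n^{O(1)} and the n^{O(k)} baseline concrete (an illustrative Python sketch; it counts injective edge-preserving maps, so to count unlabeled copies one would divide by |Aut(H)|):

```python
from itertools import permutations

def count_sub_copies(H_vertices, H_edges, G_adj):
    """Brute-force #Sub baseline: count injective maps from V(H) into the
    host graph G (given as an adjacency dict) that preserve every edge of
    H, in n^{O(k)} time for k = |V(H)|."""
    G_nodes = list(G_adj)
    count = 0
    for image in permutations(G_nodes, len(H_vertices)):
        phi = dict(zip(H_vertices, image))
        if all(phi[v] in G_adj[phi[u]] for u, v in H_edges):
            count += 1
    return count
```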

Finally, we present block interpolation, a general framework for proving lower bounds under #ETH. This gives tight lower bounds for counting (perfect) matchings, vertex-covers, evaluating the Tutte polynomial, and various other problems.

### Thursday, November 5th, 2015

In this talk we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine nicely with approximation algorithms and heuristics. The key new definition is that of a polynomial size α-approximate kernel.

Narrow sieves, representative sets and divide-and-color are three breakthrough techniques related to color coding, which led to the design of extremely fast parameterized algorithms. In this talk, I will discuss the power and limitations of these techniques. I will also briefly address some recent developments related to these techniques, including general schemes for mixing them.

Dynamic programming is often useful for speeding up FPT algorithms. However, it often uses a lot of memory. We show that this high space complexity is unavoidable by proving tight lower bounds for several problems.

No abstract available.

In the Graph Motif problem, we are given as input a vertex-colored graph $H$ (the host graph) and a multiset of colors $M$ (the motif). Our task is to decide whether $H$ has a connected set of vertices whose multiset of colors exactly agrees with $M$.

The worst-case time complexity of the Graph Motif problem is well-understood in a fairly tight sense:

1. Assuming that the host graph has $m$ edges and that the motif has size $k$, there exists a randomized algorithm that solves the problem in time $O(2^k k^2 (\log k)^2 m)$.

2. Unless there is a breakthrough in the time complexity of the Set Cover problem, the Graph Motif problem does not admit an algorithm that runs in time $O^*((2-\epsilon)^k)$ for any constant $\epsilon>0$.
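For concreteness, the problem definition can be pinned down with a brute-force checker (an illustrative Python sketch that is exponential in the host graph size, unlike the $O(2^k \cdot \mathrm{poly})$ algorithm above):

```python
from collections import Counter
from itertools import combinations

def graph_motif(adj, color, motif):
    """Brute-force Graph Motif: try every vertex set of size |M| in the
    vertex-colored host graph (adjacency dict `adj`, color dict `color`)
    and test connectivity plus color-multiset equality with the motif."""
    k = len(motif)
    target = Counter(motif)
    for S in combinations(adj, k):
        if Counter(color[v] for v in S) != target:
            continue
        # check that S induces a connected subgraph, via DFS within S
        S_set, seen, stack = set(S), {S[0]}, [S[0]]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w in S_set and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen == S_set:
            return True
    return False
```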

This talk reviews theory and engineering effort that resulted in an open-source algorithm implementation that scales in practice (as long as $k$ remains small) to graphs with hundreds of millions of edges on a single compute node.

Joint work with Andreas Björklund, Łukasz Kowalik, and Juho Lauri.

### Friday, November 6th, 2015

We give improved deterministic algorithms solving MAX-SAT on instances with cn clauses, achieving running time 2^{n-\Omega(n/c)} with a polynomial-space algorithm, and running time 2^{n-\Omega(n/\sqrt{c})} with an exponential-space algorithm.

We discuss applications of the ETH to obtain tight hardness results for a range of hard graph problems, such as vertex-partitioning problems.

In this talk, I will explain the miniaturization mapping between parameterized problems and classical problems, and show that it establishes an equivalence not only between parameterized time complexity and exponential time complexity, but also between parameterized space complexity and the so-called linear space complexity. Many lower bounds in parameterized complexity can therefore be translated to classical complexity, and vice versa.

In this talk, I will report satisfiability algorithms for (1) systems of degree-k polynomial equations over finite fields, (2) systems of polynomial equations over GF(2), and (3) bounded-depth unbounded-fan-in circuits with AND, OR, NOT, and MOD p gates (ACC^0[p] circuits). Problems (1), (2) and (3) generalize the satisfiability problems of k-CNF, CNF and AC^0, respectively, and the running time of each algorithm is still almost the same as that of the current best algorithm for the corresponding problem.

Joint work with Daniel Lokshtanov, Ramamohan Paturi and Ryan Williams.
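The exhaustive 2^n baseline for problem (2) is easy to state (an illustrative Python sketch; the encoding of each polynomial as a list of monomials, with the empty monomial standing for the constant 1, is my own choice):

```python
from itertools import product

def gf2_system_sat(n, polys):
    """Brute-force satisfiability for a system of polynomial equations
    p(x) = 0 over GF(2).  Each polynomial is a list of monomials, each
    monomial a tuple of variable indices (the empty tuple is the
    constant 1); a polynomial evaluates to the XOR of its monomials.
    Tries all 2^n assignments and returns a solution, or None."""
    for x in product((0, 1), repeat=n):
        if all(
            sum(all(x[i] for i in mono) for mono in poly) % 2 == 0
            for poly in polys
        ):
            return x
    return None
```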

Channel Assignment is the problem of assigning frequencies (integer numbers) to radio emitters (vertices of a graph) such that the emitters do not interfere with each other (weights on edges represent minimum allowed frequency differences) and the span of the assigned frequencies is minimized.

Subgraph Isomorphism is the well-known problem of checking whether one graph is isomorphic to a subgraph of another.

Both problems admit brute-force O^*(n!)-time algorithms, and in both cases this is optimal. It turns out that the lower bound proofs for these two quite unrelated problems can share some techniques.