Abstract

"Kernelization" is an influential paradigm for attacking hard computational problems. Kernelization reductions efficiently preprocess their input to “shrink” it to a smaller, equivalent one. This can be a valuable first step before applying exhaustive-search techniques.

After giving the basic framework of kernelization, I’ll describe my contribution to a line of work showing limits to the power of kernelization for many NP problems (under complexity-theoretic assumptions). To this end, I show that if such problems have a sufficiently strong kernelization reduction, then the information bottleneck in the reduction can be exploited to give proofs of unsatisfiability for Boolean formulas.
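To convey the shape of such a statement (paraphrased with simplified parameters; the exact quantitative and probabilistic versions are in the paper), consider “AND-compression” of SAT: a polynomial-time reduction mapping many formulas to one short instance of an arbitrary language L.

```latex
% Statement template, simplified; not the paper's exact theorem.
% Each \phi_i is a formula of size at most n, t may be much larger than n,
% and L is an arbitrary language.
\[
  (\phi_1,\ldots,\phi_t) \;\longmapsto\; \psi,
  \qquad |\psi| \le \mathrm{poly}(n),
  \qquad \psi \in L \iff \text{every } \phi_i \in \mathrm{SAT}.
\]
```

Roughly speaking, the existence of such a reduction would imply NP ⊆ coNP/poly, that is, unsatisfiable formulas would have polynomial-size nondeterministic proofs (with polynomial advice), which in turn collapses the polynomial hierarchy.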

Based on: [A. Drucker, New Limits to Classical and Quantum Instance Compression. FOCS’12]
