Abstract

The line of work studying uniform hardness vs randomness deduces derandomization that succeeds on average (rather than in the worst case) from relatively mild hardness assumptions (i.e., worst-case hardness for probabilistic algorithms). In contrast to the better-known variants of hardness vs randomness, the results in this line of work seem far from optimal: the required hypotheses are believed to be too strong (e.g., prior to this work, hardness in SPACE[n] rather than in E = DTIME[2^{O(n)}]), and on top of that, the conclusions are also believed to be too weak (e.g., prior to this work, derandomization in superpolynomial time).

I'll present a recent work that makes significant progress on this front, simultaneously bypassing several classical obstacles. For example, we deduce polynomial-time average-case derandomization of RP from the following hardness assumption: there is a function computable by logspace-uniform circuits of size 2^{O(n)} and depth 2^{o(n)} that is hard for probabilistic algorithms running in time 2^{eps*n}, for some eps > 0. Along the way to this result, we prove an optimal worst-case-to-average-case reduction, by uniform probabilistic algorithms, for computing problems solvable in linear space.
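To make the shape of this result concrete, here is a rough paraphrase of the hypothesis and conclusion above in standard notation (BPTIME denotes probabilistic time; the precise formulation, quantifiers, and notion of average-case success are as in the paper rather than as written here):

\[
\Big( \exists\, f \text{ computable by logspace-uniform circuits of size } 2^{O(n)} \text{ and depth } 2^{o(n)},\ \exists\, \varepsilon > 0:\ f \notin \mathsf{BPTIME}\!\big[2^{\varepsilon n}\big] \Big) \;\Longrightarrow\; \text{RP can be derandomized in polynomial time on average.}
\]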

In addition, we prove that polynomial-time average-case derandomization of RP follows from weak fine-grained hardness assumptions. For example, such derandomization follows if, for every sufficiently large constant k, counting k-cliques is hard for probabilistic algorithms running in time n^{log*(k)}.
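Similarly, a rough paraphrase of the fine-grained statement (writing #k-Clique for the problem of counting k-cliques, with BPTIME again shorthand for probabilistic time, and with the exact formulation deferred to the paper):

\[
\Big( \forall \text{ sufficiently large constant } k:\ \#k\text{-Clique} \notin \mathsf{BPTIME}\!\big[n^{\log^{*}(k)}\big] \Big) \;\Longrightarrow\; \text{RP can be derandomized in polynomial time on average.}
\]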

Based on joint work with Lijie Chen and Ron Rothblum.
