Abstract

Classical results, starting with [Lipton92], showed that better error-correcting codes can be constructed under the assumption that the adversary is computationally bounded. Specifically, such codes can achieve rates beyond what is possible information-theoretically (i.e., against a computationally unbounded adversary). In this talk, I will present new results in this area, together with connections to multi-input correlation intractable hash functions.

I will also briefly mention some results in complexity and derandomization. Specifically, I will discuss recent progress on constructing seedless extractors for samplable distributions and their relationship to new notions of hardness in complexity theory.