Description

Suppose x is an unknown n-bit string and the algorithm has access to random traces of x -- here each trace is generated by deleting each bit of x independently with some fixed probability (say 0.8). The task is to reconstruct x. Despite much interest in this problem, even its sample complexity remains unresolved. In particular, the best known algorithm has exp(n^{1/3}) sample and time complexity, whereas the best known lower bound on sample complexity is barely superlinear. In light of the state of the art for worst-case reconstruction, there has also been extensive work on the average-case version of the problem -- i.e., when the unknown x is randomly generated. For average-case trace reconstruction, the state of the art is a linear-time algorithm with exp((log n)^{1/3}) sample complexity.
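
As a concrete illustration of the trace model (not taken from the work itself), here is a minimal Python sketch of the deletion channel: each bit of x survives independently with probability 0.2 and is deleted otherwise. All names are illustrative.

```python
import random

def draw_trace(x, deletion_prob=0.8):
    """One trace of x: delete each bit independently with probability deletion_prob."""
    return [b for b in x if random.random() >= deletion_prob]

# Example: a few independent traces of a fixed (here known) string x.
x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
for _ in range(5):
    print(draw_trace(x))
```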

In this work, we investigate the complexity of trace reconstruction in a setting that bridges worst-case and average-case analysis. Namely, in the smoothed analysis model of Spielman and Teng, we show that trace reconstruction has polynomial time and sample complexity. Key to this result is a new (polynomial-interpolation-based) procedure for computing the frequencies of short subwords of x from its traces.
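
To pin down the target statistic, the sketch below computes subword frequencies directly from a known string, assuming "subword" means a contiguous length-k block. It only illustrates the quantity being estimated; it is not the polynomial-interpolation-based procedure described above, which has access only to traces rather than to x itself.

```python
from collections import Counter
from itertools import product

def subword_frequencies(x, k):
    """Fraction of length-k windows of x equal to each length-k bit pattern."""
    windows = len(x) - k + 1
    counts = Counter(tuple(x[i:i + k]) for i in range(windows))
    return {w: counts.get(w, 0) / windows for w in product((0, 1), repeat=k)}

x = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
for w, freq in subword_frequencies(x, 2).items():
    print(w, freq)
```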

Joint work with Xi Chen, Chin Ho Lee, Rocco Servedio and Sandip Sinha.
