Abstract

In this talk I describe some of our work on coded matrix multiplication when an approximate result will suffice. We are motivated by potential applications in optimization and learning where an exact matrix product is not required and one would prefer a lower-fidelity result sooner. We are also motivated by developing rate-distortion analogs for coded computing, and were particularly inspired by a recent JSAIT paper by Jeong et al. on "epsilon-coded computing," wherein the authors show that they can recover an intermediate, approximate result halfway to exact recovery. In this talk I'll build on that prior work to show how to realize schemes with multiple intermediate stages of recovery en route to exact recovery. In analogy to successive refinement in rate distortion, we term this "successive approximation" coding.
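As a rough illustration of the successive-approximation idea (not the specific scheme from the talk or the Jeong et al. paper), here is a minimal sketch: A is split row-wise into k blocks and encoded across n workers with a random linear code, an illustrative assumption standing in for the structured codes used in practice. Decoding whatever m worker results have returned with a minimum-norm estimate yields an approximation of A @ B whose fidelity improves with each additional result and becomes exact once m ≥ k.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: C = A @ B, with A split row-wise into k blocks.
m_rows, inner, n_cols = 60, 40, 30
k, n = 4, 6                                   # k source blocks, n workers
A = rng.standard_normal((m_rows, inner))
B = rng.standard_normal((inner, n_cols))

blocks = np.stack(np.split(A, k, axis=0))     # shape (k, m_rows//k, inner)
G = rng.standard_normal((n, k))               # random generator matrix (assumption)
coded = np.einsum('ij,jrc->irc', G, blocks)   # one coded block per worker
results = coded @ B                           # each worker's product

C_exact = A @ B
for m in range(1, n + 1):                     # m workers have returned so far
    Gm = G[:m]
    # Minimum-norm estimate of the k uncoded block products from m results;
    # for m >= k this decodes exactly (Gm has full column rank w.h.p.).
    est = np.einsum('ij,jrc->irc', np.linalg.pinv(Gm), results[:m])
    C_hat = est.reshape(m_rows, n_cols)
    err = np.linalg.norm(C_hat - C_exact) / np.linalg.norm(C_exact)
    print(f"{m} workers returned: relative error {err:.3f}")
```

Running the sketch prints a relative error that shrinks as each worker returns and drops to numerical zero at m = k, giving many intermediate fidelity levels en route to exact recovery rather than a single halfway point.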

Video Recording