Abstract

This talk considers the problems of distributed online prediction and optimization. Each node in a network of processors receives a stream of data in an online manner. Before the next data point (or mini-batch) arrives, each processor must make a prediction; after the point arrives, the processor accrues a loss. The goal of the processors is to minimize the loss aggregated across the network. We propose a consensus-based distributed optimization method for fitting, online, the model used to make the predictions. After observing each data point, nodes individually make gradient-descent-like adjustments to their model parameters, and then consensus iterations are performed to synchronize the models across the nodes. We prove that the proposed distributed method achieves the optimal centralized regret bound when the loss function has Lipschitz-continuous gradients, and that the amount of communication required depends on the network topology in the usual way, through the second-smallest eigenvalue of the graph Laplacian. This is joint work with Konstantinos Tsianos.
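
To make the "local gradient step, then consensus averaging" structure concrete, here is a minimal sketch of that idea, not the speakers' implementation: the loss (least squares), the ring topology, the Metropolis consensus weights, the step-size schedule, and the number of consensus rounds are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, T = 8, 5, 200
w_true = rng.normal(size=dim)                 # ground-truth model for the synthetic streams

# Ring topology with Metropolis consensus weights (symmetric, doubly stochastic).
P = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
        P[i, j] = 1.0 / 3.0                   # 1 / (max(deg_i, deg_j) + 1), degree 2 on a ring
    P[i, i] = 1.0 - P[i].sum()

x = np.zeros((n_nodes, dim))                  # each node's current model parameters
cum_loss = 0.0

for t in range(1, T + 1):
    # Each node observes one data point (a_i, b_i) from its stream.
    A = rng.normal(size=(n_nodes, dim))
    b = A @ w_true + 0.1 * rng.normal(size=n_nodes)

    # Node i predicts with its current model, then accrues squared loss.
    preds = np.einsum("id,id->i", A, x)
    cum_loss += 0.5 * np.sum((preds - b) ** 2)

    # Local gradient-descent-like step on the just-observed loss.
    step = 1.0 / np.sqrt(t)                   # illustrative diminishing step size
    grads = (preds - b)[:, None] * A          # gradient of 0.5 * (a^T x - b)^2
    x = x - step * grads

    # Consensus iterations: a few rounds of weighted neighbor averaging.
    for _ in range(2):                        # number of rounds is an assumption
        x = P @ x

print("average loss per node per round:", cum_loss / (n_nodes * T))
```

In this sketch the consensus matrix P mixes information only between ring neighbors; how fast the nodes' models agree, and hence how much communication is needed, is governed by the spectral properties of the network, which is the role the second-smallest eigenvalue of the graph Laplacian plays in the result described above.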

Video Recording