Abstract
In the age of big data, communication is sometimes more of a bottleneck than computation. The main model of distributed computation used for machine learning on big data is the map-reduce model, which gives the programmer a transparent way to scale computation up to a large number of processors. One drawback of the map-reduce model is its reliance on boundary synchronization and on shuffling. Both of these mechanisms create a communication bottleneck that becomes more severe as more computers are added to the cluster. In this talk I propose a new model of computation which I call "tell me something new" (TMSN). TMSN is an asynchronous model of computation that optimizes communication: rather than communicating at times dictated by a common clock, each processor broadcasts information only when it has something important to say. I show how TMSN can be used to distribute boosting, K-means++, and stochastic gradient descent. I will also show that TMSN is a natural fit for adaptive sensor networks.
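To make the broadcast rule concrete, here is a minimal single-process sketch of the idea, not the talk's actual protocol: each simulated worker updates a local estimate and broadcasts only when that estimate has drifted from its last shared value by more than a threshold. All names, the averaging-style task, and the threshold rule are illustrative assumptions.

```python
import random

def tmsn_run(num_workers=4, steps=100, threshold=0.5, seed=0):
    """Toy simulation of a 'tell me something new' broadcast rule."""
    rng = random.Random(seed)
    local = [0.0] * num_workers    # each worker's current local estimate
    shared = [0.0] * num_workers   # the last value each worker broadcast
    broadcasts = 0
    for _ in range(steps):
        w = rng.randrange(num_workers)        # some worker receives new data
        local[w] += rng.gauss(0.1, 1.0)       # and updates its local estimate
        # TMSN rule: speak only when you have something new to say
        if abs(local[w] - shared[w]) > threshold:
            shared[w] = local[w]
            broadcasts += 1
    return broadcasts, steps

b, s = tmsn_run()
print(f"{b} broadcasts for {s} local updates")
```

In this toy setting the number of broadcasts stays below the number of local updates, illustrating how a "something new" trigger replaces clock-driven synchronization; raising the threshold trades communication for staleness of the shared view.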