Abstract

Randomized algorithms have shown great success in many large-scale applications. However, beyond its algorithmic advantages, randomization can also offer insight into many models in data science. In this talk, we discuss two such examples.

In the context of analyzing neural networks, we show how randomized analysis can be used to answer questions such as “why does training a neural network get harder as its depth increases?” and “what are potentially appropriate strategies for initializing the weights before training?” A small numerical sketch of the depth phenomenon appears below.
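
As a rough illustration of this phenomenon (a minimal sketch, not the randomized analysis presented in the talk), the following Python snippet tracks the magnitude of activations as a random input passes through a deep, randomly initialized ReLU network; the width, depth, and candidate initialization scales are arbitrary choices made only for this illustration.

```python
import numpy as np

# A minimal sketch (not the speaker's analysis): track the typical magnitude of
# activations as a random input passes through a deep, randomly initialized ReLU
# network. Depending on the variance of the initial weights, the signal either
# vanishes or explodes with depth, which is one concrete way to see why depth
# makes training harder and why the initialization scale matters.
# Width, depth, and the candidate scalings below are arbitrary illustrative choices.

rng = np.random.default_rng(0)
width, depth = 512, 50
x = rng.standard_normal(width)

for name, var in [("too small (1/n)", 1.0 / width),
                  ("He-style  (2/n)", 2.0 / width),
                  ("too large (4/n)", 4.0 / width)]:
    h = x.copy()
    for _ in range(depth):
        W = rng.standard_normal((width, width)) * np.sqrt(var)  # i.i.d. N(0, var) weights
        h = np.maximum(W @ h, 0.0)                               # ReLU activation
    print(f"{name}: mean squared activation after {depth} layers = {np.mean(h**2):.3e}")
```

With the 2/n scaling (as in He-style initialization for ReLU networks), the activation magnitude stays roughly constant with depth, whereas the smaller and larger scalings drive it toward zero or toward very large values.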

We then discuss an example in which this style of analysis helps in studying graph spectral clustering algorithms. In particular, we show that, under certain assumptions and given an existing clustering, one can, with high probability, efficiently and accurately perform an out-of-sample extension to observations not seen in the initial clustering procedure; a generic sketch of such an extension follows.
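
For context, the sketch below shows one generic way an out-of-sample extension can be carried out for spectral clustering, via a Nyström-style extension of the spectral embedding. It is not necessarily the procedure analyzed in the talk: the Gaussian kernel, the bandwidth `sigma`, the nearest-centroid assignment, and the helper names (`spectral_embedding`, `extend`) are assumptions made for this illustration.

```python
import numpy as np

# A minimal sketch of a generic Nystrom-style out-of-sample extension for spectral
# clustering; it is not necessarily the exact procedure analyzed in the talk.
# Given an existing spectral embedding and cluster labels, a new observation is
# embedded through its affinities to the original sample and assigned to the
# nearest cluster centroid. The Gaussian kernel, the bandwidth `sigma`, and the
# nearest-centroid assignment are assumptions made for this illustration.

def rbf_affinity(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def spectral_embedding(X, k, sigma=1.0):
    """Top-k eigenpairs of the normalized affinity matrix D^{-1/2} K D^{-1/2}."""
    K = rbf_affinity(X, X, sigma)
    deg = K.sum(axis=1)
    M = K / np.sqrt(np.outer(deg, deg))
    vals, vecs = np.linalg.eigh(M)               # eigenvalues in ascending order
    return vecs[:, -k:], vals[-k:], deg

def extend(x_new, X, deg, vecs, vals, sigma=1.0):
    """Nystrom extension: embed a new point from its affinities to the sample X."""
    k_new = rbf_affinity(x_new[None, :], X, sigma).ravel()
    k_new = k_new / np.sqrt(k_new.sum() * deg)   # approximate degree normalization
    return (k_new @ vecs) / vals                 # extended eigenvector coordinates

# Toy usage: two well-separated Gaussian blobs, then extend to an unseen point.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)), rng.normal(3.0, 0.3, (30, 2))])
vecs, vals, deg = spectral_embedding(X, k=2)
labels = (vecs[:, 0] > np.median(vecs[:, 0])).astype(int)   # crude split on 2nd eigenvector
centroids = np.vstack([vecs[labels == c].mean(axis=0) for c in (0, 1)])

x_new = np.array([2.9, 3.1])                                 # unseen observation
emb = extend(x_new, X, deg, vecs, vals)
print("assigned to cluster", int(np.argmin(((centroids - emb) ** 2).sum(axis=1))))
```

The design choice here is that the new observation never triggers a re-clustering: its embedding coordinates are obtained only from its affinities to the original sample, which keeps the extension cheap relative to recomputing the eigendecomposition.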

Video Recording