Abstract
Stability and regularization are key to successfully learning from sampled, noisy data. In this talk we show how different paradigms, beyond penalization, can be used to design learning algorithms that ensure stability and hence generalization. We first illustrate our approach on the problem of supervised function estimation and then on unsupervised support estimation. The different algorithms can be studied within a unified spectral regularization framework. Their analysis combines classical concepts from inverse problems and signal processing with concentration-of-measure results. Our study highlights connections between numerical and statistical stability.
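To give a flavor of the spectral regularization viewpoint, here is a minimal sketch (not the talk's actual algorithms) of how several regularization schemes for kernel regression can be written as applying a filter function to the spectrum of the empirical kernel matrix, with Tikhonov and spectral cut-off as two instances. The helper name `spectral_filter_fit` and the specific filter choices are illustrative assumptions.

```python
import numpy as np

def spectral_filter_fit(K, y, lam, filter_name="tikhonov"):
    """Kernel regression via spectral filtering (illustrative sketch).

    Different choices of the filter g_lam applied to the spectrum of
    K/n recover different regularization schemes in one framework.
    """
    n = len(y)
    # Eigendecomposition of the empirical kernel operator K / n
    sigma, U = np.linalg.eigh(K / n)
    if filter_name == "tikhonov":
        # Tikhonov / ridge regression: g_lam(sigma) = 1 / (sigma + lam)
        g = 1.0 / (sigma + lam)
    elif filter_name == "cutoff":
        # Spectral cut-off (truncated SVD): invert only directions
        # with eigenvalue at least lam; set the rest to zero
        g = np.where(sigma >= lam, 1.0 / np.maximum(sigma, lam), 0.0)
    else:
        raise ValueError(f"unknown filter: {filter_name}")
    # Coefficients c such that the estimator is f(x) = sum_i c_i k(x, x_i);
    # for Tikhonov this reproduces c = (K + n*lam*I)^{-1} y
    return U @ (g * (U.T @ y)) / n
```

Under this view, changing the filter changes the numerical scheme while the statistical analysis proceeds uniformly, which is one way the connection between numerical and statistical stability can be made concrete.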